BRN Discussion Ongoing

Frangipani

Regular
Speaking of your legendary scroll - I’ve been meaning to ask you, what was it that ultimately convinced you to include Ericsson in your list shortly after posting the above? 🤔

To me, this is a little premature, as I understood those circulating lists to be about companies and institutions BrainChip has verifiably been engaged with, not about companies or institutions whose researchers may have experimented with AKD1000 without BrainChip even being aware of it (possibly finding out from us here on TSE once two or more of the 1000 Eyes have spotted a publication). That list would evidently be much, much longer!

Unfortunately, FF’s question about Ericsson went unanswered during the recent Virtual Investor Roadshow, but he had nevertheless included Ericsson in his list and has been pushing this connection very hard ever since. Mind you, I am not saying at all they couldn’t be behind one of the NDAs, but currently there is not enough evidence for me to add them to one of those lists.

Personally, I’d much prefer to place Ericsson under the iceberg waterline for the time being, as we simply don’t know whether those Ericsson researchers who published that often quoted December 2023 paper (Towards 6G Zero-Energy Internet of Things…) have been engaged with our company in any official way.

Especially since Ericsson has been closely collaborating with Intel - they even established a joint lab in Santa Clara, CA in 2022.

https://www.ericsson.com/en/news/20...ach-new-milestone-with-tech-hub-collaboration

“The Ericsson-Intel Tech Hub has established itself as an incubator for cutting-edge advancements and new hardware technology exploration, achieving one milestone after another. Celebrating a series of firsts – including the recent successful Cloud RAN call using Intel’s future Xeon processor, codenamed Granite Rapids-D – the hub has been instrumental in the development of technologies that help service providers build open, resilient, sustainable and intelligent mobile networks.”

Around the same time that some Ericsson researchers were playing with Akida last year (as is evident from the December 2023 publication @Fullmoonfever shared with us on Boxing Day), at least one other Ericsson employee, a Senior Researcher for Future Computing Platforms, was very much engaged with Intel and gave a presentation on “Neuromorphic Optimisers in Telecommunications Networks” using Loihi in July 2023.

View attachment 60752
View attachment 60753

View attachment 60768

And a couple of weeks ago, Intel published a podcast with Mischa Dohler, VP of Emerging Technologies at Ericsson in Silicon Valley and formerly a university professor at King’s College London, who was the Chair of the Wireless Communications Department there as well as Director of their Centre for Telecommunications Research.

He was already involved with Ericsson while still in the UK:

View attachment 60765


In this podcast, shared here on TSE a couple of times in recent weeks, Mischa Dohler - amongst other things - shared his thoughts on neuromorphic and quantum computing, and while it was a great endorsement of the benefits of neuromorphic technology in general, he was rather vague about the scope of its future integration at Ericsson.

See the podcast video in Bravo’s link:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-418271

And here you can read the transcript:


Camille Morhardt 15:39

Explain neuromorphic computing to us.

Mischa Dohler 15:42

It’s an entirely new compute paradigm, right? So in the computer science world, we talk about a Von Neumann architecture. So Von Neumann introduced that architecture, saying, “Hey, let’s have the compute engine, which we call CPU these days, decoupled from the memory where you store your information, and then connect it with a little bus there.” And that takes a lot of time to actually get this information forth and back between memory and compute. It costs a lot of energy.

Now, along comes neuromorphic, where you have actually completely new materials, which allow you to put computing and memory in the very same instance. So you don’t need to ship all that information forth and back, you can do it in many ways. One way is just to use very new material, kind of meta materials to make that happen. And it turns out by doing this, you save a lot of energy, because you know, you can suddenly just maintain part of the little chip infrastructure, which you need to do certain calculus rather than keeping everything powered at the level of ones and zeros as we deal with our traditional infrastructure.

So it turns out that bringing this memory and compute together, we save a lot of energy. Then people came along said, “Hey, why don’t we build entirely new ways of calculating things?” And the neuromorphic compute fabric allows us to do operations without using energy for multiplications. And multiplications, we need that a lot. Right? So we roughly have additions and multiplications. Now multiplications take about a 10th of the energy today in neuromorphic. Put it all together, and suddenly very complex operations like AI consume a million times less energy than our traditional CPU fabric and GPU fabric. And everyone was, “Hey, why don’t we use that?” And everybody got very excited about this, of course, loads of technology challenge this, like the very early kind of CPU years in a way.

But you know, companies like Intel, really pushing this very hard and as a great fabric, and other companies out there. And I’m trying to understand, where are we commercially? Would that make sense to implement that? You know, and our gear, we’ll have 6G gear, which we’ll have by then at the end of this decade.

Camille Morhardt 17:56

So how does neuromorphic computing roll into 6G?

Mischa Dohler 18:00
So we still don’t know; we’re still investigating as a community. I’m not saying Ericsson per se, but as a community trying to understand where will it be. What we are starting to see, 6G will really be about a lot more antenna elements. So we call this ultra-massive MIMO, whatever you want to call it at the moment, we may have 64 elements on the roof. And then maybe you know, you have like six maybe in the phone, “Hey, what if we scale this up to 1,000 antenna elements on the roof?” And then suddenly, you start thinking, “Hey, you know, if I have to power all these 1,000 elements, and connect all the processing, in addition, my bandwidths are getting wider. More users are coming on. My compute energy, you know, will just go through the roof.” And we’ve done the calculus, it’s really crazy. So there’s no way we can do that. So we need new ways of dealing with that energy increase. Neuromorphic comes along. It’s one of the contenders. So it’s not the only one. There’s other stuff as well, we’re looking at. But neuromorphic essentially gives you the ability to really bring down this energy envelope, whilst not jeopardizing the performance on that.

So it turns out that neuromorphic cannot be used for all algorithmic families. So we’re trying to understand what can be done, what cannot be done. Should it be really integral as part of our stack to process data? Or should it sit on the side as we like to do it today?
You know, this is publicly available, then we just call certain acceleration functions when we need it, and then continue with processing. So a lot of question marks, and that makes it so exciting because we need to take very difficult strategic decisions very quickly, to make sure we remain competitive towards the end of this decade.

(…)


So quantum is just such a fascinating fabric, and it’s all evolving. The only downside it has at the moment is it’s extremely energy consuming. So contrast that with neuromorphic, which consumes almost zero, quantum is you need to cool it bring it down. So we need a lot of innovation there. And we also need to make sure that if we use a quantum computer, the problem is so hard that we would need like trillions of years to do it on a normal fabric because then the whole energy story makes sense. It needs to be sustainable.

Camille Morhardt 22:36

It sounds like rather than a consolidation of compute technologies, you’re looking at a heterogeneous mix of compute technologies, depending on the function or the workload, is that accurate?

Mischa Dohler 22:46

Absolutely. It’s a very great observation, Camille. That’s exactly what we’re looking at. And you know, specifically in the heterogeneous setting of quantum and traditional compute, we just published a blog on how we use quantum and traditional compute as a hybrid solution to find the optimum antenna tilt. It’s a very hard problem when you have loads of antennas, many users; we work with heuristics so far, so heuristics are algorithmic approaches, which aren’t optimum, but try to get as close as to the optimum, we can do that. With a quantum solver, suddenly, you get much closer to the true optimum in a much quicker time, and we’re able to do that much better than just a non-heterogeneous solution.

Camille Morhardt 23:24

If you had just a huge amount of funding to look at something that’s of personal interest to you, what would it be?

Mischa Dohler 23:30

You know, I would probably try a little bit what we tried to do in London, push the envelope on both, really. So you know, try to understand how can we bring the innovative element of technology together with a creative element of the arts? And really get both communities start thinking, how can they disrupt their own ecosystems; it’s a very general view, but you know, usually it comes out when you bring them together. And we have new stuff coming out now in technology, and I think this envelope between accelerated fabric like neuromorphic, and quantum is one, AI is another and robotics, is yet another, specifically soft robotics. So it’s not only about hard robots walking but actually soft robots which are quite useful in medicine, many other applications. So that’s the technology envelope, and the connectivity connecting it all–5G, 6G, etc.

And then on the artistic side, we have new ways of procuring the arts–whether you use let’s say, glasses, new ways of stages, haptic equipment, you know, creating immersive experiences, creating emotional bonds, creating a digital aura in arts, which we couldn’t do before, right. So before you would go in an exhibition, there is nothing before going exhibition, great experience going out of the exhibition, and then you forget about it. So building these digital aura trails, I think, you know, this is where technology can really help.


So loads of opportunities there. It would really bring arts back into the curriculum, bring it back into schools, bring it back into universities, make it an integral part of our educational process. That’s really what I’d love to see.

Camille Morhardt 24:58

What is the soft robot?

Mischa Dohler 25:00

A soft robot is a robot which mimics the way how, let’s say an octopus walks. It’s very soft. There’s no hard element there. And we love to explore that world because, you know, nobody can be very close to real big robots. So I’m not sure you’ve ever been close to one. I had one at King’s, an ABB. These are beasts, these are industrial things, you know, you don’t really trust it. If somebody hacks in there or something happens, you know, they just swing and you’re just basically toast. But the soft robot can really enable that coexistence I think with humans, use it in surgery. So the ability to control soft tissue, you know, like an octopus, I think, or a snake. That’s the type of inspirational biological phenomena we use to design that.

Camille Morhardt 25:42

Well, Mischa, everything from octopuses to Apple Vision Pro to neuromorphic computing. Thank you so much for your time today. It’s been a fascinating conversation.

Mischa Dohler 25:53

My pleasure, Camille. Thank you for having me.


[I happened to notice the transcript is not 100% accurate, eg it misses him saying “So it’s still a big question mark.” at 17:24 min before continuing with “So we still don’t know”…
Oh, and I thought his remarks about soft robots were very intriguing! ]


View attachment 60762 View attachment 60763

Here is a follow-up of my post on Ericsson (see above).

When I recently searched for info on Ericsson’s interest in neuromorphic technology besides the Dec 2023 paper, in which six Ericsson researchers described how they had built a prototype of an AI-enabled ZeroEnergy-IoT device utilising Akida, I came across not only an Ericsson Senior Researcher for Future Computing Platforms who was very much engaged with Intel’s Loihi (he even gave a presentation at the Intel Neuromorphic Research Community’s Virtual Summer 2023 Workshop), but also an Intel podcast with Ericsson’s VP of Emerging Technologies, Mischa Dohler.

I also spotted the following LinkedIn post by a Greek lady, who had had a successful career at Ericsson spanning more than 23 years before taking the plunge into self-employment two years ago:

F795C7C1-84CE-4BCC-B837-444720BB3FF2.jpeg


C686B5DF-3FC4-44E1-9EE6-F5C067955E51.jpeg



361eabcc-0b13-4ea1-a40a-fe23e802b8a1-jpeg.61045



Since Maria Boura concluded her post by sharing the very Intel podcast with Mischa Dohler mentioned earlier, my gut feeling was that the Ericsson 6G researchers she had talked to at MWC (Mobile World Congress) 2024 in Barcelona at the end of February had most likely been collaborating with Intel. A quick Google search, however, didn’t come up with any results at the time I first saw that post of hers back in March.

Then, last night, while reading an article on Intel’s newly revealed Hala Point (https://www.eejournal.com/industry_...morphic-system-to-enable-more-sustainable-ai/), there it was - the undeniable evidence that those Ericsson researchers had indeed been utilising Loihi 2:

“Advancing on its predecessor, Pohoiki Springs, with numerous improvements, Hala Point now brings neuromorphic performance and efficiency gains to mainstream conventional deep learning models, notably those processing real-time workloads such as video, speech and wireless communications. For example, Ericsson Research is applying Loihi 2 to optimize telecom infrastructure efficiency, as highlighted at this year’s Mobile World Congress.”

The blue link connects to the following article on the Intel website, published yesterday:


Ericsson Research Demonstrates How Intel Labs’ Neuromorphic AI Accelerator Reduces Compute Costs​

Philipp_Stratmann
Employee
04-17-2024
Philipp Stratmann is a research scientist at Intel Labs, where he explores new neural network architectures for Loihi, Intel’s neuromorphic research AI accelerator. Co-author Péter Hága is a master researcher at Ericsson Research, where he leads research activities focusing on the applicability of neuromorphic and AI technologies to telecommunication tasks.

Highlights
  • Using neuromorphic computing technology from Intel Labs, Ericsson Research is developing custom telecommunications AI models to optimize telecom architecture.
  • Ericsson Research developed a radio receiver prototype for Intel’s Loihi 2 neuromorphic AI accelerator based on neuromorphic spiking neural networks, which reduced the data communication by 75 to 99% for energy efficient radio access networks (RANs).
  • As a member of Intel’s Neuromorphic Research Community, Ericsson Research is searching for new AI technologies that provide energy efficiency and low latency inference in telecom systems.

Using neuromorphic computing technology from Intel Labs, Ericsson Research is developing custom telecommunications artificial intelligence (AI) models to optimize telecom architecture. Ericsson currently uses AI-based network performance diagnostics to analyze communications service providers’ radio access networks (RANs) to resolve network issues efficiently and provide specific parameter change recommendations. At Mobile World Congress (MWC) Barcelona 2024, Ericsson Research demoed a radio receiver algorithm prototype targeted for Intel’s Loihi 2 neuromorphic research AI accelerator, demonstrating a significant reduction in computational cost to improve signals across the RAN.

In 2021, Ericsson Research joined the Intel Neuromorphic Research Community (INRC), a collaborative research effort that brings together academic, government, and industry partners to work with Intel to drive advances in real-world commercial usages of neuromorphic computing.

Ericsson Research is actively searching for new AI technologies that provide low latency inference and energy efficiency in telecom systems. Telecom networks face many challenges, including tight latency constraints driven by the need for data to travel quickly over the network, and energy constraints due to mobile system battery limitations. AI will play a central role in future networks by optimizing, controlling, and even replacing key components across the telecom architecture. AI could provide more efficient resource utilization and network management as well as higher capacity.

Neuromorphic computing draws insights from neuroscience to create chips that function more like the biological brain instead of conventional computers. It can deliver orders of magnitude improvements in energy efficiency, speed of computation, and adaptability across a range of applications, including real-time optimization, planning, and decision-making from edge to data center systems. Intel's Loihi 2 comes with Lava, an open-source software framework for developing neuro-inspired applications.

Radio Receiver Algorithm Prototype

Ericsson Research’s working prototype of a radio receiver algorithm was implemented in Lava for Loihi 2. In the demonstration, the neural network performs a common complex task of recognizing the effects of reflections and noise on radio signals as they propagate from the sender (base station) to the receiver (mobile). Then the neural network must reverse these environmental effects so that the information can be correctly decoded.

During training, researchers rewarded the model based on accuracy and the amount of communication between neurons. As a result, the neural communication was reduced, or sparsified, by 75 to 99% depending on the difficulty of the radio environment and the amount of work needed by the AI to correct the environmental effects on the signal.
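For anyone who wants a concrete picture of what “rewarding the model based on accuracy and the amount of communication between neurons” can look like, here is a minimal sketch of my own (not Ericsson’s or Intel’s actual code), assuming a PyTorch-style spiking model whose forward pass also returns its per-layer spike tensors; the function name and weighting factor are just placeholders:

```python
# Hedged sketch only - not the actual Ericsson/Intel training code.
# Task loss plus a penalty on spike traffic, so sparser neural communication is rewarded.
import torch.nn.functional as F

def sparsity_regularized_loss(logits, targets, spikes, lam=1e-4):
    """logits/targets: usual classification tensors; spikes: list of per-layer spike tensors."""
    task_loss = F.cross_entropy(logits, targets)
    spike_penalty = sum(s.abs().mean() for s in spikes)  # fewer spikes -> less communication
    return task_loss + lam * spike_penalty
```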

Loihi 2 is built to leverage such sparse messaging and computation. With its asynchronous spike-based communication, neurons do not need to compute or communicate information when there is no change. Furthermore, Loihi 2 can compute with substantially less power due to its tight compute-memory integration. This reduces the energy and latency involved in moving data between the compute unit and the memory.

Like the human brain’s biological neural circuits that can intelligently process, respond to, and learn from real-world data at microwatt power levels and millisecond response times, neuromorphic computing can unlock orders of magnitude gains in efficiency and performance.

Neuromorphic computing AI solutions could address the computational power needed for future intelligent telecom networks. Complex telecom computation results must be produced in tight deadlines down to the millisecond range. Instead of using GPUs that draw substantial amounts of power, neuromorphic computing can provide faster processing and improved energy efficiency.

Emerging Technologies and Telecommunications

Learn more about emerging technologies and telecommunications in this episode of InTechnology. Host Camille Morhardt interviews Mischa Dohler, VP of Emerging Technologies at Ericsson, about neuromorphic computing, quantum computing, and more.



While Ericsson’s deep engagement with Intel, even in the area of neuromorphic research, doesn’t preclude them from also knocking on BrainChip’s door, this new reveal reaffirms my hesitation about adding Ericsson to our list of companies above the waterline, given the lack of any official acknowledgment by either party to date.

So to sum it up: Ericsson collaborating with Intel in neuromorphic research is a verifiable fact, while an NDA with BrainChip is merely speculation so far.
 

Just posted yesterday apparently.






Vibration Classification with BrainChip's Akida​

With predictive maintenance, you can monitor your equipment while it’s running: This means that there is less downtime for inspections and repair jobs because the monitoring process takes place during operation instead of waiting until something breaks or wears out.
The Edge Impulse platform and solutions engineering team enables companies to make more accurate predictions about when devices might fail, which lets them optimize their fleet maintenance and use service crews most effectively. This saves the companies money by letting them lower overall asset downtime and allows customers to be more satisfied with their product and services.
In this article, we will explain some of the beneficial applications of predictive maintenance, and then show how to build a predictive maintenance solution that will detect abnormal vibrations using Edge Impulse’s platform, the BrainChip Akida hardware, and a computer cooling fan.
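To make that a bit more tangible, here is a minimal sketch of my own (not Edge Impulse’s actual pipeline) of the kind of frequency-domain features a vibration classifier like this typically works from; the 100 Hz sampling rate and window shape are assumptions for illustration only:

```python
# Hedged sketch: turning one window of 3-axis accelerometer data from a cooling fan into
# simple spectral features that a small classifier (e.g. one deployed on Akida) could use.
import numpy as np

def extract_features(window, fs=100.0):
    """window: array of shape (n_samples, 3) holding x/y/z accelerometer readings at fs Hz."""
    feats = []
    n = window.shape[0]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    for axis in range(window.shape[1]):
        sig = window[:, axis] - window[:, axis].mean()   # remove DC offset
        spectrum = np.abs(np.fft.rfft(sig)) / n          # magnitude spectrum
        feats.append(np.sqrt(np.mean(sig ** 2)))         # RMS vibration energy
        feats.append(freqs[np.argmax(spectrum)])         # dominant vibration frequency in Hz
        feats.extend(np.sort(spectrum)[-3:])             # three strongest spectral magnitudes
    return np.array(feats)

# e.g. a 2-second window of dummy data sampled at 100 Hz:
features = extract_features(np.random.randn(200, 3))
```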
Business Case Examples for Edge Predictive Maintenance
Predictive maintenance provides a wide variety of business benefits, such as:
Predicting asset depreciation and maintenance timelines
The security and building-access industry has been experiencing increasing pressure due to the global pandemic, and it’s imperative for customers to understand when a security door or component might fail. By anticipating maintenance, companies can reduce unplanned out-of-service intervals, allowing for minimal disruption in office buildings where there is huge traffic of people.
Lowering cost and gaining more ROI
Global shipping companies are looking for ways to lower their costs and increase efficiency. Focusing on predictive maintenance can allow them to proactively address any issues before they become costly or cause unsafe conditions, in order to avoid downtime on ships.
 

Tothemoon24

Top 20

IMG_8803.jpeg

tinyML Summit – April 22-24, 2024 (San Francisco)​


BrainChip is proud to be a Silver Sponsor of the 2024 tinyML© Summit. This annual event, which is hosted by the tinyML Foundation, attracts global innovators and companies with a common goal of advancing and promoting tinyML.
BrainChip will be presenting 2nd-generation Akida’s enhanced features including an expanded capacity to learn efficiently at extremely small form factors. Join us at our booth, whose location will be announced in the coming months. We look forward to talking shop with you there!
 

cosors

👀
Interestingly, BrainChip has a Software Development Centre in Hyderabad.

I wonder if any employees from our Company will be attending?
were
me too
"To strengthen collaborations with its partners and developer ecosystem, Qualcomm is hosting an in-person Developer Conference in Hyderabad on April 17th, 2024, which will bring together 150+ developers, engineers, and industry leaders to showcase their solutions and discuss the latest advancements in tools and technologies that will help accelerate this ecosystem."

I rather wonder why I can't find anything in Hyderabad with... I often drop by there, which is why I noticed it.
If Anil or one of his aides weren't sitting there, I would be disappointed.

...or do we have a confirmation?
1:150+
 

rgupta

Regular

"Intel builds world’s largest neuromorphic system​

News Analysis
Apr 17, 2024

Code-named Hala Point, the brain-inspired system packs 1,152 Loihi 2 processors in a data center chassis the size of a microwave oven.
View attachment 61058
Quantum computing is billed as a transformative computer architecture that’s capable of tackling difficult optimization problems and making AI faster and more efficient. But quantum computers can’t be scaled yet to the point where they can outperform even classical computers, and a full ecosystem of platforms, programming languages and applications is even farther away.
Meanwhile, another new technology is poised to make a much more immediate difference: neuromorphic computing.
Neuromorphic computing looks to redesign how computer chips are built by looking at human brains for inspiration. For example, our neurons handle both processing and memory storage, whereas in traditional computers the two are kept separate. Sending data back and forth takes time and energy.

In addition, neurons only fire when needed, reducing energy consumption even further. As a result, neuromorphic computing offers massive parallel computing capabilities far beyond traditional GPU architecture, says Omdia analyst Lian Jye Su. “In addition, it is better at energy consumption and efficiency.”

According to Gartner, neuromorphic computing is one of the technologies with the most potential to disrupt a broad cross-section of markets, as “a critical enabler,” however, it is still three to six years away from making an impact.
Intel has achieved a key milestone, however. Today, Intel announced the deployment of the world’s largest neuromorphic computer yet, deployed at Sandia National Laboratories.

The computer, which uses Intel’s Loihi 2 processor, is code named Hala Point, and it supports up to 20 quadrillion operations per second with an efficiency exceeding 15 trillion 8-bit operations per second per watt – all in a package about the size of a microwave oven. It supports up to 1.15 billion neurons and 128 billion synapses, or about the level of an owl’s brain.
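As a quick back-of-the-envelope aside of mine (not a figure Intel publishes), the quoted peak numbers imply a chassis power draw in the low-kilowatt range:

```python
# Rough implication of the article's peak figures only; actual power will differ.
peak_ops_per_s = 20e15        # "20 quadrillion operations per second"
ops_per_s_per_watt = 15e12    # "15 trillion 8-bit operations per second per watt"
implied_power_w = peak_ops_per_s / ops_per_s_per_watt
print(f"~{implied_power_w:.0f} W at peak")   # ~1333 W
```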
According to Intel, this is the first large-scale neuromorphic system that surpasses the efficiency and performance of CPU- and GPU-based architectures for real-time AI workloads. Loihi-based systems can perform AI inference and solve optimization problems 50 times faster than CPU and GPU architectures, the company said, while using 100 times less energy.

And the technology is available now, for free, to enterprises interested in researching its potential, says Mike Davies, director of Intel’s Neuromorphic Computing Lab.

To get started, companies should first join the Intel Neuromorphic Research Community, whose members include GE, Hitachi, Airbus, Accenture, Logitech, as well as many research organizations and universities – more than 200 participants as of this writing. There is a waiting list, Davies says. But participation doesn’t cost anything, he adds.
“The only requirement is that they agree to share their results and findings so that we can continue improving the hardware,” Davies says. Membership includes free access to cloud-based neuromorphic computing resources, and, if the project is interesting enough, free on-site hardware, as well.
“Right now, there’s only one Hala Point, and Sandia has it,” he says. “But we are building more. And there are other systems that are not as big. We give accounts on Intel’s virtual cloud, and they log in and access the systems remotely.”

Intel was able to build a practical, usable, neuromorphic computer by sticking with traditional manufacturing technology and digital circuits, he says. Some alternate approaches, such as analog circuits, are more difficult to build.

View attachment 61059

But the Loihi 2 processor does use many core neuromorphic computing principles, including combining memory and processing. “We do really embrace all the architectural features that we find in the brain,” Davies says.
The system can even continue to learn in real time, he says. “That’s something that we see brains doing all the time.”
Traditional AI systems train on a particular data set and then don’t change once they’ve been trained. In Loihi 2, however, the communications between the neurons are configurable, meaning that they can change over time.

The way that this works is that an AI model is trained – by traditional means – then loaded into the neuromorphic computer. Each chip contains just a part of the full model. Then, when the model is used to analyze, say, streaming video, the chip already has the model weights in memory so it processes things quickly – and only if it is needed. “If one pixel changes, or one region of the image changes from frame to frame, we don’t recompute the entire image,” says Davies.
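To illustrate the “only recompute what changed” idea Davies describes, here is a tiny sketch of my own (not Intel’s code) showing how a frame delta can be turned into a sparse set of pixel events:

```python
# Hedged illustration of event-driven processing: only pixels that changed beyond a
# threshold since the last frame produce "events" for downstream layers to process.
import numpy as np

def pixel_events(prev_frame, frame, threshold=8):
    """Both frames: uint8 grayscale arrays of the same shape."""
    delta = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(delta > threshold)
    return list(zip(ys.tolist(), xs.tolist()))   # usually a tiny fraction of all pixels
```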
The original training does happen elsewhere, he admits. And while the neuromorphic computer can update specific weights over time, it’s not retraining the entire network from scratch.
This approach is particularly useful for edge computing, he says, and for processing streaming video, audio, or wireless signals. But it could also find a home in data centers and high-performance computing applications, he says.
“The best class of workloads that we found that work very well are solving optimization problems,” Davies says. “Things like finding the shortest path through a map or graph. Scheduling, logistics – these tend to run very well on the architecture.”

The fact that these use cases overlap with those of quantum computing was a surprise, he says. “But we have a billion-neuron system shipped today and running, instead of a couple of qubits.”
Intel isn’t the only player in this space. According to Omdia’s Su, a handful of vendors, including IBM, have developed neuromorphic chips for cloud AI compute, while companies like BrainChip and Prophesee are starting to offer neuromorphic chips for devices and edge applications.

However, there are several major hurdles to adoption, he adds. To start with, neuromorphic computing is based on event-based spikes, which requires a complete change in programming languages.
There are also very few event-driven AI models, Su adds. “At the moment, most of them are based on conventional neural networks that are designed for traditional computing architecture.”

Finally, these new programming languages and computing architectures aren’t compatible with existing technologies, he says. “The technology is too immature at the moment,” he says. “It is not backwardly compatible with legacy architecture. At the same time, the developer and software ecosystem are still very small with lack of tools and model choices.”
*"


*A question for the techies among us: is it really the way their analyst describes it here?
View attachment 61060
That means approx. 9,216 Loihi chips, since Loihi 2 is a stack of 8 Loihi chips. Think about it: if we combined 9,216 Akida 1000s, that would be a human brain's worth of capacity at 11.2 billion neurons and 100 trillion synapses. But surprisingly, BrainChip is not trying the same. Maybe the company is aware that bigger models may bring extra revenue, but also that the technology could be misused.
Dyor
 

IloveLamp

Top 20
I guess this is why Rob's been liking GM posts....

1000015202.jpg
 
Good article here outlining the issues arising from the urgent need for more and more data centres:


"Research group Dgtl Infra has estimated that global data centre capital expenditure will surpass $225bn in 2024. Nvidia’s chief executive Jensen Huang said this year that $1tn worth of data centres would need to be built in the next several years to support generative AI, which is power intensive and involves the processing of enormous volumes of information."

Such growth would require huge amounts of electricity, even if systems become more efficient. According to the International Energy Agency, the electricity consumed by data centres globally will more than double by 2026 to more than 1,000 terawatt hours, an amount roughly equivalent to what Japan consumes annually.

“Updated regulations and technological improvements, including on efficiency, will be crucial to moderate the surge in energy consumption from data centres,” the IEA said this year.

US data centre electricity consumption is expected to grow from 4 per cent to 6 per cent of total demand by 2026, while the AI industry is forecast to expand “exponentially” and consume at least 10 times its 2023 demand by 2026, said the IEA."

"Finding appropriate sites can be challenging, with power just one factor to consider among others such as the availability of large volumes of water to cool data centres.

“For every 50 sites I look at, maybe two get to the point where they may be developed,” said Appleby Strategy’s Golding. “Folks are sifting through large numbers of properties.”
 

Frangipani

Regular
At the least, because AI has to be everywhere at the moment, 'no' company can avoid the topic; it has to be worth a try, there is subsidy money, and there have to be conditions for investments in future tech, as the old one is out of the question. Just yesterday I saw a global analysis of AI and where it is taking place, in addition to research. I couldn't see Germany at a quick glance. But maybe I'll have another look to see if I can find it again. What interests me are products that can be bought from companies like those in Germany. Just because companies are running a research lab here and dropping slice after slice of white paper on projects, because they have obviously or perhaps lost touch, doesn't convince me to be confident that Germany plays a really important role in this topic. So far, I'm taking MB's hub seriously.
I'm one of the sceptics, but I'm happy to be proven wrong. But I don't want to get into debate 'loops', as I'm just an observer from the sidelines, don't have serious insight, and am not a politician either.
:)

Bear in mind, though, that these AI Labs were established long before ChatGPT became a household name, so they were not born out of some sort of “AI hype”.

Bosch: established in 2017

ZF: established in March 2019

Conti: started unofficially at the end of 2021, with the official inauguration of its new home on the AI Campus Berlin on February 1, 2023


These three Tier 1 suppliers employ a number of researchers with ties to neuromorphic computing/engineering. (I am, however, not aware of any project to date involving Akida).

Bosch alone employs a whopping 270 (!) AI researchers all over the world - four of them focusing on neuromorphic computing/engineering:

0EF88CB8-F76A-48F8-857A-5B1807C62C48.jpeg


ZF is one of ~40 partners in the EU- and BMBF-funded StorAIge project (July 1, 2021 - June 30, 2024); their specific interest in this project lies in wind turbine monitoring / predictive maintenance.


“StorAIge aims to develop and industrialize FDSOI 28nm and next generation embedded Phase Change Memory (ePCM) world-class semiconductor technologies enabling competitive Artificial Intelligence for Edge applications.”

2304C016-97B0-4B93-8AE0-F32473FE89D5.jpeg



And when looking for a link between Conti and neuromorphic, this AI robotics engineer came up in my search, and I am pretty sure he won’t be the only one:

7C4DA155-73F4-4DBB-8969-BA289CF44F57.jpeg


Out of those three tech giants, my bet would be on Bosch to come out of the shadows first.
Not in the automotive sector initially, though (as automotive-grade chips are a prerequisite for that and would need to satisfy e.g. the functional safety standard ISO 26262), but maybe for some kind of wearables, olfactory sensors, household appliances etc.

Bosch, Conti & ZF can’t just sit back and rest on their laurels; they need to be innovative to survive, embracing technologies for a sustainable future. Otherwise we’ll see further layoffs, or even plant closures and the relocation of industry to low-wage countries.

Or to aptly say it with ZF’s motto: see.think.act.
 


rgupta

Regular

"Intel builds world’s largest neuromorphic system​

News Analysis
Apr 17, 2024

Code-named Hala Point, the brain-inspired system packs 1,152 Loihi 2 processors in a data center chassis the size of a microwave oven.
View attachment 61058
Quantum computing is billed as a transformative computer architecture that’s capable of tackling difficult optimization problems and making AI faster and more efficient. But quantum computers can’t be scaled yet to the point where they can outperform even classical computers, and a full ecosystem of platforms, programming languages and applications is even farther away.
Meanwhile, another new technology is poised to make a much more immediate difference: neuromorphic computing.
Neuromorphic computing looks to redesign how computer chips are built by looking at human brains for inspiration. For example, our neurons handle both processing and memory storage, whereas in traditional computers the two are kept separate. Sending data back and forth takes time and energy.

In addition, neurons only fire when needed, reducing energy consumption even further. As a result, neuromorphic computing offers massive parallel computing capabilities far beyond traditional GPU architecture, says Omdia analyst Lian Jye Su. “In addition, it is better at energy consumption and efficiency.”

According to Gartner, neuromorphic computing is one of the technologies with the most potential to disrupt a broad cross-section of markets, as “a critical enabler,” however, it is still three to six years away from making an impact.
Intel has achieved a key milestone, however. Today, Intel announced the deployment of the world’s largest neuromorphic computer yet, deployed at Sandia National Laboratories.

The computer, which uses Intel’s Loihi 2 processor, is code named Hala Point, and it supports up to 20 quadrillion operations per second with an efficiency exceeding 15 trillion 8-bit operations per second per watt – all in a package about the size of a microwave oven. It supports up to 1.15 billion neurons and 128 billion synapses, or about the level of an owl’s brain.
According to Intel, this is the first large-scale neuromorphic system that surpasses the efficiency and performance of CPU- and GPU-based architectures for real-time AI workloads. Loihi-based systems can perform AI inference and solve optimization problems 50 times faster than CPU and GPU architectures, the company said, while using 100 times less energy.

And the technology is available now, for free, to enterprises interested in researching its potential, says Mike Davies, director of Intel’s Neuromorphic Computing Lab.

To get started, companies should first join the Intel Neuromorphic Research Community, whose members include GE, Hitachi, Airbus, Accenture, Logitech, as well as many research organizations and universities – more than 200 participants as of this writing. There is a waiting list, Davies says. But participation doesn’t cost anything, he adds.
“The only requirement is that they agree to share their results and findings so that we can continue improving the hardware,” Davies says. Membership includes free access to cloud-based neuromorphic computing resources, and, if the project is interesting enough, free on-site hardware, as well.
“Right now, there’s only one Hala Point, and Sandia has it,” he says. “But we are building more. And there are other systems that are not as big. We give accounts on Intel’s virtual cloud, and they log in and access the systems remotely.”

Intel was able to build a practical, usable, neuromorphic computer by sticking with traditional manufacturing technology and digital circuits, he says. Some alternate approaches, such as analog circuits, are more difficult to build.

View attachment 61059

But the Loihi 2 processor does use many core neuromorphic computing principles, including combining memory and processing. “We do really embrace all the architectural features that we find in the brain,” Davies says.
The system can even continue to learn in real time, he says. “That’s something that we see brains doing all the time.”
Traditional AI systems train on a particular data set and then don’t change once they’ve been trained. In Loihi 2, however, the communications between the neurons are configurable, meaning that they can change over time.

The way that this works is that an AI model is trained – by traditional means – then loaded into the neuromorphic computer. Each chip contains just a part of the full model. Then, when the model is used to analyze, say, streaming video, the chip already has the model weights in memory so it processes things quickly – and only if it is needed. “If one pixel changes, or one region of the image changes from frame to frame, we don’t recompute the entire image,” says Davies.
The original training does happen elsewhere, he admits. And while the neuromorphic computer can update specific weights over time, it’s not retraining the entire network from scratch.
This approach is particularly useful for edge computing, he says, and for processing streaming video, audio, or wireless signals. But it could also find a home in data centers and high-performance computing applications, he says.
“The best class of workloads that we found that work very well are solving optimization problems,” Davies says. “Things like finding the shortest path through a map or graph. Scheduling, logistics – these tend to run very well on the architecture.”

The fact that these use cases overlap with those of quantum computing was a surprise, he says. “But we have a billion-neuron system shipped today and running, instead of a couple of qubits.”
Intel isn’t the only player in this space. According to Omdia’s Su, a handful of vendors, including IBM, have developed neuromorphic chips for cloud AI compute, while companies like BrainChip and Prophesee are starting to offer neuromorphic chips for devices and edge applications.

However, there are several major hurdles to adoption, he adds. To start with, neuromorphic computing is based on event-based spikes, which requires a complete change in programming languages.
There are also very few event-driven AI models, Su adds. “At the moment, most of them are based on conventional neural networks that are designed for traditional computing architecture.”

Finally, these new programming languages and computing architectures aren’t compatible with existing technologies, he says. “The technology is too immature at the moment,” he says. “It is not backwardly compatible with legacy architecture. At the same time, the developer and software ecosystem are still very small with lack of tools and model choices.”
*"


*Question for the techies among us. Is it like this how the analyst from them describes it here?
View attachment 61060
9,216 Akida 1000s would be 10 billion neurons and 100 trillion synapses, while 9,216 Loihi chips would be 10 billion neurons and only 1 trillion synapses. Is that a reason for Akida's better performance? Akida's neurons are better connected than those of Loihi and Loihi 2.
 


Damo4

Regular
9,216 Akida 1000s would be 10 billion neurons and 100 trillion synapses, while 9,216 Loihi chips would be 10 billion neurons and only 1 trillion synapses. Is that a reason for Akida's better performance? Akida's neurons are better connected than those of Loihi and Loihi 2.

Hi rgupta,

It all comes down to Synaptic Density.
This is a good read to understand the relationship: What Is the Akida Event Domain Neural Processor?

I think however, there is more to it than just a Neuron/Synapse ratio.
See here; there are explanations that go over my head and could point towards the reason for Loihi 2 having fewer max synapses per neuron than Loihi 1:
Taking Neuromorphic Computing to the Next Level with Loihi 2 Technology Brief

1713489948225.png


1713489932586.png
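For what it's worth, a quick check of the synapse-per-neuron ratios using only the figures quoted in this thread (the Hala Point numbers come from the article above; the Akida and Loihi totals are rgupta's and should be verified against the vendors' datasheets):

```python
# Back-of-the-envelope ratios from numbers quoted in this thread - not verified figures.
hala_point  = 128e9 / 1.15e9   # ~111 synapses per neuron (128B synapses / 1.15B neurons)
akida_claim = 100e12 / 10e9    # ~10,000 synapses per neuron (rgupta's 9,216-chip totals)
loihi_claim = 1e12 / 10e9      # ~100 synapses per neuron (rgupta's 9,216-chip totals)
print(round(hala_point), round(akida_claim), round(loihi_claim))
```

If those totals hold up, it is the synaptic density difference rather than the raw neuron count that the links above are getting at.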
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Could someone please, please, PLEEEEEEEAAAASE ask Steven Thorne or Rob Telson to get on the blower to Sam Altman ASAP?!

Didn't Sam Altman attend Intel's IFS Connect Forum in Feb earlier this year? If he did, then wouldn't he have seen our demo? And if he saw our demo, then he'd have to know that we're like the Betty Ford clinic for those suffering from crippling addictions to NVIDIA's expensive and energy-guzzling GPUs.

I'd call Sam myself, but I haven't got his phone number at present.

Thanks in advance to Steve or Rob! Do us proud lads!

Screenshot 2024-04-19 at 12.47.29 pm.png



Microsoft and OpenAI Will Spend $100 Billion to Wean Themselves Off Nvidia GPUs​

The companies are working on an audacious data center for AI that's expected to be operational in 2028.
By Josh Norem April 18, 2024

Credit: Microsoft
Microsoft was the first big company to throw a few billion at ChatGPT-maker OpenAI. Now, a new report states the two companies are working together on a very ambitious AI project that will cost at least $100 billion. Both companies are currently huge Nvidia customers; Microsoft uses Nvidia hardware for its Azure cloud infrastructure, while OpenAI uses Nvidia GPUs for ChatGPT. However, the new data center will host an AI supercomputer codenamed "Stargate," which might not include any Nvidia hardware at all.
The news of the companies' plans to ditch Nvidia hardware comes from a variety of sources, as noted by Windows Central. The report details a five-phase plan developed by Microsoft and OpenAI to advance the two companies' AI ambitions through the end of the decade, with the fifth phase being the so-called Stargate AI supercomputer. This computer is expected to be operational by 2028 and will reportedly be outfitted with future versions of Microsoft's custom-made Arm Cobalt processors and Maia XPUs, all connected by Ethernet.

Microsoft is reportedly planning on using its custom-built Cobalt and Maia silicon to power its future AI ambitions. Credit: Microsoft
This future data center, which will house Stargate, will allow both companies to pursue their AI ambitions far into the future; reports say it will cost around $115 billion. That level of investment shows both companies have no plans to move their respective feet off the AI gas pedal any time soon and that they expect this market to continue to expand far into the future. TechRadar also notes that the amount required to get this supercomputer running is more than triple what Microsoft spent on CapEx last year, so the company is tripling down on AI, it seems.
What's also notable is at least one source says the data center itself will be the computer, as opposed to just housing it. Multiple data centers may link together, like Voltron, to form the supercomputer. This futuristic machine will reportedly push the boundaries of AI capabilities. Given how fast things are advancing in this field, it's impossible to imagine what that will even mean four years from now.


This situation, where massive companies abandon Nvidia for custom-made AI accelerators, will likely become a significant issue for Nvidia soon. Long wait times for Nvidia GPUs and exorbitant pricing have resulted in many companies reportedly beginning to look elsewhere to satisfy their AI hardware needs, which is why Nvidia is already looking to capture this market. OpenAI CEO Sam Altman is reportedly looking to build a global infrastructure of fabs and power sources to make custom silicon, so its plans with Microsoft might be aligned along this front.



 


Diogenese

Top 20
All that glitters ...



Exclusive: Apple acquires Xnor.ai, edge AI spin-out from Paul Allen’s AI2, for price in $200M range

BY ALAN BOYLE, TAYLOR SOPER & TODD BISHOP on January 15, 2020

Apple buys Xnor.ai, an edge-centric AI2 spin-out, for price in $200M range (geekwire.com)


Apple has acquired Xnor.ai, a Seattle startup specializing in low-power, edge-based artificial intelligence tools, sources with knowledge of the deal told GeekWire.

The acquisition echoes Apple’s high-profile purchase of Seattle AI startup Turi in 2016. Speaking on condition of anonymity, sources said Apple paid an amount similar to what was paid for Turi, in the range of $200 million.

Xnor.ai didn’t immediately respond to our inquiries, while Apple emailed us its standard response on questions about acquisitions: “Apple buys smaller technology companies from time to time and we generally do not discuss our purpose or plans.” (The company sent the exact same response when we broke the Turi story.)



The arrangement suggests that Xnor’s AI-enabled image recognition tools could well become standard features in future iPhones and webcams.

Xnor.ai’s acquisition marks a big win for the Allen Institute for Artificial Intelligence, or AI2, created by the late Microsoft co-founder Paul Allen to boost AI research. It was the second spin-out from AI2’s startup incubator, following Kitt.ai, which was acquired by the Chinese search engine powerhouse Baidu in 2017 for an undisclosed sum.

The deal is a big win as well for the startup’s early investors, including Seattle’s Madrona Venture Group; and for the University of Washington, which serves as a major source of Xnor.ai’s talent pool.

The three-year-old startup’s secret sauce has to do with AI on the edge — machine learning and image recognition tools that can be executed on low-power devices rather than relying on the cloud. “We’ve been able to scale AI out of the cloud to every device out there,” co-founder Ali Farhadi, who is the venture’s CXO (chief Xnor officer) as well as a UW professor, told GeekWire in 2018.


This Apple patent is for compressing AI models. One of the inventors is ex-Xnor.

US11651192B2 Compressed convolutional neural network models 20190212 Rastegari nee Xnor

Systems and processes for training and compressing a convolutional neural network model include the use of quantization and layer fusion. Quantized training data is passed through a convolutional layer of a neural network model to generate convolutional results during a first iteration of training the neural network model. The convolutional results are passed through a batch normalization layer of the neural network model to update normalization parameters of the batch normalization layer. The convolutional layer is fused with the batch normalization layer to generate a first fused layer and the fused parameters of the fused layer are quantized. The quantized training data is passed through the fused layer using the quantized fused parameters to generate output data, which may be quantized for a subsequent layer in the training iteration.

[0018] A convolutional neural network (CNN) model may be designed as a deep learning tool capable of complex tasks such as image classification and natural language processing. CNN models typically receive input data in a floating point number format and perform floating point operations on the data as the data progresses through different layers of the CNN model. Floating point operations are relatively inefficient with respect to power consumed, memory usage and processor usage. These inefficiencies limit the computing platforms on which CNN models can be deployed. For example, field-programmable gate arrays (FPGA) may not include dedicated floating point modules for performing floating point operations and may have limited memory bandwidth that would be inefficient working with 32-bit floating point numbers.

[0019] As described in further detail below, the subject technology includes systems and processes for building a compressed CNN model suitable for deployment on different types of computing platforms having different processing, power and memory capabilities.
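A minimal sketch of the conv + batch-norm “layer fusion” step the patent describes, assuming standard per-channel BN statistics (my own illustration, not Apple’s implementation; the quantization of the fused parameters would then follow as a separate step):

```python
# Fold batch-norm scale/shift into the convolution's weights and bias so that inference
# runs a single fused layer; quantization would then be applied to w_fused and b_fused.
import numpy as np

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """w: (out_ch, in_ch, kh, kw) conv weights; b, gamma, beta, mean, var: (out_ch,) arrays."""
    scale = gamma / np.sqrt(var + eps)
    w_fused = w * scale[:, None, None, None]     # fold BN scale into the conv weights
    b_fused = (b - mean) * scale + beta          # fold BN shift into the conv bias
    return w_fused, b_fused
```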


Of course, that does not absolutely preclude the possibility that Apple are running the NN model on a more efficient SoC.


Apple have been developing NNs for several years, albeit with MACs.

US11487846B2 Performing multiply and accumulate operations in neural network processor 20180504

1713495070166.png



1713495142328.png


a neural processor circuit including a plurality of neural engine circuits, a data buffer, and a kernel fetcher circuit. At least one of the neural engine circuits is configured to receive matrix elements of a matrix as at least the portion of the input data from the data buffer over multiple processing cycles. The at least one neural engine circuit further receives vector elements of a vector from the kernel fetcher circuit, wherein each of the vector elements is extracted as a corresponding kernel to the at least one neural engine circuit in each of the processing cycles. The at least one neural engine circuit performs multiplication between the matrix and the vector as a convolution operation to produce at least one output channel of the output data.
 
(quoting Diogenese’s post above on Apple’s MAC-based neural network processor patent)
In layman’s terms, what does that mean as far as BRN being a part of it? Are we out, or still in with a chance?
 