BRN Discussion Ongoing

HopalongPetrovski

I'm Spartacus!
Impressive first look and demonstration of the new Atlas.



All these new humanoid robots coming out will need efficient brains.

AKIDA technology can deliver.

Dad blang bucket o' bolts........ 🤣
Oh, the pain, the pain!!! 🤣


32d6e27d-fc8c-4a63-ab12-78ecc704e1a8-Lost_in_Space_file_art.jpg
 
  • Haha
Reactions: 8 users

rgupta

Regular
So is there a chance Apple is utilising Akida?
Let us wait and watch.
Either Apple uses our technology, or management's claim of 3 years' superiority over others is in question.

But it is a big affirmative stamp on BrainChip's thinking. We will see a big market for on-device processing and will get a good share of it.
 
  • Like
Reactions: 4 users
Let us wait and watch.
Either Apple uses our technology, or management's claim of 3 years' superiority over others is in question.

But it is a big affirmative stamp on BrainChip's thinking. We will see a big market for on-device processing and will get a good share of it.
Indeed, it puts a big question mark over the 3-year lead, doesn't it?
I would most definitely like to hear management's thoughts on this. If we are NOT involved… then this question needs to be answered 100%.
 
  • Sad
  • Fire
  • Like
Reactions: 3 users

rgupta

Regular
Indeed, it puts a big question mark over the 3-year lead, doesn't it?
I would most definitely like to hear management's thoughts on this. If we are NOT involved… then this question needs to be answered 100%.
Let us wait and watch.
 
  • Like
Reactions: 2 users

7für7

Top 20
Let us wait and watch.
Either Apple uses our technology, or management's claim of 3 years' superiority over others is in question.

But it is a big affirmative stamp on BrainChip's thinking. We will see a big market for on-device processing and will get a good share of it.
Indeed, it puts a big question mark over the 3-year lead, doesn't it?
I would most definitely like to hear management's thoughts on this. If we are NOT involved… then this question needs to be answered 100%.
😂😂😂 I love this kind of sarcasm! It brings a kind of freshness into the debate... OK, OK, let's try this: if Akida is not in the whole Tesla program, incl. Falcon etc., I will fire Rob!
 
  • Haha
Reactions: 3 users
Not least because AI has to be everywhere at the moment, 'no' company can avoid the topic: it has to be worth a try, there is subsidy money, and there have to be conditions for investment in future tech, because the old tech is out of the question. Just yesterday I saw a global analysis of AI and where it is taking place, beyond pure research. I couldn't see Germany at a quick glance, but maybe I'll have another look to see if I can find it again. What interests me are products that can actually be bought from companies, like those in Germany. Just because companies are running research labs here and dropping slice after slice of white papers on projects, because they have obviously or perhaps lost touch, doesn't convince me that Germany plays a really important role in this topic. So far, the one thing I'm taking seriously is MB's hub.
I'm one of the sceptics, but I'm happy to be proven wrong. I don't want to get drawn into debate 'loops', though, as I'm just an observer from the sidelines without serious insight, and I'm not a politician either.
:)
"At least because Ai has to be everywhere at the moment, 'no' company can avoid the topic"

👍

It would literally be like shooting fish in a barrel for the BrainChip sales team in the current environment.


Even ol' Snake Eyes would have met his targets.


20240418_163633.jpg



There is absolutely no doubt in my mind that they are extremely busy.
Don't let the absence of visible progress on the sales front fool you, or anyone else here.
 
  • Like
  • Haha
  • Fire
Reactions: 11 users

7für7

Top 20
I can only repeat myself. There are indeed some small investors who, through their daily meticulous tracking of Akida and their many posts, believe they have VIP status and claims against BrainChip; for example, a right to insight into confidential contracts, or to demand immediate clarification of certain issues.

I would like to remind everyone once again that we are only voluntary investors who want a piece of the money. Private interest in the technology or the company, or the mere fact that one is an investor, does not give anyone the right to be rude to the founders or to question their competence. Anyone who now says, "As an investor, I have the right to...": yes, and in that case there are scheduled votes in which you can participate. With any other demands and interference, one only makes oneself ridiculous. So keep up your research and posting, and let management work!
 
  • Like
  • Love
  • Fire
Reactions: 13 users

CHIPS

Regular
Our CTO thinks this is great


View attachment 61029 View attachment 61030

Anthony Lewis is a great fan of robotics, so it is not surprising that he celebrates this. This does not necessarily have anything to do with BrainChip.
 
  • Like
Reactions: 14 users

rgupta

Regular
Let us wait and watch.
Either Apple uses our technology, or management's claim of 3 years' superiority over others is in question.

But it is a big affirmative stamp on BrainChip's thinking. We will see a big market for on-device processing and will get a good share of it.

😂😂😂 I love this kind of sarcasm! It brings a kind of freshness into the debate... OK, OK, let's try this: if Akida is not in the whole Tesla program, incl. Falcon etc., I will fire Rob!
To me, wherever this ends up, it will be a big positive for BrainChip, and my reasons are:
1. It will be a big affirmation of BrainChip's ideology, i.e. cloud-free processing at the edge.
2. Apple is one of ARM's biggest customers, and I assume ARM tech features heavily in Apple's technology. We are fully compatible with ARM, which means implementation of Akida cores could be quite straightforward.
3. No single company can capture 100% market share of a product, so even if Apple does not use us, their competitors will need us, and given our market cap that will be a big plus for shareholders.
4. What we learned from Mercedes is that the technology is revolutionary, but applying it commercially has its own limitations, and no big company will commit until they find it foolproof. Apple has a history of keeping quiet until they blast the market.
So I still feel that even if we are not there yet, we will be in the near future, very soon.

The biggest plus, to me, is that a company as big as Apple at least feels the need for technology like this.
So yes, in the end let us wait and watch, but to me, in all respects it will be a big positive development for BrainChip as a tech company.
 
  • Like
  • Love
  • Fire
Reactions: 14 users

cassip

Regular
Hi all,

could be interesting:


"SAN FRANCISCO, April 17 (Reuters) - SiTime (SITM.O)
on Wednesday introduced a chip that it says is designed to help data centers built for artificial intelligence applications run more efficiently.
SiTime makes what are known as timing chips, whose job is set a steady beat for all the parts of a computer and keep them running together in sync, like a conductor in an orchestra directing multiple groups of instruments. The company says its new line of chips, called Chorus, can do so with 10 times more precision than older styles of timing chips.

SiTime CEO Rajesh Vashist said the company aims to help customers save electricity with that precision. SiTime's chips themselves require less than a watt of power, but powerful AI chips such as Nvidia's (NVDA.O) require more than 1,000 watts of power.
With a more precise clock to keep all the elements of a computer in sync, parts of the machine can be turned off for a few milliseconds at a time when they are not in use. Over the multiple years a power-hungry data center server might be in use, it can generate energy savings, though the amount will depend on how SiTime's chips are used.

"We deliver timing that they can rely on so that they can wake up their products and bring data more efficiently to them, rather than just running more often," Vashist said in an interview.
SiTime said the chips will be available in the second half of this year."

SiTime was discussed here. It is owned by MegaChips (?).
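For a rough sense of scale of what that millisecond-level power gating could be worth, here is a back-of-the-envelope sketch. Only the roughly 1,000 W accelerator figure comes from the article; the gated fraction is purely my own assumption:

```python
# Back-of-the-envelope: energy saved by briefly power-gating idle parts of a
# ~1,000 W AI accelerator (figure from the article). The 5% gated fraction is
# an illustrative assumption, not a SiTime or Nvidia number.
accelerator_power_w = 1_000
gated_fraction = 0.05
hours_per_year = 24 * 365

saved_kwh_per_year = accelerator_power_w * gated_fraction * hours_per_year / 1_000
print(f"~{saved_kwh_per_year:.0f} kWh saved per accelerator per year")  # ~438 kWh
```

Multiply that by thousands of accelerators and several years of service life and the data-centre-scale savings the article hints at start to look plausible, even if the per-chip number seems modest.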

Regards
Cassip
 
  • Like
  • Thinking
  • Fire
Reactions: 19 users

IloveLamp

Top 20
Anthony Lewis is a great fan of robotics, so it is not surprising that he celebrates this. This does not necessarily have anything to do with BrainChip.
I wouldn't be so sure. Personally I think there's a better than average chance we're involved with Hyundai.

Not the first Boston Dynamics post BRN staff have liked, either.

IMO, DYOR.
 
Last edited:
  • Like
  • Love
  • Thinking
Reactions: 18 users

KMuzza

Mad Scientist
Hi Draed,
Check this out with your article.

BUT look at the LAST FRAME at 3 min 12 ????, in case you miss the 00:01 FIRST FRAME.


In Japan as well - but a possible???. All Cameras. ???.
(yes - 1year old)
AKIDA BALLISTA UBQTS.
 
Last edited:
  • Thinking
  • Wow
  • Fire
Reactions: 5 users

KMuzza

Mad Scientist
Hi Draed,
Check this out with your article.

BUT look at the LAST FRAME at 3 min 12 ????, in case you miss the 00:01 FIRST FRAME.


In Japan as well - but a possible???. All Cameras. ???.
(yes - 1year old)
AKIDA BALLISTA UBQTS.


The chip offers 4.5 times more computing horsepower compared with its prior generation and is manufactured with Taiwan Semiconductor Manufacturing Co's (2330.TW) 7-nanometer process.

"It can support all five star ratings globally, but be extremely power efficient and cost efficient," Nehushtan said. "That's kind of the mission statement of this chip."
The sensors on EyeQ6L include an 8-megapixel camera that is capable of a 120-degree lateral field of vision that can detect environmental conditions and objects at a greater distance.
The company said its more advanced assisted-driving chip, the EyeQ6 High, is set to enter volume production "early next year."

Mobileye is set to report first-quarter results on April 25.

Reporting by Max A. Cherney and Abhirup Roy in San Francisco; Additional reporting by Yuvraj Malik in Bengaluru; Editing by Leslie Adler
 
  • Thinking
  • Fire
  • Like
Reactions: 5 users

Esq.111

Fascinatingly Intuitive.
Evening KMuzza,

Obviously a very cool chip, with sales to boot.

The only mob that we have, or had, a confirmed partnership with was....
MAGIK EYE, as opposed to this company called MOBILEYE.

MAGIK EYE was / is a Japanese company, and they were announced as a partner ...... some time ago.
From memory, over 2 years ago.



Would be nice though... who knows... I certainly do not.

Regards,
Esq.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 10 users

KMuzza

Mad Scientist
Evening KMuzza,

Obviously a very cool chip, with sales to boot.

The only mob that we have, or had, a confirmed partnership with was....
MAGIK EYE, as opposed to this company called MOBILEYE.

MAGIK EYE was / is a Japanese company, and they were announced as a partner ...... some time ago.
From memory, over 2 years ago.


Would be nice though... who knows... I certainly do not.

Regards,
Esq.
Hi Esq-

Architecture of Efficiency:
- Low power envelope.
- Heterogeneous computing: using the most suitable core for each task.
- ...including deep learning neural networks.


1713435206211.png
 
Last edited:
  • Thinking
  • Love
Reactions: 4 users

Esq.111

Fascinatingly Intuitive.
  • Like
Reactions: 6 users

GazDix

Regular
To me, wherever this ends up, it will be a big positive for BrainChip, and my reasons are:
1. It will be a big affirmation of BrainChip's ideology, i.e. cloud-free processing at the edge.
2. Apple is one of ARM's biggest customers, and I assume ARM tech features heavily in Apple's technology. We are fully compatible with ARM, which means implementation of Akida cores could be quite straightforward.
3. No single company can capture 100% market share of a product, so even if Apple does not use us, their competitors will need us, and given our market cap that will be a big plus for shareholders.
4. What we learned from Mercedes is that the technology is revolutionary, but applying it commercially has its own limitations, and no big company will commit until they find it foolproof. Apple has a history of keeping quiet until they blast the market.
So I still feel that even if we are not there yet, we will be in the near future, very soon.

The biggest plus, to me, is that a company as big as Apple at least feels the need for technology like this.
So yes, in the end let us wait and watch, but to me, in all respects it will be a big positive development for BrainChip as a tech company.
Rob Telson was Vice President of Worldwide Foundry Sales for ARM before he joined BrainChip.

This is all I need to know.
 
  • Like
  • Fire
Reactions: 26 users

cosors

👀
View attachment 61021


View attachment 61023








Link to the conference paper Mike Davies refers to:


It looks like Akida is not mentioned alongside other neuromorphic hardware platforms…

View attachment 61024
View attachment 61025

"Intel builds world’s largest neuromorphic system​

News Analysis
Apr 17, 2024

Code-named Hala Point, the brain-inspired system packs 1,152 Loihi 2 processors in a data center chassis the size of a microwave oven.
1713439067409.png

Quantum computing is billed as a transformative computer architecture that’s capable of tackling difficult optimization problems and making AI faster and more efficient. But quantum computers can’t be scaled yet to the point where they can outperform even classical computers, and a full ecosystem of platforms, programming languages and applications is even farther away.
Meanwhile, another new technology is poised to make a much more immediate difference: neuromorphic computing.
Neuromorphic computing looks to redesign how computer chips are built by looking at human brains for inspiration. For example, our neurons handle both processing and memory storage, whereas in traditional computers the two are kept separate. Sending data back and forth takes time and energy.

In addition, neurons only fire when needed, reducing energy consumption even further. As a result, neuromorphic computing offers massive parallel computing capabilities far beyond traditional GPU architecture, says Omdia analyst Lian Jye Su. “In addition, it is better at energy consumption and efficiency.”

According to Gartner, neuromorphic computing is one of the technologies with the most potential to disrupt a broad cross-section of markets, as “a critical enabler,” however, it is still three to six years away from making an impact.
Intel has achieved a key milestone, however. Today, Intel announced the deployment of the world’s largest neuromorphic computer yet, deployed at Sandia National Laboratories.

The computer, which uses Intel’s Loihi 2 processor, is code named Hala Point, and it supports up to 20 quadrillion operations per second with an efficiency exceeding 15 trillion 8-bit operations per second per watt – all in a package about the size of a microwave oven. It supports up to 1.15 billion neurons and 128 billion synapses, or about the level of an owl’s brain.
According to Intel, this is the first large-scale neuromorphic system that surpasses the efficiency and performance of CPU- and GPU-based architectures for real-time AI workloads. Loihi-based systems can perform AI inference and solve optimization problems 50 times faster than CPU and GPU architectures, the company said, while using 100 times less energy.

And the technology is available now, for free, to enterprises interested in researching its potential, says Mike Davies, director of Intel’s Neuromorphic Computing Lab.

To get started, companies should first join the Intel Neuromorphic Research Community, whose members include GE, Hitachi, Airbus, Accenture, Logitech, as well as many research organizations and universities – more than 200 participants as of this writing. There is a waiting list, Davies says. But participation doesn’t cost anything, he adds.
“The only requirement is that they agree to share their results and findings so that we can continue improving the hardware,” Davies says. Membership includes free access to cloud-based neuromorphic computing resources, and, if the project is interesting enough, free on-site hardware, as well.
“Right now, there’s only one Hala Point, and Sandia has it,” he says. “But we are building more. And there are other systems that are not as big. We give accounts on Intel’s virtual cloud, and they log in and access the systems remotely.”

Intel was able to build a practical, usable, neuromorphic computer by sticking with traditional manufacturing technology and digital circuits, he says. Some alternate approaches, such as analog circuits, are more difficult to build.

1713439139914.png


But the Loihi 2 processor does use many core neuromorphic computing principles, including combining memory and processing. “We do really embrace all the architectural features that we find in the brain,” Davies says.
The system can even continue to learn in real time, he says. “That’s something that we see brains doing all the time.”
Traditional AI systems train on a particular data set and then don’t change once they’ve been trained. In Loihi 2, however, the communications between the neurons are configurable, meaning that they can change over time.

The way that this works is that an AI model is trained – by traditional means – then loaded into the neuromorphic computer. Each chip contains just a part of the full model. Then, when the model is used to analyze, say, streaming video, the chip already has the model weights in memory so it processes things quickly – and only if it is needed. “If one pixel changes, or one region of the image changes from frame to frame, we don’t recompute the entire image,” says Davies.
The original training does happen elsewhere, he admits. And while the neuromorphic computer can update specific weights over time, it’s not retraining the entire network from scratch.
This approach is particularly useful for edge computing, he says, and for processing streaming video, audio, or wireless signals. But it could also find a home in data centers and high-performance computing applications, he says.
“The best class of workloads that we found that work very well are solving optimization problems,” Davies says. “Things like finding the shortest path through a map or graph. Scheduling, logistics – these tend to run very well on the architecture.”

The fact that these use cases overlap with those of quantum computing was a surprise, he says. “But we have a billion-neuron system shipped today and running, instead of a couple of qubits.”
Intel isn’t the only player in this space. According to Omdia’s Su, a handful of vendors, including IBM, have developed neuromorphic chips for cloud AI compute, while companies like BrainChip and Prophesee are starting to offer neuromorphic chips for devices and edge applications.

However, there are several major hurdles to adoption, he adds. To start with, neuromorphic computing is based on event-based spikes, which requires a complete change in programming languages.
There are also very few event-driven AI models, Su adds. “At the moment, most of them are based on conventional neural networks that are designed for traditional computing architecture.”

Finally, these new programming languages and computing architectures aren’t compatible with existing technologies, he says. “The technology is too immature at the moment,” he says. “It is not backwardly compatible with legacy architecture. At the same time, the developer and software ecosystem are still very small with lack of tools and model choices.”
*"


*A question for the techies among us: is it really like the analyst describes it here?
1713439502069.png

 
Last edited:
  • Like
  • Fire
  • Haha
Reactions: 15 users

Frangipani

Regular
Speaking of your legendary scroll - I’ve been meaning to ask you, what was it that ultimately convinced you to include Ericsson in your list shortly after posting the above? 🤔

To me, this is a little premature, as I understood those circulating lists to be about companies and institutions Brainchip has verifiably been engaged with and not about companies or institutions whose researchers may have experimented with AKD1000 without Brainchip even being aware (and possibly finding out from us here on TSE, once two or more of the 1000 Eyes have spotted a publication.) That list would evidently be much, much longer!

Unfortunately, FF’s question about Ericsson went unanswered during the recent Virtual Investor Roadshow, but he had nevertheless included Ericsson in his list and has been pushing this connection very hard ever since. Mind you, I am not saying at all they couldn’t be behind one of the NDAs, but currently there is not enough evidence for me to add them to one of those lists.

Personally, I’d much prefer to place Ericsson under the iceberg waterline for the time being, as we simply don’t know whether those Ericsson researchers who published that often quoted December 2023 paper (Towards 6G Zero-Energy Internet of Things…) have been engaged with our company in any official way.

Especially since Ericsson has been closely collaborating with Intel - they even established a joint lab in Santa Clara, CA in 2022.

https://www.ericsson.com/en/news/20...ach-new-milestone-with-tech-hub-collaboration

“The Ericsson-Intel Tech Hub has established itself as an incubator for cutting-edge advancements and new hardware technology exploration, achieving one milestone after another. Celebrating a series of firsts – including the recent successful Cloud RAN call using Intel’s future Xeon processor, codenamed Granite Rapids-D – the hub has been instrumental in the development of technologies that help service providers build open, resilient, sustainable and intelligent mobile networks.”

Around the same time some Ericsson researchers were playing with Akida last year, as is evident by the December 2023 publication @Fullmoonfever shared with us on Boxing Day, at least one other Ericsson employee who is a Senior Researcher for Future Computing Platforms was very much engaged with Intel and gave a presentation on “Neuromorphic Optimisers in Telecommunications Networks” using Loihi in July 2023.

View attachment 60752
View attachment 60753

View attachment 60768

And a couple of weeks ago, Intel published a podcast with Mischa Dohler, VP of Emerging Technologies at Ericsson in Silicon Valley and formerly a university professor at King’s College London, who was the Chair of the Wireless Communications Department there as well as Director of their Centre for Telecommunications Research.

He was already involved with Ericsson while still in the UK:

View attachment 60765


In this podcast, shared here on TSE a couple of times in recent weeks, Mischa Dohler - amongst other things - shared his thoughts on neuromorphic and quantum computing, and while it was a great endorsement of the benefits of neuromorphic technology in general, he was rather vague about the scope of its future integration at Ericsson.

See the podcast video in Bravo’s link:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-418271

And here you can read the transcript:


Camille Morhardt 15:39

Explain neuromorphic computing to us.

Mischa Dohler 15:42

It’s an entirely new compute paradigm, right? So in the computer science world, we talk about a Von Neumann architecture. So Von Neumann introduced that architecture, saying, “Hey, let’s have the compute engine, which we call CPU these days, decoupled from the memory where you store your information, and then connect it with a little bus there.” And that takes a lot of time to actually get this information forth and back between memory and compute. It costs a lot of energy.

Now, along comes neuromorphic, where you have actually completely new materials, which allow you to put computing and memory in the very same instance. So you don’t need to ship all that information forth and back, you can do it in many ways. One way is just to use very new material, kind of meta materials to make that happen. And it turns out by doing this, you save a lot of energy, because you know, you can suddenly just maintain part of the little chip infrastructure, which you need to do certain calculus rather than keeping everything powered at the level of ones and zeros as we deal with our traditional infrastructure.

So it turns out that bringing this memory and compute together, we save a lot of energy. Then people came along said, “Hey, why don’t we build entirely new ways of calculating things?” And the neuromorphic compute fabric allows us to do operations without using energy for multiplications. And multiplications, we need that a lot. Right? So we roughly have additions and multiplications. Now multiplications take about a 10th of the energy today in neuromorphic. Put it all together, and suddenly very complex operations like AI consume a million times less energy than our traditional CPU fabric and GPU fabric. And everyone was, “Hey, why don’t we use that?” And everybody got very excited about this, of course, loads of technology challenge this, like the very early kind of CPU years in a way.

But you know, companies like Intel, really pushing this very hard and as a great fabric, and other companies out there. And I’m trying to understand, where are we commercially? Would that make sense to implement that? You know, and our gear, we’ll have 6G gear, which we’ll have by then at the end of this decade.

Camille Morhardt 17:56

So how is neuromorphic computing and a roll into 6G?

Mischa Dohler 18:00
So we still don’t know; we’re still investigating as a community. I’m not saying Ericsson per se, but as a community trying to understand where will it be. What we are starting to see, 6G will really be about a lot more antenna elements. So we call this ultra-massive mime, whatever you want to call it at the moment, we may have a 64 elements on the roof. And then maybe you know, you have like six maybe in the phone, “Hey, what if we scale this up to 1,000 antenna elements on the roof?” And then suddenly, you start thinking, “Hey, you know, if I have to power all these 1,000 elements, and connect all the processing, in addition, my bandwidths are getting wider. More users are coming on. My compute energy, you know, will just go through the roof.” And we’ve done the calculus, it’s really crazy. So there’s no way we can do that. So we need new ways of dealing with that energy increase. Neuromorphic comes along. It’s one of the contenders. So it’s not the only one. There’s other stuff as well, we’re looking at. But neuromorphic essentially gives you the ability to really bring down this energy envelope, whilst not jeopardizing the performance on that.

So it turns out that neuromorphic cannot be used for all algorithmic families. So we’re trying to understand what can be done, what cannot be done. Should it be really integral as part of our stack to process data? Or should it sit on the side as we like to do it today?
You know, this is publicly available, then we just call certain acceleration functions when we need it, and then continue with processing. So a lot of question marks, and that makes it so exciting because we need to take very difficult strategic decisions very quickly, to make sure we remain competitive towards the end of this decade.

(…)


So quantum is just such a fascinating fabric, and it’s all evolving. The only downside it has at the moment is it’s extremely energy consuming. So contrast that with neuromorphic, which consumes almost zero, quantum is you need to cool it bring it down. So we need a lot of innovation there. And we also need to make sure that if we use a quantum computer, the problem is so hard that we would need like trillions of years to do it on a normal fabric because then the whole energy story makes sense. It needs to be sustainable.

Camille Morhardt 22:36

It sounds like rather than a consolidation of compute technologies, you’re looking at a heterogeneous mix of compute technologies, depending on the function or the workload, is that accurate?

Mischa Dohler 22:46

Absolutely. It’s a very great observation, Camille. That’s exactly what we’re looking at. And you know, specifically in the heterogeneous setting of quantum and traditional compute, we just published a blog on how we use quantum and traditional compute as a hybrid solution to find the optimum antenna tilt. It’s a very hard problem when you have loads of antennas, many users; we work with heuristics so far, so heuristics are algorithmic approaches, which aren’t optimum, but try to get as close as to the optimum, we can do that. With a quantum solver, suddenly, you get much closer to the true optimum in a much quicker time, and we’re able to do that much better than just a non-heterogeneous solution.

Camille Morhardt 23:24

If you had just a huge amount of funding to look at something that’s of personal interest to you, what would it be?

Mischa Dohler 23:30

You know, I would probably try a little bit what we tried to do in London, push the envelope on both, really. So you know, try to understand how can we bring the innovative element of technology together with a creative element of the arts? And really get both communities start thinking, how can they disrupt their own ecosystems; it’s a very general view, but you know, usually it comes out when you bring them together. And we have new stuff coming out now in technology, and I think this envelope between accelerated fabric like neuromorphic, and quantum is one, AI is another and robotics, is yet another, specifically soft robotics. So it’s not only about hard robots walking but actually soft robots which are quite useful in medicine, many other applications. So that’s the technology envelope, and the connectivity connecting it all–5G, 6G, etc.

And then on the artistic side, we have new ways of procuring the arts–whether you use let’s say, glasses, new ways of stages, haptic equipment, you know, creating immersive experiences, creating emotional bonds, creating a digital aura in arts, which we couldn’t do before, right. So before you would go in an exhibition, there is nothing before going exhibition, great experience going out of the exhibition, and then you forget about it. So building these digital aura trails, I think, you know, this is where technology can really help.


So loads of opportunities there. It would really bring arts back into the curriculum, bring it back into schools, bring it back into universities, make it an integral part of our educational process. That’s really what I’d love to see.

Camille Morhardt 24:58

What is the soft robot?

Mischa Dohler 25:00

A soft robot is a robot which mimics the way how, let’s say an octopus walks. It’s very soft. There’s no hard element there. And we love to explore that world because, you know, nobody can be very close to real big robots. So I’m not sure you’ve ever been close to one. I had one at King’s ABB. These are beasts, these are industrial things, you know, you don’t really trust it. If somebody hacks in there or something happens, you know, they just swing and you’re just basically toast. But the soft robot can really enable that coexistence I think with humans, use it in surgery. So the ability to control soft tissue, you know, like an octopus, I think or a snake. That’s the title of inspirational biological phenomena we use to design that.

Camille Morhardt 25:42

Well, Mischa, everything from octopuses to Apple Vision Pro to neuromorphic computing. Thank you so much for your time today. It’s been a fascinating conversation.

Mischa Dohler 25:53

My pleasure, Camille. Thank you for having me.


[I happened to notice the transcript is not 100% accurate, eg it misses him saying “So it’s still a big question mark.” at 17:24 min before continuing with “So we still don’t know”…
Oh, and I thought his remarks about soft robots were very intriguing! ]
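One more aside on Dohler's remark about doing "operations without using energy for multiplications": with binary spike activations, the usual multiply-accumulate collapses into simply adding up the weights of whichever inputs fired. A tiny toy sketch of my own (nothing to do with Ericsson's or Intel's code):

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=16)        # one neuron's input weights
spikes = rng.random(16) < 0.25       # binary spike activations (fired / did not fire)

mac_result = float(np.dot(weights, spikes))   # conventional multiply-accumulate
add_only = float(weights[spikes].sum())       # spike-driven version: just sum the selected weights

assert np.isclose(mac_result, add_only)       # identical result, but no multiplications were needed
```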


View attachment 60762 View attachment 60763

Here is a follow-up of my post on Ericsson (see above).

When I recently searched for info on Ericsson's interest in neuromorphic technology besides the Dec 2023 paper, in which six Ericsson researchers described how they had built a prototype of an AI-enabled ZeroEnergy-IoT device utilising Akida, I not only came across an Ericsson Senior Researcher for Future Computing Platforms who was very much engaged with Intel's Loihi (he even gave a presentation at the Intel Neuromorphic Research Community's Virtual Summer 2023 Workshop), but also an Intel podcast with Ericsson's VP of Emerging Technologies, Mischa Dohler.

I also spotted the following LinkedIn post by a Greek lady, who had had a successful career at Ericsson spanning more than 23 years before taking the plunge into self-employment two years ago:

F795C7C1-84CE-4BCC-B837-444720BB3FF2.jpeg


C686B5DF-3FC4-44E1-9EE6-F5C067955E51.jpeg



361eabcc-0b13-4ea1-a40a-fe23e802b8a1-jpeg.61045



Since Maria Boura concluded her post by sharing that very Intel podcast with Mischa Dohler mentioned earlier, my gut feeling was that those Ericsson 6G researchers she had talked to at MWC (Mobile World Congress) 2024 in Barcelona at the end of February had most likely been collaborating with Intel, but a quick Google search didn’t come up with any results at the time I first saw that post of hers back in March.

Then, last night, while reading an article on Intel’s newly revealed Hala Point (https://www.eejournal.com/industry_...morphic-system-to-enable-more-sustainable-ai/), there it was - the undeniable evidence that those Ericsson researchers had indeed been utilising Loihi 2:

“Advancing on its predecessor, Pohoiki Springs, with numerous improvements, Hala Point now brings neuromorphic performance and efficiency gains to mainstream conventional deep learning models, notably those processing real-time workloads such as video, speech and wireless communications. For example, Ericsson Research is applying Loihi 2 to optimize telecom infrastructure efficiency, as highlighted at this year’s Mobile World Congress.”

The blue link connects to the following article on the Intel website, published yesterday:


Ericsson Research Demonstrates How Intel Labs’ Neuromorphic AI Accelerator Reduces Compute Costs​

Philipp_Stratmann
Employee
04-17-2024
Philipp Stratmann is a research scientist at Intel Labs, where he explores new neural network architectures for Loihi, Intel’s neuromorphic research AI accelerator. Co-author Péter Hága is a master researcher at Ericsson Research, where he leads research activities focusing on the applicability of neuromorphic and AI technologies to telecommunication tasks.

Highlights
  • Using neuromorphic computing technology from Intel Labs, Ericsson Research is developing custom telecommunications AI models to optimize telecom architecture.
  • Ericsson Research developed a radio receiver prototype for Intel’s Loihi 2 neuromorphic AI accelerator based on neuromorphic spiking neural networks, which reduced the data communication by 75 to 99% for energy efficient radio access networks (RANs).
  • As a member of Intel’s Neuromorphic Research Community, Ericsson Research is searching for new AI technologies that provide energy efficiency and low latency inference in telecom systems.

Using neuromorphic computing technology from Intel Labs, Ericsson Research is developing custom telecommunications artificial intelligence (AI) models to optimize telecom architecture. Ericsson currently uses AI-based network performance diagnostics to analyze communications service providers’ radio access networks (RANs) to resolve network issues efficiently and provide specific parameter change recommendations. At Mobile World Congress (MWC) Barcelona 2024, Ericsson Research demoed a radio receiver algorithm prototype targeted for Intel’s Loihi 2 neuromorphic research AI accelerator, demonstrating a significant reduction in computational cost to improve signals across the RAN.

In 2021, Ericsson Research joined the Intel Neuromorphic Research Community (INRC), a collaborative research effort that brings together academic, government, and industry partners to work with Intel to drive advances in real-world commercial usages of neuromorphic computing.

Ericsson Research is actively searching for new AI technologies that provide low latency inference and energy efficiency in telecom systems. Telecom networks face many challenges, including tight latency constraints driven by the need for data to travel quickly over the network, and energy constraints due to mobile system battery limitations. AI will play a central role in future networks by optimizing, controlling, and even replacing key components across the telecom architecture. AI could provide more efficient resource utilization and network management as well as higher capacity.

Neuromorphic computing draws insights from neuroscience to create chips that function more like the biological brain instead of conventional computers. It can deliver orders of magnitude improvements in energy efficiency, speed of computation, and adaptability across a range of applications, including real-time optimization, planning, and decision-making from edge to data center systems. Intel's Loihi 2 comes with Lava, an open-source software framework for developing neuro-inspired applications.

Radio Receiver Algorithm Prototype

Ericsson Research’s working prototype of a radio receiver algorithm was implemented in Lava for Loihi 2. In the demonstration, the neural network performs a common complex task of recognizing the effects of reflections and noise on radio signals as they propagate from the sender (base station) to the receiver (mobile). Then the neural network must reverse these environmental effects so that the information can be correctly decoded.

During training, researchers rewarded the model based on accuracy and the amount of communication between neurons. As a result, the neural communication was reduced, or sparsified, by 75 to 99% depending on the difficulty of the radio environment and the amount of work needed by the AI to correct the environmental effects on the signal.

Loihi 2 is built to leverage such sparse messaging and computation. With its asynchronous spike-based communication, neurons do not need to compute or communicate information when there is no change. Furthermore, Loihi 2 can compute with substantially less power due to its tight compute-memory integration. This reduces the energy and latency involved in moving data between the compute unit and the memory.

Like the human brain’s biological neural circuits that can intelligently process, respond to, and learn from real-world data at microwatt power levels and millisecond response times, neuromorphic computing can unlock orders of magnitude gains in efficiency and performance.

Neuromorphic computing AI solutions could address the computational power needed for future intelligent telecom networks. Complex telecom computation results must be produced in tight deadlines down to the millisecond range. Instead of using GPUs that draw substantial amounts of power, neuromorphic computing can provide faster processing and improved energy efficiency.

Emerging Technologies and Telecommunications

Learn more about emerging technologies and telecommunications in this episode of InTechnology. Host Camille Morhardt interviews Mischa Dohler, VP of Emerging Technologies at Ericsson, about neuromorphic computing, quantum computing, and more.
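For the technically inclined, the part above about rewarding the model "based on accuracy and the amount of communication between neurons" amounts to adding an activity (sparsity) penalty to the training loss. Below is a minimal PyTorch-style sketch of that idea; the layer sizes, noise level and penalty weight are made-up illustrations, not Ericsson's actual receiver model:

```python
import torch
import torch.nn as nn

# Toy "receiver" that learns to undo channel noise while being penalised for
# dense neural activity (a stand-in for "amount of communication between neurons").
enc = nn.Linear(32, 64)
dec = nn.Linear(64, 32)
optimizer = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
sparsity_weight = 1e-3                          # how strongly low activity is "rewarded"

for step in range(200):
    tx = torch.randn(256, 32)                   # stand-in for transmitted symbols
    rx = tx + 0.3 * torch.randn_like(tx)        # stand-in for reflections and noise on the channel

    activity = torch.relu(enc(rx))              # hidden activations ~ inter-neuron traffic
    estimate = dec(activity)

    loss = nn.functional.mse_loss(estimate, tx) + sparsity_weight * activity.abs().mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Raising the penalty weight drives the hidden activity toward zero, which is the same accuracy-versus-communication trade-off the article describes.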



While Ericsson being deeply engaged with Intel even in the area of neuromorphic research doesn’t preclude them from also knocking on BrainChip’s door, this new reveal reaffirms my hesitation about adding Ericsson to our list of companies above the waterline, given the lack of any official acknowledgment by either party to date.

So to sum it up: Ericsson collaborating with Intel in neuromorphic research is a verifiable fact, while an NDA with BrainChip is merely speculation so far.
 

  • Like
  • Love
Reactions: 23 users