BRN Discussion Ongoing

Frangipani

Top 20
Could one of our German friends please have a look at this video posted 4 months ago? It has to do with MB and includes neuromorphic content, but I can't figure out a translation for it other than the title etc.

I don't believe I could pick up Akida or BRN directly, but maybe/hopefully a German speaker's review may shed additional light?

TIA.

Dr. Dominic Blum, Artificial Intelligence Research at Mercedes Benz AG, reports on the developments at Mercedes on the topic of autonomous driving and assisted driving. In detail:
Driving assistance systems/Sensors
Sensor Action Integration of AI into the active chain
Problems: Energy consumption & Latency
Possible solutions: Neuromorphic computing (NMC), NMC in the active chain, event-based cameras, spiking neural networks





Hi @Fullmoonfever,

FYI: I already posted in detail about Dominik Blum’s presentation back in October:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-439352

Here are two excerpts:

“(…) There was one slide that showed - amongst other things - various brands of neuromorphic hardware (ABR, BrainChip, Innatera, Intel, SynSense), including Akida 1.0 and 2.0, but none was mentioned by name. Same with the slide showing the EQXX: there was merely mention of the voice assistant’s keyword spotting having been realised on “a neuromorphic chip”, which was said to have considerably improved this function’s energy efficiency.

(…) To me the gist was that while MB considers neuromorphic computing a very promising technology regarding gains in energy efficiency (which will become more and more important on the path towards cars becoming fully autonomous), they are still at a very early stage of research - Dominik Blum literally said so. There you go, you heard it from the horse’s mouth. So don’t expect neuromorphic technology in any MB serial production cars in the near future. At least that’s what I took away from that presentation (…).”


Have a good weekend

Frangipani
 
  • Like
Reactions: 8 users


Diogenese

Top 20
So yes, in my humble opinion we're with Merc for sure


Could be!
Who knows?
There's something due any day
I will know right away
Soon as it shows
 
  • Haha
  • Like
  • Love
Reactions: 17 users
 

CHIPS

Regular
Would you rather have a ChatGPT thread or a @Frangipani thread? I know what I would prefer: selfless people sharing their own amazing, in-depth, validated research, rather than a thread based on ChatGPT or other baseless rubbish.

Thank you. I do not read the ChatGPT posts anymore, I just scroll down to the next post.

Finding new information with it is fine, but constantly posting the whole lot and hoping it is right is just annoying. Bravo said a few days ago that she does not have the technical background to judge the results. Why post it then? I'd rather read real research than good guesses.
 
  • Like
  • Love
Reactions: 8 users

IloveLamp

Top 20
1000011084.jpg
1000011087.jpg
1000011090.jpg



@Fullmoonfever
 
  • Like
  • Love
  • Fire
Reactions: 24 users

IloveLamp

Top 20

1000011093.jpg
 
  • Like
  • Fire
  • Love
Reactions: 37 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Thank you. I do not read the ChatGPT posts anymore, I just scroll down to the next post. (…) I rather read real research instead of good guesses.


Rude.

You know there's nothing stopping you from contributing your own research here, instead of just playing town critic.

What both you and @TheDrooben seem to have missed is that I posted an excerpt from an article that nobody else had picked up on, which is what formed the basis of my ChatGPT query. Ironically, that post itself was in response to a comment about ChatGPT being “useless,” to show how it can be used in a constructive way.

The excerpt I shared highlighted Arm stating the Mali GPU and AI accelerator are “optional” in their latest platform. That’s a critical point, because it means chipmakers can slot in whichever accelerator they choose.


Screenshot 2025-09-05 at 10.57.43 pm.png





So far, nobody else has discussed this or Arm’s Zena platform, and I’ve been trying to connect the dots.

In a previous post #83,058, I commented on Rene Haas (Arm's CEO) being asked whether Arm would consider making its own accelerator, and how that ties into this more recent “optional accelerator” comment.

Likewise, in another previous post #83,075, I pointed out how Paul Williamson (Senior Vice President and General Manager, IoT Line of Business) also hinted about Arm potentially needing a higher-performance NPU.

The interesting angle for me is whether Arm might be weighing RTL versus chiplet integration for Akida/TENNs. That’s what I’m trying to get at, even if I lack the technical depth to do all the heavy lifting myself.

Thanks to ChatGPT, I’ve learned that AI accelerators can be integrated either a) as RTL blocks in a monolithic SoC, or b) dropped in as chiplets using frameworks like CSA/UCIe.

ChatGPT is also helping me to ascertain how Akida/TENNs could slot into that optional accelerator role, either as a companion block alongside Ethos-U85/M85, or a chiplet via Arm’s ecosystem. And how Akida 2 + TENNs versus Akida 3 + TENNs might fit into Arm’s longer-term chiplet ambitions.

That’s the line of thinking behind my posts. If it’s not appreciated, fair enough. Maybe I should just keep my research to myself.
 
  • Like
  • Love
  • Fire
Reactions: 46 users
Rude.

You know there's nothing stopping you from contributing your own research here, instead of just playing town critic. (…) That’s the line of thinking behind my posts. If it’s not appreciated, fair enough. Maybe I should just keep my research to myself.
Nooooo please don't do that Bravo. I very much appreciate your help in searching for us brn's possible opportunities 🙏. Don't let the dribble from some get in the way of great research.
 
  • Like
  • Love
  • Fire
Reactions: 28 users

JB49

Regular

View attachment 90786
Thanks Lamp! I was a bit worried when Qualcomm first acquired Edge Impulse and the BrainChip section of their website said "At this time the training of BrainChip models is suspended".

I was on the Edge Impulse website about two weeks ago, and it still had the exact same comment. But I checked the Edge Impulse website again today; it seems to have been updated with a new layout, that note on BrainChip is now gone, and they have us listed as an official partner: https://edgeimpulse.com/ecosystem-partners/brainchip

And based on today's announcement, it looks as though the relationship is alive and well!
 
  • Like
  • Fire
  • Love
Reactions: 36 users

Frangipani

Top 20

Please do NOT rely on these kinds of GenAI translations!

While the translations of what the slide says and of what Dominik Blum says from 9:08 min are fine, the other excerpts contain inaccurate sentences and random additional words that cannot be found in the original (e.g. there is no “for example” in what he says from 13:13 min, he does not say “great” after “thank you” at the end, and at 13:31 min he does not talk about “making” neuromorphic computing “event-based” - rather, he says that event-based cameras are “basically predestined to be combined with spiking neural networks”). Some are even outright false, like the claim that he said “With normal cameras, every single pixel is independent of the others”. Which obviously doesn’t make sense, as conventional cameras are frame-based.

What he really says is: “Die Event-Based-Kamera arbeitet grundlegend anders als normale, uns bekannte Kameras. Jedes einzelne Pixel ist unabhängig voneinander, und das hat bestimmte Vorteile.”
“The event-based camera works in a fundamentally different way from normal cameras we are familiar with. Each individual pixel is independent of each other, and that has certain advantages.”

The whole paragraph from 12:32 min is also badly translated. [I’m well aware my translation is not perfect, but definitely heaps better than ChatGPT’s…]

Its context is a presentation slide titled
“NMC is a young field of research … with many open questions, e.g.”
Dominik Blum briefly addresses the following three questions: Which neuron model do I implement? How do I interpret spike data? How do I train spiking neural networks?
And then he adds a fourth point in reference to the table showing various neuromorphic hardware: “I can (= one can/you can) occupy myself (oneself/yourself) with developing neuromorphic hardware, and of course there are many more points, those were just the first ones.
I asked myself what was the first thing I had given thought to/investigated.
[Next presentation slide with Gartner’s Impact Radar comes up.]
Yes, erm, okay… Gartner placed neuromorphic computing in his Impact Radar where? [Looks at presentation slide.] To “high” “mass”, in three to six years. Anyway, in any case you can imagine that it’s gonna be an exciting future with the development…”


Original audio [fairly word-for-word, but best listen to it yourself; it all sounds much more natural there than in a transcription of spoken language like this one]:

„Ich kann mich mit neuromorpher Hardware beschäftigen, mit der Entwicklung. Also, und es gibt natürlich noch viel mehr Punkte, einfach - das waren so die ersten. Ich hab’ überlegt, was war das erste, worüber ich nachgedacht habe. [Nächste Präsentationsfolie mit Gartners Impact Radar erscheint.] Ja, ähm, gut. Gartner hat, äh, in seinem Impact Radar Neuromorphic Computing, äh, wohin gesetzt? [Guckt auf die Präsentationsfolie.] Auf „high“ „mass“, in drei bis sechs Jahren. Wie auch immer, auf jeden Fall kann man sich vorstellen, dass das eine spannende Zukunft, äh, gibt mit der Entwicklung …“
 
  • Like
  • Fire
Reactions: 9 users

charles2

Regular
Rude.

You know there's nothing stopping you from contributing your own research here, instead of just playing town critic.

What both you and @TheDrooben seem to have missed is that I posted an excerpt from an article that nobody else had picked up on, which is what formed the basis of my ChatGPT query. Ironically, that post itself was in response to a comment about ChatGPT being “useless,” to show how it can be used in a constructive way.

The excerpt I shared highlighted Arm stating the Mali GPU and AI accelerator are “optional” in their latest platform. That’s a critical point, because it means chipmakers can slot in whichever accelerator they choose.


View attachment 90791




So far, nobody else has discussed this or Arm’s Zena platform and I’ve been trying to connect the dots

In a previous post #83,058, I commented on Renee Hass (Arm's CEO) being asked whether Arm would consider making its own accelerator, and how that ties into this more recent “optional accelerator” comment.

Likewise, in another previous post #83,075, I pointed out how Paul Williamson (Senior Vice President and General Manager, IoT Line of Business) also hinted about Arm potentially needing a higher-performance NPU.

The interesting angle for me is whether Arm might be weighing RTL versus chiplet integration for Akida/TENNs. That’s what I’m trying to get at, even if I lack the technical depth to do all the heavy lifting myself.

Thanks to ChatGPT, I’ve learned that AI accelerators can be integrated as a) RTL blocks in a monolithic SoC, or b) they can be dropped in as chiplets using frameworks like CSA/UCIe.

ChatGPT is also helping me to ascertain how Akida/TENNs could slot into that optional accelerator role, either as a companion block alongside Ethos-U85/M85, or a chiplet via Arm’s ecosystem. And how Akida 2 + TENNs versus Akida 3 + TENNs might fit into Arm’s longer-term chiplet ambitions.

That’s the line of thinking behind my posts. If it’s not appreciated, fair enough. Maybe I should just keep my research to myself.
Bravo
You just keep being you. I find your posts informative and thoughtful and the product of considerable effort.
Ignore the static.
Best from Mexico
 
  • Like
  • Love
  • Fire
Reactions: 23 users

Frangipani

Top 20
Thanks Lamp! (…) But I've checked out on the Edge Impulse website today, which seems to have been updated with a new layout, and that note on Brainchip is now gone and they have us listed as an official partner. https://edgeimpulse.com/ecosystem-partners/brainchip (…)

No, the 25 March 2025 note that the training of BrainChip models has been suspended is not gone, but embarrassingly still very much alive on the Edge Impulse website… It’s just on a different webpage (where it’s always been):



1E2E5BF1-F124-4BCF-80D3-BD8BFFBD4F19.jpeg
 
  • Like
  • Sad
  • Fire
Reactions: 10 users

JB49

Regular
No, the 25 March 2025 note that the training of BrainChip models has been suspended is not gone, but embarrassingly still very much alive on the Edge Impulse website… It’s just on a different webpage (where it’s always been):



View attachment 90792
Apologies, you're right. Nonetheless, their LinkedIn post today reaffirms that the partnership has not been affected by the acquisition by Qualcomm.
 
  • Like
  • Fire
Reactions: 14 users

Diogenese

Top 20
Rude.

You know there's nothing stopping you from contributing your own research here, instead of just playing town critic. (…) That’s the line of thinking behind my posts. If it’s not appreciated, fair enough. Maybe I should just keep my research to myself.
Hi Bravo,

I find your Chats stimulating and your skill in drawing out additional information unfailingly enhances the discussion. This is a discussion forum, so, whenever Chatty disagrees with me it stimulates further research and analysis. In a recent debate, I destroyed Chatty's round-earth-ism furphy by graphically demonstrating that, if the earth were spheroidal, it would be downhill in all directions relative to the observer, the observer being an important player in the 6th degree of Einstein's family tree of relativity.
 
  • Like
  • Haha
Reactions: 14 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Rude.

You know there's nothing stopping you from contributing your own research here, instead of just playing town critic. (…) That’s the line of thinking behind my posts. If it’s not appreciated, fair enough. Maybe I should just keep my research to myself.


I forgot to mention...

Screenshot 2025-09-06 at 11.59.12 am.png




EXTRACT ONLY

Screenshot 2025-09-06 at 11.56.19 am.png







 
  • Like
  • Love
  • Fire
Reactions: 16 users
Features from the EQXX concept car have been implemented into the GLC EQ, as per Auto Express. One feature that stands out is in the interior:

The curved 8K screen that stretches 47.2 inches between the A-pillars makes the most of the EQXX's crisp and sparkling game-engine-powered graphics, including a realtime 3D navigation display. The mini-LED backlit screen features more than 3000 local dimming zones, which means it only consumes power as and when specific parts of the screen are in use.

🤔
 
  • Like
  • Fire
  • Love
Reactions: 9 users
Features from the EQXX concept car have been implemented into the GLC EQ, as per Auto Express. (…) The mini-LED backlit screen features more than 3000 local dimming zones, which means it only consumes power as and when specific parts of the screen are in use.

🤔
And hopefully it contains 3000 Akida chips 😀
 
  • Like
  • Fire
  • Haha
Reactions: 7 users

Esq.111

Fascinatingly Intuitive.

Attachments

  • 20250906_152004.jpg
    20250906_152004.jpg
    2 MB
  • Like
  • Fire
  • Love
Reactions: 10 users

em1389

Member
Unsure if this has been posted, as I haven't checked in in a while.

 
  • Like
Reactions: 3 users