BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
Mercedes is still listed as a customer… the question is whether they want to improve their own AI with this move or whether we are in the game… time will tell.

I personally think Akida 2 fits the Mercedes + Samsung + LG direction exceptionally well.

If Mercedes is shifting toward a heterogeneous compute architecture for next-gen AI-defined vehicles (which the article strongly suggests), then they’ll need both heavyweight compute for generative models and an ultra-low-power layer for continuous sensing. That second layer is exactly where Akida excels.

In-cabin AI will be multimodal and partly generative, combining voice, gaze, gesture, sound events and cabin behaviour. A single large SoC running all of that 24/7 isn’t practical because it would drain power, create heat and require the system to stay constantly awake.
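Just to put rough numbers on that (illustrative figures only, not from any datasheet): keeping a multi-watt central SoC awake around the clock costs far more energy than a sub-watt always-on layer doing the listening. A quick back-of-envelope sketch:

```python
# Back-of-envelope energy comparison (illustrative numbers only, not datasheet values).
# Assumes a central SoC draws ~15 W when awake and an always-on NPU draws ~0.3 W.
SOC_ACTIVE_W = 15.0       # hypothetical central SoC power while awake
NPU_ALWAYS_ON_W = 0.3     # hypothetical sub-watt always-on sensing layer
HOURS_PER_DAY = 24

soc_wh_per_day = SOC_ACTIVE_W * HOURS_PER_DAY       # 360 Wh/day if it can never sleep
npu_wh_per_day = NPU_ALWAYS_ON_W * HOURS_PER_DAY    # 7.2 Wh/day for continuous sensing

print(f"SoC always awake:  {soc_wh_per_day:.1f} Wh/day")
print(f"Always-on layer:   {npu_wh_per_day:.1f} Wh/day")
print(f"Ratio:             {soc_wh_per_day / npu_wh_per_day:.0f}x")
```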

With Akida 2 + TENNs now being positioned as “edge GenAI,” the fit becomes even clearer IMO. Future cabins will rely on small, efficient SLMs that can run locally for privacy, latency and energy reasons. Natural voice interaction, context-aware assistance or conversational feedback from a Samsung or LG in-cabin system won’t require huge LLMs; they’d rely on compact generative or state-space models, which is precisely what Akida 2 is built for.
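For intuition on why state-space-style models suit always-on streams (a generic sketch, not BrainChip's TENNs implementation): each new sample updates a fixed-size hidden state, so compute and memory per step stay constant however long the audio or sensor stream runs, unlike attention over an ever-growing context.

```python
import numpy as np

# Minimal linear state-space recurrence: x_t = A @ x_{t-1} + B * u_t, y_t = C @ x_t.
# Generic illustration of constant per-step cost; not the TENNs architecture itself.
rng = np.random.default_rng(0)
state_dim = 16

A = np.eye(state_dim) * 0.95              # toy state-transition matrix (stable decay)
B = rng.normal(size=(state_dim, 1))       # input projection
C = rng.normal(size=(1, state_dim))       # output projection

x = np.zeros((state_dim, 1))              # fixed-size hidden state, reused every step
for t in range(10_000):                   # stream length never grows memory use
    u = rng.normal()                      # one incoming audio/sensor sample
    x = A @ x + B * u                     # constant work per step
    y = C @ x                             # scalar output, e.g. a keyword score
print("final output:", y.item())
```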

So while Samsung and LG will handle the heavy compute and platform electronics, Akida could take on a unique role: self-learning, sub-watt, always-on intelligence that can watch, listen and adapt without waking the main processor. That’s ideal for wake-word detection, anomaly detection, driver and cabin monitoring, and personalised behaviour.
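Conceptually, that split could look like the loop below. This is purely a hypothetical sketch: npu_keyword_score and wake_main_soc are made-up placeholders, not BrainChip or vehicle-platform APIs. The point is just that the low-power layer screens every frame and only a rare positive detection wakes the heavy stack.

```python
import random

# Hypothetical handoff pattern: an always-on, low-power detector gates the main SoC.
# npu_keyword_score() and wake_main_soc() are illustrative placeholders, not real APIs.

WAKE_THRESHOLD = 0.9  # assumed confidence threshold for waking the main processor

def npu_keyword_score(audio_frame: bytes) -> float:
    """Stand-in for an on-NPU wake-word / anomaly score in [0, 1]."""
    return random.random()  # placeholder: a real system would run a tiny model here

def wake_main_soc(event: str) -> None:
    """Stand-in for handing the event off to the heavyweight in-cabin stack."""
    print(f"waking main SoC for: {event}")

def always_on_loop(frames) -> None:
    for frame in frames:
        score = npu_keyword_score(frame)   # sub-watt layer inspects every frame
        if score > WAKE_THRESHOLD:         # main SoC stays asleep the vast majority of the time
            wake_main_soc("possible wake word")

always_on_loop(frames=[b"\x00" * 320 for _ in range(50)])
```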

If Mercedes does move to a multi-vendor AI stack, there’s potential for BrainChip’s neuromorphic tech to become the specialised piece that ties the whole system together. At least, this is what I'm hoping.

IMO. DYOR.
 
Here is a comment from LinkedIn regarding heterogeneous compute and BrainChip:
 

Attachment: Screenshot_20251109_174915_LinkedIn.jpg