"Hi JD,
For what it's worth, I asked ChatGPT for a neutral take on this.
What I found interesting was that it wove Anduril's headset into the response. I hadn't prompted it to, but earlier in the day, in a separate chat, I had asked ChatGPT about boards or integration possibilities and whether an Akida 1500 + Arm Cortex-M85 module (as an example) could power something like Anduril's helmet.
ChatGPT's Response
What “supports 128 MACs per node” most likely means
- Each Akida node has a local 4×4 MAC micro-engine (128 MACs) it can invoke for certain ops.
- Those MACs co-exist with the spiking compute (CNP/FNP or equivalent). They're there so models can include non-spiking layers/ops without leaving the chip (see the quick throughput sketch below).
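As a rough sense of scale for that figure, here is a back-of-the-envelope peak-throughput estimate in Python; the clock frequency and node count are purely illustrative assumptions, not published Akida specifications.

```python
# Back-of-the-envelope peak MAC throughput for a hypothetical configuration.
# Clock frequency and node count are assumptions for illustration only,
# not published Akida specifications.
MACS_PER_NODE = 128   # the figure under discussion
NUM_NODES = 8         # hypothetical node count
CLOCK_HZ = 300e6      # hypothetical 300 MHz clock

peak_macs_per_s = MACS_PER_NODE * NUM_NODES * CLOCK_HZ
print(f"Peak: {peak_macs_per_s / 1e9:.1f} GMAC/s")  # 307.2 GMAC/s under these assumptions
```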
Why add MACs if you have an SNN?
Because many modern edge models are hybrid. MACs cover things SNNs don't excel at (or that are simpler to drop in as-is); a minimal sketch follows the list below:
- Conv/pointwise (1×1) / depthwise conv blocks
- Linear/projection layers (e.g., classifier heads, QKV in tiny attention)
- Normalisation / pooling / residual glue
- Pre/post-processing (e.g., short FIRs, feature projections)
- Direct reuse of existing INT4/INT8 CNN chunks without full SNN conversion
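To make those layer types concrete, here is a minimal sketch in plain Keras (not BrainChip's MetaTF toolchain; shapes and sizes are arbitrary) that strings together exactly the kinds of conventional, MAC-friendly ops listed above:

```python
# Minimal sketch of MAC-friendly "conventional" layers: depthwise conv,
# pointwise (1x1) conv, normalisation, pooling, and a linear classifier head.
# Plain Keras for illustration only; not BrainChip's MetaTF flow.
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(64, 64, 3))
x = layers.DepthwiseConv2D(3, padding="same")(inputs)  # depthwise conv block
x = layers.Conv2D(32, 1)(x)                            # pointwise (1x1) conv
x = layers.BatchNormalization()(x)                     # normalisation "glue"
x = layers.ReLU()(x)
x = layers.GlobalAveragePooling2D()(x)                 # pooling
outputs = layers.Dense(10)(x)                          # linear classifier head

model = tf.keras.Model(inputs, outputs)
model.summary()
```

On a hybrid target, blocks like these could stay as-is on the local MACs while event-driven, temporal layers run on the spiking engine.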
What a hybrid pipeline might look like in a helmet
Sensors
- Front/rear frame cameras → light MAC preproc (resize/1×1 conv).
- Event camera/radar/audio → feed SNN/TENNs directly.
Perception
- SNN handles temporal/gesture/track (event streams).
- MAC handles small CNN blocks or projection heads.
Fusion & decisions on the host MCU (e.g., Cortex-M85); a sketch of this split follows below.
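Purely to make that division of labour concrete, here is a hypothetical Python sketch; every function name is invented for illustration, and none of this is a real Akida, helmet, or Cortex-M85 API.

```python
# Hypothetical pipeline split (all names invented for illustration):
# event-driven streams -> spiking engine; frame preproc and projection
# heads -> local MACs; fusion & decisions -> host MCU.

def snn_temporal_track(event_stream):
    """Stand-in for the spiking engine handling sparse temporal data."""
    return {"gesture": "wave", "track_id": 7}      # dummy output

def mac_cnn_head(frame):
    """Stand-in for small CNN blocks / projection heads on the local MACs."""
    return {"objects": ["door", "stairs"]}         # dummy output

def host_fuse(snn_out, mac_out):
    """Stand-in for fusion & decision logic on the host MCU."""
    return {"alert": snn_out["gesture"] == "wave", **mac_out}

events, frame = [], None                           # placeholder inputs
print(host_fuse(snn_temporal_track(events), mac_cnn_head(frame)))
```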
Bottom line
- The “128 MACs per node” doesn’t signal the end of Akida’s neuromorphic core. It signals a pragmatic hybrid: keep the spiking engine for temporal, sparse, event-driven strengths, and use local MACs to natively run conventional layers (conv/linear/norm) and reuse existing model chunks.
- That combo generally improves accuracy, eases porting, and lowers total latency/power versus forcing everything into SNN or everything into classic CNN on a separate chip.
We are pleased to welcome Dr. Sasskia Brüers Freyssinet, AI Research Scientist at BrainChip, for the next lecture in our Current Topics in Digital Neuroscience series:
From Neuroscience to Applied Artificial Intelligence
In this talk, Dr. Brüers-Freyssinet will explore the path from neuroscience research to applied AI, showing how insights from brain science can inspire more efficient and adaptive artificial systems. Drawing from her own experience transitioning from academia to industry, she will discuss the technical challenges, emerging research directions, and interdisciplinary skills needed to bridge fundamental brain research with real-world AI applications.
About the speaker
Dr. Brüers-Freyssinet is an AI Research Scientist at BrainChip, where she develops efficient computer vision models for neuromorphic hardware using techniques such as pruning, quantization, and regularization. She completed her PhD in Neuroscience at Université Paul Sabatier Toulouse III in 2017, studying the oscillatory correlates of conscious perception. Her current work investigates state-space models for temporal problems like gesture recognition and EEG-based behavior prediction—advancing the frontier where neuroscience meets artificial intelligence.
Tuesday 28th of October 2025
Per21, Université de Fribourg - Universität Freiburg Pérolles Campus, Room E140.
Follow the talk online: https://lnkd.in/eQPKxAUX
Join us for a fascinating discussion on how brain-inspired approaches are shaping the next generation of intelligent, energy-efficient AI systems.
Thank you, but where is the link?
Agreed 100%! It was not even funny the first time!
If you can copy and paste the whole article, you must also be capable of copying and pasting the link. I reckon you are smart enough for that.
Everybody else here posts the corresponding links; only you are inconsiderate enough not to. Why?
pressat.co.uk
Ya know what TTM....I'm starting to think you're the.....
Oh, here we go!!!
Sadly, Richard Resseguie is not the only Richard to have left BrainChip this month (that said, hopefully there won't be a Richard III, other than on some of our bookshelves…):
Richard Chevalier, who had been with our Toulouse-based office since 2018 (!), has joined Nio Robotics (formerly known as Nimble One, not to be confused with Nimble AI) as their Vice President of Platform Engineering:
#niorobotics | Richard Chevalier | 17 comments
"I'm excited to share that I will be starting a new chapter at Nio Robotics this Monday. I feel both thrilled and humbled by the maturity, technical expertise, and excellence of the teams, and I look forward to contributing to this journey." (17 comments on LinkedIn: www.linkedin.com)
Richard Chevalier - Nio Robotics (formerly Nimble One) | LinkedIn
Management skills · Multicultural mindset with experience in multisite… · Experience: Nio Robotics (formerly Nimble One) · Education: École nationale supérieure des Télécommunications · Location: Greater Toulouse Metropolitan Area (www.linkedin.com)
One can only hope that he will spruik BrainChip’s technology to his new employer - as far as I can tell, there is no indication that Nio Robotics have already been exploring neuromorphic computing for their robots.
Nio Robotics is currently building Aru, a shape-shifting mobile robot for industrial environments, but also say in their self-description on LinkedIn that they are “reinventing movement to create the first robotic assistant for homes”.
Something our CTO Tony Lewis, who is also a robotics researcher, will likely find very intriguing.
Robotic maintenance and inspection with interactive capabilities solutions - Nio
Nio created an autonomous, polymorphic robot called Aru for industrial applications, providing advanced solutions to automate routine inspections, enhance operational efficiency, and maintain industrial infrastructure. (www.nio-robotics.com)
Watch Aru in action below: climbing stairs, reshaping itself to avoid obstacles, opening doors, and so on.
According to Athos Silicon Co-Founder and former Mercedes-Benz North America mSoC Chief Architect François Piednoël, Akida does not pass the minimum requirements for Level 3 or Level 4 automated driving.
#ai #artificialintelligence #ainews #aichips | AI & Robotics | 20 comments
Mercedes-Benz AG has launched Athos Silicon, a new chip company focused on developing energy-efficient chips for autonomous vehicles. Athos Silicon, based in Santa Clara, California, is a spin-off from Mercedes-Benz's Silicon Valley research arm and aims to create safer and more power-efficient... (www.linkedin.com)
Good question. I suppose you would need to ask François Piednoël directly whether he means there is a problem inherent to Akida, which would also concern the IP. (The BRN shareholder on LinkedIn was asking "Why isn't Mercedes using Akida technology, for example?", referring to Akida technology in general, not only to physical chips such as AKD1000 or AKD1500.)
Apart from that, since Athos Silicon has so far not signed any IP deal with us, we can't be in the first mSoC silicon anyway, codenamed Polaris, which he was referring to in this recent video:
ipXchange Interview with Athos Silicon Chief mSoC™ Architect François Piednoël
ipXchange spotlights Athos Silicon's mSoC™ as a safety-first unified control fabric that combines integrated redundancy, real-time voting, and energy-efficient execution to advance certified autonomy across vehicles, robotics, and mission-critical systems. (www.athossilicon.com)
Athos Silicon: Multiple System-on-Chip for Safe Autonomy
Athos Silicon's Multiple System-on-Chip (mSoC) delivers functionally safe chiplet-based compute for autonomous driving and robotics. (ipxchange.tech)
Athos Silicon: Multiple System-on-Chip for Safe Autonomy
By Luke Forster | Published 1 October 2025
Building functional safety into compute for autonomy
Athos Silicon, a spin-out from Mercedes-Benz, is addressing one of the most pressing challenges in autonomous systems: how to eliminate the single point of failure in compute architectures. Unlike traditional monolithic processors that can collapse if a single component fails, Athos Silicon’s Multiple System-on-Chip (mSoC) integrates redundancy directly into silicon.
The result is a functionally safe processor platform designed to meet ISO 26262 and other standards required for safety-critical applications.
Why safety-first design is essential
Conventional computing platforms – with a CPU, GPU, and NPU working together – were never built for automotive safety. If a processor crashes or a transient error occurs, the entire system may fail. In a consumer PC this means a reboot; in a self-driving vehicle or industrial robot, it could mean disaster.
Athos Silicon has rethought this architecture from the ground up. By focusing on functional safety as a primary design constraint, its mSoC avoids the patchwork redundancy of external systems and instead bakes resilience into the hardware itself.
The mSoC architecture explained
Athos Silicon’s mSoC integrates multiple chiplets into one package, each containing CPUs, controllers, and memory. Instead of a single supervisor chip that itself could fail, the mSoC operates through a voting mechanism — what Athos calls a “silicon democracy.”
Each chiplet executes tasks in parallel, and their outputs are compared in real time. If one diverges from the others, it is overruled and reset. This ensures continuous operation without interruption and prevents cascading system failures.
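Athos has not published its voting logic, but what is described here is classic triple modular redundancy (TMR). A minimal software sketch of 2-out-of-3 majority voting, with all details assumed, might look like this; in the mSoC the comparison presumably happens in hardware, so this only shows the decision rule:

```python
# Minimal sketch of 2-out-of-3 majority voting (triple modular redundancy).
# An illustration of the general technique, not Athos Silicon's actual design.
from collections import Counter

def vote(outputs):
    """Return the majority output and the indices of divergent chiplets."""
    winner, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("No majority: all three chiplets disagree")
    divergent = [i for i, out in enumerate(outputs) if out != winner]
    return winner, divergent

# Three chiplets compute the same command; one suffers a transient fault.
result, to_reset = vote([0.42, 0.42, 0.97])
print(result)    # 0.42 (majority wins)
print(to_reset)  # [2]  (the divergent chiplet is overruled and reset)
```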
By embedding this redundancy, Athos Silicon enables high-reliability computing suitable for Level 3 and Level 4 autonomy while maintaining predictable performance.
Power efficiency for EVs and robotics
Safety is not the only benefit. In electric vehicles, compute power directly affects range. Athos Silicon highlights that every 100 watts of compute load can reduce EV range by as much as 15 miles. By designing a chiplet system optimised for low-power efficiency, the mSoC reduces unnecessary energy consumption and makes autonomy more practical for battery-powered platforms.
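To put that claim in context, here is a back-of-the-envelope calculation; the trip duration and vehicle efficiency below are illustrative assumptions, and the article's 15-mile figure evidently rests on its own (unstated) ones, so the result mainly shows how sensitive these numbers are to the assumed duty cycle.

```python
# Back-of-the-envelope: range lost to a constant extra compute load.
# trip_hours and efficiency are illustrative assumptions, not article data.
compute_load_w = 100          # extra compute power draw (from the article)
trip_hours = 6.0              # assumed driving time on one charge
efficiency_wh_per_mile = 250  # assumed EV consumption

extra_wh = compute_load_w * trip_hours
range_lost_miles = extra_wh / efficiency_wh_per_mile
print(f"{range_lost_miles:.1f} miles lost")  # 2.4 miles under these assumptions
```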
From Mercedes-Benz R&D to startup scale
The technology behind Athos Silicon was incubated within Mercedes-Benz before the company was spun out to bring the platform to the wider market.
Its first silicon, codenamed Polaris, is designed to deliver Level 3 and Level 4 autonomous capability in a footprint comparable to current Level 2 hardware.
Working with chiplet-packaging partners, Athos Silicon has accelerated validation and plans to deliver silicon to early customers soon. With no competitors currently offering integrated voting redundancy in a chiplet-based compute platform, Athos Silicon is carving out a unique position in the AI ecosystem.
Applications beyond cars
While autonomous driving is the most visible use case, Athos Silicon's architecture also applies to robotics, avionics, and even medical devices where safety and reliability are paramount. Any system requiring certifiable, functionally safe compute stands to benefit.
By combining chiplet redundancy, real-time voting, and safety-first design, Athos Silicon’s Multiple System-on-Chip may prove to be the missing hardware foundation for truly certifiable autonomy.
This is roughly what the Polaris mSoC will look like size-wise (watch from around 10:50 min):
According to François Piednoël, "Project mSoC" as such started in 2020 (still under Mercedes-Benz North America R&D).
Not sure what exact date the interview was recorded, but given that Athos Silicon has been around as a Mercedes-Benz spin-off since April 2025, and in the video it is said that the company is about four months old, it must have been recorded sometime between late July and early September.
So when François Piednoël says “In fact, there is silicon coming back shortly. By the end of the summer we’ll have the chiplets in hands” (from 9:06 min), this means they would have received them by now, if everything went according to plan. (“We think we are in good shape for a startup - getting your silicon after, you know, five six months (…) With no visible competition, by the way.”)
He also says that they invented the IP.
| Name | Last updated |
|---|---|
| .gitignore | 3 hours ago |
| Dataloader.py | 2 hours ago |
| LICENSE | 3 hours ago |
| README.md | 2 hours ago |
| plot.py | 2 hours ago |
| preview.py | 2 hours ago |
| setup_akida_env.sh | 2 hours ago |
| train.py | 2 hours ago |