BRN Discussion Ongoing

Hi Space Cadet,

I’m pretty sure one of our BOD said some time ago that safety certification is handled by the OEM.

From what I’ve read, it works something like this: the OEM (Mercedes in this case) owns the vehicle-level L3/L4 feature and the full ISO 26262 / SOTIF (ISO 21448) / ISO/SAE 21434 safety case. In short, Mercedes is the certifying party.

Mercedes can shift portions of that responsibility to a supplier (i.e. Athos) if the supplier ships the mSoC/ECU. In that case, Athos would be responsible for the ECU-level safety case (redundancy, voting, diagnostics) and for providing SEooC documentation back to Mercedes.

BrainChip’s role is different. We provide the IP plus the safety collateral (safety manual, failure-mode/diagnostics data, etc.) so our tech can be used inside the OEM’s vehicle/system-level certification.

From what I understand, for system certification you need the whole kit and caboodle - sensors + perception + planning + actuation + ISO 26262/SOTIF + redundancy. By that standard, no single accelerator (GPU/NPU/CPU/neuromorphic) “achieves L3/L4” on its own. Akida isn’t meant to be the whole ADAS computer; it’s a low-power, event-driven component that fits inside an L3/L4 ECU.

Athos Silicon’s mSoC is meant to give an OEM the building blocks to construct that ISO 26262/SOTIF case around their autonomous driving stack. Their pitch is that their solution is a safety envelope (redundancy + voting + diagnostics + isolation + timing discipline) that makes it easier for OEMs to build an L3/L4 system. It provides the critical safety architecture into which the rest of the stack can be integrated. That envelope is meant to be vendor-agnostic and therefore I suppose it can host NVIDIA, Qualcomm, Arm NPUs, Intel, BrainChip's Akida, etc.

I assume Piednoel said "Akida does not pass minimum requirements for a Level 3 or Level 4" to promote Athos’s mSoC solution rather than to give free airtime to Akida. The idea being that no chip meets L3/L4 on its own, but it can inside Athos’s safety architecture.

Markus May’s “not on its own” reply reinforces that idea IMO.

These are my opinions only. Please DYOR.
Thanks Bravo, great explanation

SC
 

CHIPS

Regular
Back in May I floated the idea that we might be involved with Anduril’s new helmet. #99,903

This hunch came from Sean’s AGM comment about a military-focused headset doing something that I recall him describing along the lines of "never having been done before", specifically enabling rear-vision for better situational awareness.

Fast-forward to now and check the LinkedIn clip. You can see that there’s a rear-view display band across the top of the HUD (I’ve circled it in red). If I'm not mistaken, this is exactly the kind of capability Sean hinted at.

I also thought the soldier’s point (see below) about battery life is pretty telling because it’s precisely the use case where ultra-low-power neuromorphic would be extra handy.



View attachment 92165
The EagleEye partners publicly named by Anduril so far are Meta, Oakley, Qualcomm and Gentex. But as you can see from the two extracts below, "other commercial organisations are also involved", with "Luckey telling reporters the company plans to announce more partners over the next year". So if we are involved, an announcement could land within that window.

Obviously, it's still speculation about any BrainChip tie-in, but the rear-vision feature aligns uncannily with what Sean told us at the AGM. If nothing else, the use cases with always-on, ultra-low-power, event-driven perception at the edge are a natural fit, so I'd say we are in with a pretty good chance IMO.




View attachment 92166
View attachment 92167
And let's not forget Jonathan Tapson's comments in his Washington post in which he stated, "the US AI industry is becoming increasingly integrated with Defense and associated Departments in the US Government, and companies such as Anduril and Palantir are showing the way. BrainChip will be part of this integration."





View attachment 92170
Also noteworthy is that Palmer Luckey has said Anduril plans to market EagleEye beyond the US military, to firefighters and first responders, and he expects interest from the US Department of Homeland Security, including the US Coast Guard and Border Patrol.




View attachment 92163
Anduril has partnered with ....
Did they partner with Brainchip?

I doubt that we are part of their EagleEye headwear, but I hope that I am wrong.




 

CHIPS

Regular

NextGen Market Trends
Cover image of DarwinWafer neuromorphic chip — a 300 mm wafer integrating 64 Darwin3 chiplets, representing 150 million neurons

The $8.1 Billion Question: Can a 100-Watt Brain Outperform Cloud AI?​

Md Nasiruddin


B2B Content & SEO Strategist | Energy & Healthcare Specialist | Market Research | Brand Positioning & Insight-Driven Growth



17 October 2025

The Neuromorphic Reckoning | DarwinWafer and the Quiet Redesign of Intelligence​

For decades, the semiconductor industry scaled through force: more transistors, tighter geometries, higher throughput. The result was astonishing—but also blunt. We built machines that calculate magnificently and think poorly.
Neuromorphic computing is the counter-narrative: rather than simulating cognition through linear algebra, it seeks to embody it in silicon.
The latest inflection point in this movement arrives from Zhejiang University’s and Zhijiang Laboratory’s DarwinWafer—a wafer-scale neuromorphic system that fuses 64 Darwin3 chiplets on a 300 mm interposer, yielding roughly 150 million neurons and 6.4 billion synapses on a single substrate.
The entire wafer operates at 333 MHz, consuming about 100 watts and achieving a remarkable ≈ 4.9 pJ per synaptic operation—an energy metric so low it inverts conventional assumptions about computational cost.
This isn’t an academic curiosity. It is the first credible demonstration that wafer-scale neuromorphic architectures can balance density, thermal uniformity (34–36 °C), and system stability (supply droop ≈ 10 mV) in a reproducible environment.
For the first time, the physics of the brain is being approximated at the scale of a fab.
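Those headline figures imply a staggering event rate. As a rough sanity check, here is a sketch assuming (generously) that the entire 100 W budget is spent on synaptic operations at 4.9 pJ each:

```python
# Back-of-the-envelope: implied upper bound on DarwinWafer's synaptic-event
# throughput, assuming the whole 100 W budget goes to synaptic operations.
power_w = 100.0             # reported wafer power
energy_per_sop_j = 4.9e-12  # ~4.9 pJ per synaptic operation

max_sops_per_s = power_w / energy_per_sop_j
print(f"Upper bound: {max_sops_per_s:.2e} synaptic ops/s")  # ≈ 2.04e+13
```

That is on the order of twenty trillion events per second, which is why the per-event energy figure matters more than raw clock speed here.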

The Physics of Computation Is Tilting Toward Biology​

Traditional accelerators—GPUs, TPUs, NPUs—are engines of deterministic throughput. Their logic is Euclidean: every cycle, every FLOP, every joule accounted for. Neuromorphic systems reject that orthodoxy. They operate in spikes, in sparse temporal bursts, where computation occurs only when a neuron fires. The result is a computational topology that wastes almost nothing.
The DarwinWafer pushes this model to its limit. By internalizing communication across the interposer, it eliminates the PCB latency and energy overhead that have throttled large-scale brain simulations. The fabric’s asynchronous event routing (AER) and globally asynchronous–locally synchronous (GALS) network enable inter-chiplet coherence without a clock tree’s tyranny. What emerges is not a “chip,” but an ecosystem—a distributed nervous system of silicon.
In practical terms, that means near-zero idle power, microsecond reaction times, and an energy-performance envelope suitable for real-time robotics, adaptive sensing, and embodied AI systems where latency equals survival.
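The economics of sparsity can be sketched in a few lines. The numbers below are illustrative assumptions (except the 4.9 pJ/event figure from the article); the point is that an event-driven design pays per spike that fires, while a dense accelerator pays for every synapse on every timestep:

```python
# Illustrative comparison: dense accelerator vs event-driven design for one
# timestep of a network. All values hypothetical except the 4.9 pJ/event.
n_synapses = 1_000_000
activity = 0.02            # assume only 2% of neurons spike this timestep
e_dense_mac_j = 1.0e-12    # assumed 1 pJ per dense MAC
e_spike_op_j = 4.9e-12     # 4.9 pJ per synaptic event (from the article)

dense_energy = n_synapses * e_dense_mac_j             # computes every synapse
event_energy = n_synapses * activity * e_spike_op_j   # only the active events

print(f"dense : {dense_energy * 1e9:.1f} nJ")  # 1000.0 nJ
print(f"event : {event_energy * 1e9:.1f} nJ")  # 98.0 nJ
```

Even with a higher per-event cost, the event-driven path wins by an order of magnitude once activity is sparse; at higher activity levels the advantage shrinks, which is why these chips target sparse, bursty workloads.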

Comparative Benchmarks: The Shape of the Competitive Landscape​

Measured against its peers, the DarwinWafer’s architecture redefines the efficiency frontier.

  • Intel’s Loihi 2 / Hala Point (2024) integrates 1,152 chips to simulate ~1.15 billion neurons under ~2.6 kW—demonstrating scale, but at an energy cost roughly an order of magnitude higher per synaptic event.
  • IBM’s TrueNorth (2014) remains the canonical reference for production neuromorphics (~1 million neurons, ~22 pJ/SOP), but its digital rigidity limited adaptability.
  • BrainChip’s Akida Gen 2 (2024), operating at ~3 pJ/SOP, optimizes for edge inference and already ships inside consumer IoT devices, where always-on vision and anomaly detection are redefining “smart.”
  • SynSense extends the same philosophy to visual sensing, merging event-driven cameras and spiking inference at milliwatt power budgets.
  • Cerebras WSE-2, though not neuromorphic, provides proof that wafer-scale integration can be commercialized—an industrial precedent for the Darwin lineage.

DarwinWafer’s position is therefore unique: wafer-scale like Cerebras, brain-faithful like Loihi, and energy-lean like Akida. It collapses three evolutionary branches of computing into one platform.
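The per-event energy figures quoted above can be lined up directly. This sketch uses only the numbers given in the text; Loihi 2 / Hala Point is omitted because its per-event cost is stated only as roughly an order of magnitude higher:

```python
# Energy per synaptic operation (pJ/SOP) as quoted in the article.
pj_per_sop = {
    "IBM TrueNorth (2014)": 22.0,
    "DarwinWafer": 4.9,
    "BrainChip Akida Gen 2": 3.0,
}

# Rank from most to least energy-efficient per event.
for chip, pj in sorted(pj_per_sop.items(), key=lambda kv: kv[1]):
    print(f"{chip:24s} {pj:5.1f} pJ/SOP")
```

On these figures DarwinWafer sits between Akida Gen 2 and TrueNorth per event, while offering far higher neuron counts on a single substrate than either.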

Economic and Strategic Consequences​

The MarketGenics 2025 Neuromorphic Chip Market Report values the current market at USD 42.6 million, projecting a 69 % CAGR to reach USD 8.1 billion by 2035. At first glance, that figure appears optimistic; on closer inspection, it reflects a broader structural transition in compute economics.
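The projection is at least internally consistent: compounding USD 42.6 million at 69 % annually over the ten years from 2025 to 2035 lands near USD 8.1 billion. A quick check, assuming a ten-year horizon:

```python
# Sanity check on the MarketGenics projection:
# USD 42.6M compounding at a 69% CAGR over ten years.
base_usd = 42.6e6   # 2025 market size
cagr = 0.69         # 69% compound annual growth rate
years = 10          # 2025 -> 2035

projected = base_usd * (1 + cagr) ** years
print(f"Implied 2035 market: ${projected / 1e9:.2f}B")  # ≈ $8.10B
```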

  1. From centralization to distribution: Neuromorphic devices decentralize intelligence. They replace cloud dependence with edge autonomy. Every watt saved becomes a node of creative surplus.
  2. From compute to cognition: The performance metric shifts from teraFLOPs to picojoules per decision—a fundamental re-pricing of intelligence itself.
  3. From ownership to integration: IP-driven models (BrainChip’s Akida Cloud, Intel’s Lava framework) monetize not the silicon, but the ecosystem that trains and deploys it.

The investment thesis is equally clear. Edge AI—consumer electronics, robotics, industrial sensing—already constitutes ~42 % of demand, with North America leading at 63.5 % of global Neuromorphic Chip Market revenue. Defense programs (Raytheon–AFRL, Sandia–SpiNNaker2) provide immediate revenue floors, while Asian research ecosystems drive architectural breakthroughs at unmatched velocity.

Engineering Realities and Philosophical Stakes​

The challenges remain formidable: wafer-scale yield management, redundancy mapping, cross-die synchronization, and—perhaps most critically—the absence of a mature software abstraction layer. Today’s neuromorphic developers inhabit a world that feels pre-CUDA: fragmented tools, bespoke compilers, and fragile model-to-hardware pipelines. Without a unifying software fabric, even the most elegant hardware risks intellectual isolation.
Yet this friction may be temporary. Each iteration closes the gap between neuroscience and engineering, between simulation and embodiment. What is emerging is not merely faster computation—it is a new ontology of computing, one where devices perceive, learn, and adapt with biological parsimony.

The Broader Horizon​

If DarwinWafer proves manufacturable at scale, the implications extend beyond AI. Imagine wafer-scale cognitive co-processors embedded in data centers, replacing megawatt GPU clusters with kilowatt brain fabrics. Imagine prosthetics, drones, or industrial systems endowed with reflexive intelligence, learning in situ rather than retraining in the cloud.
For the semiconductor economy, this represents a gravitational reorientation: the migration of value from processing more to processing smarter. The next great arms race in computing will not be about speed, but about entropy—who can minimize it best.
The brain mastered that equation millions of years ago. With DarwinWafer, silicon is beginning to catch up. And for those reading balance sheets as closely as research papers, that convergence between neural efficiency and manufacturable scale may prove to be the most consequential development in technology this decade.
In the end, the measure of progress is no longer how much power we can burn to simulate thought, but how little power it takes to think.
 
