Another highly entertaining eejournal.com article featuring BrainChip by Max Maxfield:
October 9, 2025
Bodacious Buzz on the Brain-Boggling Neuromorphic Brain Chip Battlefront
by Max Maxfield
Hold onto your hippocampus because the latest neuromorphic marvels are firing on all synapses. To ensure we're all tap-dancing to the same skirl of the bagpipes, let's remind ourselves that the term "neuromorphic" is a portmanteau that combines the Greek words "neuro" (relating to nerves or the brain) and "morphic" (relating to form or structure).
Thus, "neuromorphic" literally means "in the form of the brain." In turn, "neuromorphic computing" refers to electronic systems inspired by the human brain's functioning. Instead of processing data step-by-step, like traditional computers, neuromorphic chips attempt to mimic how neurons and synapses communicate, utilizing spikes of electrical activity, massive parallelism, and event-driven operation.
The focus of this column is on hardware accelerator intellectual property (IP) functions, specifically neural processing units (NPUs), that designers can incorporate into their System-on-Chip (SoC) devices. Some SoC developers use third-party NPU IPs, while others develop their own IPs in-house.
I was just chatting with Steve Brightfield, who is CMO at BrainChip. As you may recall, BrainChip's claim to fame is its Akida AI acceleration processor IP, which is inspired by the human brain's cognitive capabilities and energy efficiency. Akida delivers low-power, real-time AI processing at the edge, utilizing neuromorphic principles for applications such as vision, audio, and sensor fusion.
The vast majority of NPU IPs accelerate artificial neural networks (ANNs) using large arrays of multiply-accumulate (MAC) units. These dense matrix-vector operations are energy-hungry because every neuron participates in every computation, and the hardware must move a lot of data between memory and the MAC array.
By comparison, Akida employs a neuromorphic architecture based on spiking neural networks (SNNs). Akida's neurons don't constantly compute weighted sums; instead, they exchange spikes (brief digital pulses) only when their internal "membrane potential" crosses a threshold. This makes Akida event-driven; that is, computation occurs only when new information is available to process.
In contrast to the general-purpose MAC arrays found in conventional NPUs, Akida utilizes synaptic kernels that perform weighted event accumulation upon the arrival of spikes. Each synapse maintains a small local weight and adds its contribution to a neuron's membrane potential when it receives a spike. This achieves the same effect as multiply-accumulate, but in a sparse, asynchronous, and energy-efficient manner that's more akin to a biological brain.
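To make that concrete, here's a minimal Python sketch contrasting a dense MAC pass with the kind of threshold-and-fire event accumulation just described. Every number in it (layer sizes, threshold, spike rate) is invented for illustration; this is a toy model of the idea, not BrainChip's design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 256, 64
weights = rng.normal(size=(n_in, n_out)).astype(np.float32)

# Dense MAC approach: every input participates in every output, every step.
dense_input = rng.normal(size=n_in).astype(np.float32)
dense_out = dense_input @ weights            # n_in * n_out multiplies

# Event-driven approach: membrane potentials integrate weighted spikes and
# neurons fire only when a threshold is crossed; idle synapses do no work.
membrane = np.zeros(n_out, dtype=np.float32)
threshold = 4.0                              # illustrative firing threshold
spikes_in = np.flatnonzero(rng.random(n_in) < 0.05)  # ~5% of inputs spike

for pre in spikes_in:                        # one accumulation per active synapse
    membrane += weights[pre]                 # weighted event accumulation
fired = np.flatnonzero(membrane > threshold)
membrane[fired] = 0.0                        # reset the neurons that fired

print(f"dense ops: {n_in * n_out}, event ops: {len(spikes_in) * n_out}")
```

With only 5% of inputs active, the event-driven path performs roughly a twentieth of the accumulations, which is the whole point of the architecture.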
Akida self-contained AI acceleration processor IP (Source: BrainChip)
According to BrainChipโs website, the Akida self-contained AI neural processor IP features the following:
- Scalable fabric of 1 to 128 nodes
- Each neural node supports 128 MACs
- Configurable 50K to 130K embedded local SRAM per node
- DMA for all memory and model operations
- Multi-layer execution without host CPU
- Integrate with any Microcontroller or Application Processor
- Efficient algorithmic mesh
Hang on! I just told you that, "In contrast to the general-purpose MAC arrays found in conventional NPUs, Akida utilizes synaptic kernels..." So, it's a tad embarrassing to see the folks at BrainChip referencing MACs on their website. The thing is that, in the case of the Akida, the term "MAC" is used somewhat loosely, more as an engineering shorthand than as a literal, synchronous multiply-accumulate unit like those found in conventional GPUs and NPUs.
While each Akida neural node contains hardware that can perform multiply-accumulate operations, these operations are event-driven and sparsely activated. When an input spike arrives, only the relevant synapses and neurons in that node perform a small weighted accumulation; there's no continuous clocked matrix multiplication going on in the background.
So, while BrainChip's documentation calls them "MACs," they're actually implemented as neuromorphic synaptic processors that behave like a MAC when a spike fires while remaining idle otherwise. This is how the Akida achieves orders-of-magnitude lower power consumption than conventional NPUs, despite performing similar mathematical operations in principle.
Another way to think about this is that a conventional MAC array crunches numbers continuously, with every neuron participating in every cycle. By comparison, an Akida node's neuromorphic synapses sit dormant, only springing into action when a spike arrives, performing their math locally, and then quieting down again. If I were waxing poetical, I might be tempted to say something pithy at this juncture, like "more firefly than furnace," but I'm not, so I won't.
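If you'd like to see the firefly-versus-furnace contrast in numbers, here's a toy Python loop tallying the work done by an always-clocked array versus a spike-gated one over a mostly quiet input stream. The event probability and sizes are, again, made up for illustration.

```python
import random

random.seed(1)
timesteps, neurons = 1000, 128
event_prob = 0.02                  # events are rare in a mostly static scene

dense_ops = 0
gated_ops = 0
for t in range(timesteps):
    dense_ops += neurons           # a clocked MAC array computes every cycle
    if random.random() < event_prob:
        gated_ops += neurons       # a spike-gated unit wakes, does the same
                                   # math, then goes back to sleep

print(f"always-on ops: {dense_ops}, event-gated ops: {gated_ops}")
```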
But wait, there's more, because the Akida processor IP uses sparsity to focus on the most important data, inherently avoiding unnecessary computation and saving energy at every step. Meanwhile, BrainChip's neural network model, known as Temporal Event-based Neural Networks (TENNs), builds on a state-space model architecture to track events over time, rather than sampling at fixed intervals, thereby skipping periods of no change to conserve energy and memory. Together, these little scamps deliver unmatched efficiency for real-time AI.
Akida neural processor + TENNs models = awesome (Source: BrainChip)
The name of the game here is "sparse." We're talking:
- Sparse data: streaming inputs are converted to events at the hardware level, reducing the volume of data by up to 10x before processing begins.
- Sparse weights: unnecessary weights are pruned and compressed, reducing model size and compute demand by up to 10x.
- Sparse activations: only essential activation functions pass data to the next layers, cutting downstream computation by up to 10x.
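If all three "up to 10x" reductions were achieved at once, the savings would multiply. Here's the back-of-envelope arithmetic with a made-up baseline; treat it as a best-case illustration, not a benchmark.

```python
# Hypothetical baseline op count for a dense pipeline (invented number).
baseline_ops = 1_000_000_000

data_reduction = 10         # sparse data: events instead of a raw stream
weight_reduction = 10       # sparse weights: pruned and compressed
activation_reduction = 10   # sparse activations: only essentials pass

total = data_reduction * weight_reduction * activation_reduction
print(f"{baseline_ops:.1e} ops -> {baseline_ops / total:.1e} ops "
      f"({total}x fewer, best case)")
```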
Since traditional CNNs activate every neural layer at every timestep, they can consume watts of power to process full data streams, even when nothing is changing. By comparison, the fact that the Akida processes only meaningful information enables real-time streaming AI that runs continuously on milliwatts of power, making it possible to deploy always-on intelligence in wearables, sensors, and other battery-powered devices.
Of course, nothing is easy ("If it were easy, everyone would be doing it," as they say). A significant challenge for people who wish to utilize neuromorphic computing is that spiking networks differ from conventional neural networks. This is why the folks at BrainChip provide a CNN-to-SNN converter. This means developers can start with a conventional CNN (which they may already have) and then convert it to an SNN to run on an Akida.
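As a rough sketch of what that flow looks like in practice, the snippet below assumes BrainChip's MetaTF tooling, where a cnn2snn package exposes a convert function for trained Keras models; package and function names may differ by version, and the model file here is hypothetical, so consult the current docs before leaning on this.

```python
# Hedged sketch of the CNN-to-SNN flow, assuming BrainChip's MetaTF tooling.
from tensorflow import keras
from cnn2snn import convert        # BrainChip's converter (assumed API)

# Start from a conventional, already-trained Keras CNN (hypothetical file).
cnn = keras.models.load_model("my_trained_cnn.h5")

# In practice, the model must first be quantized to Akida-compatible
# bit-widths (e.g., 4-bit weights); that step is elided here.
akida_model = convert(cnn)
akida_model.summary()
```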
As usual, there are more layers to this onion than you might first suppose. Consider, for example, BrainChip's collaboration with Prophesee. This is one of those rare cases where two technologies fit together as if they'd been waiting for each other all along.
Prophesee's event-based cameras don't capture conventional frames at fixed intervals; instead, each pixel generates a spike whenever it detects a change in light intensity. In other words, the output is already neuromorphic in nature: a continuous stream of sparse, asynchronous events ("spikes") rather than dense video frames.
That makes it the perfect companion for BrainChip's Akida processor, which is itself a spiking neural network. While the output of traditional cameras must be converted into spiking form to feed an SNN, and while Prophesee must normally "de-spike" its output to feed a conventional convolutional network, Akida and Prophesee can connect directly, spike to spike and neuron to neuron, with no format gymnastics or power-hungry frame buffering in between.
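To show why the pairing is so natural, here's an illustrative sketch of an event-camera record and the near-identity mapping into a spike stream. The (x, y, polarity, timestamp) layout mirrors common event-camera formats; the field and function names are mine, not Prophesee's or BrainChip's.

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int          # pixel column that changed
    y: int          # pixel row that changed
    polarity: int   # +1 brighter, -1 darker
    t_us: int       # timestamp in microseconds

def to_spikes(events):
    """Feed events straight through as spikes: no frames, no buffering."""
    for ev in events:
        yield (ev.y, ev.x, ev.polarity, ev.t_us)   # in effect, an identity map

sample = [Event(12, 34, +1, 1000), Event(13, 34, -1, 1042)]
print(list(to_spikes(sample)))
```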
This native spike-based synergy pays off handsomely in power and latency. As BrainChip's engineers put it, "We're working in kilobits per second instead of megabits per second." Because the Prophesee sensor transmits information only when something changes, and Akida computes only when spikes arrive, the overall system consumes mere milliwatts, compared with the tens of milliwatts required by conventional vision systems.
That difference may not matter in a smartphone, but it's mission-critical for AR/VR glasses, where the battery is a tenth or even a twentieth the size of a phone's. By eliminating the need to convert between frames and spikes, and by avoiding the energy cost of frame storage, buffering, and transmission, BrainChip and Prophesee have effectively built a neuromorphic end-to-end vision pipeline that mirrors how biological eyes and brains actually work: always on, always responsive, yet sipping power rather than guzzling it.
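Here's some back-of-envelope arithmetic behind those kilobits-versus-megabits and battery claims. All the specific figures (VGA resolution, 200 kbit/s event rate, 15 Wh phone battery, 5 mW pipeline) are my assumptions for illustration.

```python
# Bandwidth: a dense frame stream versus a quiet-scene event stream.
frame_based_bps = 640 * 480 * 8 * 30    # VGA, 8-bit pixels, 30 fps: ~74 Mbit/s
event_based_bps = 200_000               # assumed quiet-scene rate: 200 kbit/s
print(f"frames: {frame_based_bps / 1e6:.0f} Mbit/s, "
      f"events: {event_based_bps / 1e3:.0f} kbit/s "
      f"(~{frame_based_bps / event_based_bps:.0f}x less to move and compute)")

# Battery: glasses with a twentieth of a phone's capacity, milliwatt vision.
phone_battery_mwh = 15_000              # ~15 Wh phone battery (rough)
glasses_battery_mwh = phone_battery_mwh / 20
vision_power_mw = 5                     # assumed milliwatt-class pipeline
print(f"glasses budget: {glasses_battery_mwh:.0f} mWh -> "
      f"{glasses_battery_mwh / vision_power_mw:.0f} h of always-on vision "
      f"at {vision_power_mw} mW")
```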
As another example, I recently heard that BrainChip and HaiLa Technologies have partnered to show what happens when brain-inspired computing meets ultra-efficient wireless connectivity. They've created a demonstration that pairs BrainChip's Akida neuromorphic processor with HaiLa's BSC2000 backscatter RFIC, a Wi-Fi-compatible chip that communicates by reflecting existing radio signals rather than generating its own (I plan to devote a future column to this technology). The result is a self-contained edge-AI platform that can perform continuous sensing, anomaly detection, and condition monitoring while sipping mere microwatts of power: small enough to run a connected sensor for its entire lifetime on a single coin-cell battery.
This collaboration highlights a new class of intelligent, battery-free edge devices, where sensing, processing, and communication are all optimized for power efficiency. Akida's event-driven architecture processes only the spikes that matter, while HaiLa's passive backscatter link eliminates most of the radio's energy cost. Together they enable always-on, locally intelligent IoT nodes ideal for medical, environmental, and infrastructure monitoring: places where replacing batteries is expensive, impractical, or downright impossible. In short, BrainChip and HaiLa are sketching the blueprint for the next wave of ultra-low-power edge AI systems that think before they speak, and that do both with astonishing efficiency.
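For a sanity check on the coin-cell claim, here's the rough lifetime arithmetic. The cell capacity is a typical CR2032 figure, and the 20 microwatt average draw is my assumption, not a measured number from the demonstration.

```python
# Rough lifetime arithmetic for a microwatt-class sensor node.
coin_cell_mah = 220                    # typical CR2032 capacity
voltage = 3.0
energy_j = coin_cell_mah / 1000 * 3600 * voltage   # ~2376 joules

avg_power_w = 20e-6                    # assumed 20 uW for sense + compute + radio
lifetime_s = energy_j / avg_power_w
print(f"{energy_j:.0f} J at {avg_power_w * 1e6:.0f} uW -> "
      f"{lifetime_s / 3600 / 24 / 365:.1f} years")
```

At those assumed numbers, the cell lasts nearly four years, which is comfortably in "lifetime of the device" territory for many deployments.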
Sad to relate, none of the above was what I wanted to talk to you about (stop groaning; it was worth reading). What I originally set out to tell you about is the newly introduced Akida Cloud (imagine a roll of drums and a fanfare of trombones).
The existing Akida 1, which has been extremely well received by the market, supports 4-, 2-, and 1-bit weights and activations. The next-generation Akida 2, which is expected to be available to developers in the very near future, will support 8-, 4-, and 1-bit weights and activations. Also, the Akida 2 will support spatio-temporal and temporal event-based neural networks.
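To put those bit-widths in perspective, here's a generic uniform-quantization sketch showing how reconstruction error grows as weight precision shrinks. This is plain textbook quantization, not BrainChip's actual scheme, and the sizes are arbitrary.

```python
import numpy as np

def quantize(w, bits):
    """Uniformly quantize weights to signed n-bit levels and back."""
    levels = max(2 ** (bits - 1) - 1, 1)   # 1-bit degenerates to sign-like
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
for bits in (8, 4, 2, 1):
    err = np.abs(w - quantize(w, bits)).mean()
    print(f"{bits}-bit weights: mean abs error {err:.4f}")
```

The trade-off is the usual one: lower precision means smaller models and cheaper math, at the cost of accuracy that the training and quantization flow must claw back.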
For years, BrainChip's biggest hurdle in courting developers wasn't its neuromorphic silicon; it was logistics. Demonstrating the Akida architecture meant physically shipping bulky FPGA-based boxes to customers, powering them up on-site, and juggling loan periods. With the launch of the Akida Cloud, that bottleneck disappears.
Engineers can now log in, spin up an instance of the existing Akida 1 (running on actual Akida 1 silicon) or the forthcoming Akida 2 (running on an FPGA), and run their own neural workloads directly in the browser. Models can be trained, loaded, executed, and benchmarked in real time: no shipping crates, NDAs, or lab setups required.
Akida Cloud represents more than a convenience upgrade; it's a strategic move to democratize access to neuromorphic technology. By making their latest architectures available online, the chaps and chapesses at BrainChip are lowering the barrier to entry for researchers, startups, and OEMs who want to experiment with event-based AI but lack specialized hardware.
Users can compare the behavior of Akida 1 and Akida 2 side by side, prototype models, and gather performance data before committing to silicon. For BrainChip, the cloud platform also serves as a rapid feedback loop, turning every connected engineer into an early tester and accelerating SNN adoption across the edge AI ecosystem.
And there you have it: brains in the cloud, spikes on the wire, and AI that thinks before it blinks. If this is what the neuromorphic future looks like, I say "bring it on" (just as soon as my poor old hippocampus cools down). But it's not all about me (it should be, but it's not). So, what do you think about all of this?