BRN Discussion Ongoing

DK6161

Regular
[sarcastic laugh GIF]

7für7

Top 20
It always dumps faster than it climbs… takes two days to gain 2 cents, and just four hours to drop 3. Same pattern over and over again. But I’m really looking forward to the day when the shorters get completely wiped out …. no excuses, just boom.

[Tired Drama GIF]
 

itsol4605

Regular
7für7 said:
It always dumps faster than it climbs… takes two days to gain 2 cents, and just four hours to drop 3. Same pattern over and over again. But I’m really looking forward to the day when the shorters get completely wiped out …. no excuses, just boom.

[Tired Drama GIF]

Brand new insight from our stock market guru
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
I just stumbled across an OpenAI job posting on LinkedIn for an edge computing role!

The role is for “Senior Research Engineer — Edge, Future of Computing.” It calls for “experience shipping AI products with models in compute-constrained environments” and “working with compute-constrained hardware for inference”.

Since the job title mentions “edge”, I’m taking it to mean on-device/edge. And if “compute-constrained” means tight power, memory, and thermal budgets with real-time requirements, then Akida fits that category.

Either way, it's great to see OpenAI thinking beyond the cloud and leaning into edge AI.





[Screenshot of the OpenAI job posting]
 

Frangipani

Top 20
Another highly entertaining eejournal.com article featuring BrainChip by Max Maxfield:



October 9, 2025

Bodacious Buzz on the Brain-Boggling Neuromorphic Brain Chip Battlefront

by Max Maxfield

Hold onto your hippocampus because the latest neuromorphic marvels are firing on all synapses. To ensure we’re all tap-dancing to the same skirl of the bagpipes, let’s remind ourselves that the term “neuromorphic” is a portmanteau that combines the Greek words “neuro” (relating to nerves or the brain) and “morphic” (relating to form or structure).

Thus, “neuromorphic” literally means “in the form of the brain.” In turn, “neuromorphic computing” refers to electronic systems inspired by the human brain’s functioning. Instead of processing data step-by-step, like traditional computers, neuromorphic chips attempt to mimic how neurons and synapses communicate—utilizing spikes of electrical activity, massive parallelism, and event-driven operation.

The focus of this column is on hardware accelerator intellectual property (IP) functions—specifically neural processing units (NPUs)—that designers can incorporate into their System-on-Chip (SoC) devices. Some SoC developers use third-party NPU IPs, while others develop their own IPs in-house.

I was just chatting with Steve Brightfield, who is CMO at BrainChip. As you may recall, BrainChip’s claim to fame is its Akida AI acceleration processor IP, which is inspired by the human brain’s cognitive capabilities and energy efficiency. Akida delivers low-power, real-time AI processing at the edge, utilizing neuromorphic principles for applications such as vision, audio, and sensor fusion.

The vast majority of NPU IPs accelerate artificial neural networks (ANNs) using large arrays of multiply–accumulate (MAC) units. These dense matrix–vector operations are energy-hungry because every neuron participates in every computation, and the hardware must move a lot of data between memory and the MAC array.

By comparison, Akida employs a neuromorphic architecture based on spiking neural networks (SNNs). Akida’s neurons don’t constantly compute weighted sums; instead, they exchange spikes (brief digital pulses) only when their internal “membrane potential” crosses a threshold. This makes Akida event-driven; that is, computation only occurs when new information is available to process.

In contrast to the general-purpose MAC arrays found in conventional NPUs, Akida utilizes synaptic kernels that perform weighted event accumulation upon the arrival of spikes. Each synapse maintains a small local weight and adds its contribution to a neuron’s membrane potential when it receives a spike. This achieves the same effect as multiply-accumulate, but in a sparse, asynchronous, and energy-efficient manner that’s more akin to a biological brain.
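
To make that concrete, here’s a minimal sketch of event-driven synaptic accumulation in Python. It’s a toy illustration of the general technique, not BrainChip’s implementation: the weights, threshold, and reset rule are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 64, 8
weights = rng.normal(0.0, 0.5, size=(n_in, n_out))  # per-synapse weights
membrane = np.zeros(n_out)                           # neuron membrane potentials
threshold = 1.0

def on_spike(pre_idx):
    """Process one incoming spike: only the synapses belonging to the
    firing presynaptic neuron do any work, instead of a full matrix
    multiply across all 64 x 8 weights every timestep."""
    global membrane
    membrane += weights[pre_idx]       # sparse weighted accumulation
    fired = membrane >= threshold      # neurons crossing threshold spike
    membrane[fired] = 0.0              # reset the neurons that fired
    return np.flatnonzero(fired)       # indices of outgoing spike events

# Three input events arrive, so exactly three small updates run;
# with no events, no computation happens at all.
for spike in (5, 17, 5):
    out = on_spike(spike)
    if out.size:
        print("output spikes from neurons:", out.tolist())
```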


Akida self-contained AI acceleration processor IP (Source: BrainChip)

According to BrainChip’s website, the Akida self-contained AI neural processor IP features the following:
  • Scalable fabric of 1 to 128 nodes
  • Each neural node supports 128 MACs
  • Configurable 50K to 130K embedded local SRAM per node
  • DMA for all memory and model operations
  • Multi-layer execution without host CPU
  • Integrate with any Microcontroller or Application Processor
  • Efficient algorithmic mesh
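
(For scale, taking that list at face value: a maxed-out fabric of 128 nodes × 128 MACs per node gives 16,384 of those MAC-style synaptic units, and, assuming the 50K–130K figure is bytes, 128 nodes × 130K tops out around 16.6 MB of distributed local SRAM. That’s my own back-of-envelope arithmetic, not a BrainChip spec sheet.)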

Hang on! I just told you that, “In contrast to the general-purpose MAC arrays found in conventional NPUs, Akida utilizes synaptic kernels…” So, it’s a tad embarrassing to see the folks at BrainChip referencing MACs on their website. The thing is that, in the case of the Akida, the term “MAC” is used somewhat loosely—more as an engineering shorthand than as a literal, synchronous multiply–accumulate unit like those found in conventional GPUs and NPUs.

While each Akida neural node contains hardware that can perform multiply-accumulate operations, these operations are event-driven and sparsely activated. When an input spike arrives, only the relevant synapses and neurons in that node perform a small weighted accumulation—there’s no continuous clocked matrix multiplication going on in the background.

So, while BrainChip’s documentation calls them “MACs,” they’re actually implemented as neuromorphic synaptic processors that behave like a MAC when a spike fires while remaining idle otherwise. This is how the Akida achieves orders-of-magnitude lower power consumption than conventional NPUs, despite performing similar mathematical operations in principle.

Another way to think about this is that a conventional MAC array crunches numbers continuously, with every neuron participating in every cycle. By comparison, an Akida node’s neuromorphic synapses sit dormant, only springing into action when a spike arrives, performing their math locally, and then quieting down again. If I were waxing poetical, I might be tempted to say something pithy at this juncture, like “more firefly than furnace,” but I’m not, so I won’t.

But wait, there’s more, because the Akida processor IP uses sparsity to focus on the most important data, inherently avoiding unnecessary computation and saving energy at every step. Meanwhile, BrainChip’s neural network model, known as Temporal Event-based Neural Networks (TENNs), builds on a state-space model architecture to track events over time, rather than sampling at fixed intervals, thereby skipping periods of no change to conserve energy and memory. Together, these little scamps deliver unmatched efficiency for real-time AI.


Akida neural processor + TENNs models = awesome (Source: BrainChip)

The name of the game here is “sparse.” We’re talking sparse data (streaming inputs are converted to events at the hardware level, reducing the volume of data by up to 10x before processing begins), sparse weights (unnecessary weights are pruned and compressed, reducing model size and compute demand by up to 10x), and sparse activations (only essential activation functions pass data to the next layers, cutting downstream computation by up to 10x).
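
Taken at face value, those three “up to 10x” factors compound multiplicatively: 10 × 10 × 10 works out to as much as a 1,000x reduction in the best case. Real workloads won’t hit all three ceilings at once, but even a fraction of that compounding is what makes the milliwatt-class figures below plausible.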

Since traditional CNNs activate every neural layer at every timestep, they can consume watts of power to process full data streams, even when nothing is changing. By comparison, the fact that the Akida processes only meaningful information enables real-time streaming AI that runs continuously on milliwatts of power, making it possible to deploy always-on intelligence in wearables, sensors, and other battery-powered devices.

Of course, nothing is easy (“If it were easy, everyone would be doing it,” as they say). A significant challenge for people who wish to utilize neuromorphic computing is that spiking networks differ from conventional neural networks. This is why the folks at BrainChip provide a CNN-to-SNN converter. This means developers can start out with a conventional CNN (which they may already have) and then convert it to an SNN to run on an Akida.
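
As a rough sketch of what that flow looks like in practice, assuming BrainChip’s MetaTF tooling and its cnn2snn converter package (exact module names, arguments, and the required quantization step vary between releases, so treat this as illustrative rather than definitive):

```python
from tensorflow import keras
from cnn2snn import convert  # BrainChip's converter (assumed API)

# Start from an existing, trained Keras CNN (hypothetical file name).
cnn = keras.models.load_model("my_trained_cnn.h5")

# In practice the network is first quantized down to the low-bit
# weights and activations Akida expects (MetaTF ships tooling for
# this), and then converted into an Akida-compatible spiking model:
akida_model = convert(cnn)
akida_model.summary()
```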

As usual, there are more layers to this onion than you might first suppose. Consider, for example, BrainChip’s collaboration with Prophesee. This is one of those rare cases where two technologies fit together as if they’d been waiting for each other all along.

Prophesee’s event-based cameras don’t capture conventional frames at fixed intervals; instead, each pixel generates a spike whenever it detects a change in light intensity. In other words, the output is already neuromorphic in nature—a continuous stream of sparse, asynchronous events (“spikes”) rather than dense video frames.
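
To make “each pixel generates a spike whenever it detects a change” concrete, here’s a toy delta-modulation model in Python. It’s a caricature of how an event camera behaves, not Prophesee’s actual pixel circuit; the threshold and event format are invented for the example.

```python
import numpy as np

def frames_to_events(frames, threshold=0.1):
    """Emit (t, y, x, polarity) events whenever a pixel's intensity
    moves more than `threshold` away from its last-event level.
    A static scene produces no output at all."""
    ref = frames[0].astype(float)            # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        diff = frame.astype(float) - ref
        changed = np.abs(diff) > threshold
        for y, x in zip(*np.nonzero(changed)):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
        ref[changed] = frame[changed]        # re-anchor pixels that fired
    return events

# Two identical dark frames, then one pixel brightens: the dense
# representation is 3 full frames, the event stream is a single event.
f = np.zeros((3, 4, 4))
f[2, 1, 1] = 1.0
print(frames_to_events(f))   # -> [(2, 1, 1, 1)]
```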

That makes it the perfect companion for BrainChip’s Akida processor, which is itself built around spiking neural networks. While a traditional camera’s frames must be converted into spiking form to feed an SNN, and Prophesee’s output must normally be “de-spiked” to feed a conventional convolutional network, Akida and Prophesee can connect directly—spike to spike, neuron to neuron—with no format gymnastics or power-hungry frame buffering in between.

This native spike-based synergy pays off handsomely in power and latency. As BrainChip’s engineers put it, “We’re working in kilobits per second instead of megabits per second.” Because the Prophesee sensor transmits information only when something changes, and Akida computes only when spikes arrive, the overall system consumes mere milliwatts—compared to the tens of milliwatts required by conventional vision systems.
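
For a feel of the scale (my own back-of-envelope, not a figure from BrainChip or Prophesee): a modest 640×480, 8-bit sensor at 30 fps streams about 74 Mbit/s of raw frames, whereas a mostly static scene generating, say, a thousand events per second at a few tens of bits per event comes to mere tens of kilobits per second.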

That difference may not matter in a smartphone, but it’s mission-critical for AR/VR glasses, where the battery is a tenth or even a twentieth the size of a phone’s. By eliminating the need to convert between frames and spikes—and avoiding the energy cost of frame storage, buffering, and transmission—BrainChip and Prophesee have effectively built a neuromorphic end-to-end vision pipeline that mirrors how biological eyes and brains actually work: always on, always responsive, yet sipping power rather than guzzling it.

As another example, I recently heard that BrainChip and HaiLa Technologies have partnered to show what happens when brain-inspired computing meets ultra-efficient wireless connectivity. They’ve created a demonstration that pairs BrainChip’s Akida neuromorphic processor with HaiLa’s BSC2000 backscatter RFIC, a Wi-Fi-compatible chip that communicates by reflecting existing radio signals rather than generating its own (I plan to devote a future column to this technology). The result is a self-contained edge-AI platform that can perform continuous sensing, anomaly detection, and condition monitoring while sipping mere microwatts of power—small enough to run a connected sensor for its entire lifetime on a single coin-cell battery.

This collaboration highlights a new class of intelligent, battery-free edge devices, where sensing, processing, and communication are all optimized for power efficiency. Akida’s event-driven architecture processes only the spikes that matter, while HaiLa’s passive backscatter link eliminates most of the radio’s energy cost. Together they enable always-on, locally intelligent IoT nodes ideal for medical, environmental, and infrastructure monitoring—places where replacing batteries is expensive, impractical, or downright impossible. In short, BrainChip and HaiLa are sketching the blueprint for the next wave of ultra-low-power edge AI systems that think before they speak, and that do both with astonishing efficiency.

Sad to relate, none of the above was what I wanted to talk to you about (stop groaning—it was worth reading). What I originally set out to tell you about is the newly introduced Akida Cloud (imagine a roll of drums and a fanfare of trombones).

The existing Akida 1, which has been extremely well received by the market, supports 4-, 2-, and 1-bit weights and activations. The next-generation Akida 2, which is expected to be available to developers in the very near future, will support 8-, 4-, and 1-bit weights and activations. Also, the Akida 2 will support spatio-temporal and temporal event-based neural networks.

For years, BrainChip’s biggest hurdle in courting developers wasn’t its neuromorphic silicon—it was logistics. Demonstrating the Akida architecture meant physically shipping bulky FPGA-based boxes to customers, powering them up on-site, and juggling loan periods. With the launch of the Akida Cloud, that bottleneck disappears.

Engineers can now log in, spin up an instance of the existing Akida 1 (backed by actual Akida 1 silicon) or the forthcoming Akida 2 (running on an FPGA), and run their own neural workloads directly in the browser. Models can be trained, loaded, executed, and benchmarked in real time—no shipping crates, NDAs, or lab setups required.

Akida Cloud represents more than a convenience upgrade; it’s a strategic move to democratize access to neuromorphic technology. By making their latest architectures available online, the chaps and chapesses at BrainChip are lowering the barrier to entry for researchers, startups, and OEMs who want to experiment with event-based AI but lack specialized hardware.

Users can compare the behavior of Akida 1 and Akida 2 side by side, prototype models, and gather performance data before committing to silicon. For BrainChip, the cloud platform also serves as a rapid feedback loop—turning every connected engineer into an early tester and accelerating SNN adoption across the edge AI ecosystem.

And there you have it—brains in the cloud, spikes on the wire, and AI that thinks before it blinks. If this is what the neuromorphic future looks like, I say “bring it on” (just as soon as my poor old hippocampus cools down). But it’s not all about me (it should be, but it’s not). So, what do you think about all of this?




TECH

Regular



As much as I like your research, Frangi, please show the same courtesy you would expect from others on this forum. I asked you a question a number of weeks ago, but you chose not to respond. For someone who likes to comment on other posters, for good or bad, can you give me and others your view on how you personally think BrainChip is tracking towards a successful outcome? Going by the amount of time you spend showcasing facts, you must have formed an opinion one way or another.

Be assured, Australians only sue if you are caught out lying... I'd appreciate an honest response this time!

Best regards....Texta 🍷🍷
 