BRN Discussion Ongoing

White Horse

Regular
Hi Bravo.

That poster Flectional, who has been showing up on the crapper's BRN threads lately, sure seems to think Loihi 3 is real.
He comes across as a bit of a tech head; I've seen him post on WBT and 4DS over the past few years.
He posted on 8/2/26 quoting something called "SapienFusion - Feb 4 - 2026" as a source.

See excerpt of that post below...

SapienFusion - Feb 4 - 2026

Intel Loihi 3

Intel’s neuromorphic journey began in 2017 with Loihi 1, continued through Loihi 2’s 2021 release, and culminates in Loihi 3’s January 2026 commercial availability, a processor that represents the most significant architectural departure from conventional computing since GPUs themselves emerged. This isn’t an incremental improvement. This is brain-inspired computing that finally delivers on decades-old promises.
Graded Spikes: Bridging Two Worlds
'Loihi 3’s critical innovation introduces 32-bit graded spikes: a bridge between traditional deep neural networks operating on continuous values and spiking neural networks communicating through discrete events.

Earlier neuromorphic generations used binary on/off signaling. A neuron either fired or didn’t. This forced algorithms designed for conventional architectures to undergo complete rewriting. Converting a PyTorch model to a binary spiking neural network required redesigning activation functions, adjusting learning algorithms, tuning temporal dynamics, and accepting accuracy degradation. The result created high barriers to adoption, and most developers stayed with GPUs.

Graded spikes solve this problem by encoding information into spike amplitudes across a 32-bit range. Each spike carries nuanced information: not just fire or don’t fire, but fire with this specific intensity. This enables mainstream AI workloads to run on neuromorphic hardware with dramatically reduced power while requiring minimal algorithmic adaptation. Developers can convert existing models with automated tools currently in development, maintain accuracy within 1-2% of original performance, and achieve neuromorphic efficiency without complete redesign. This technical bridge makes commercial viability possible.'
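To make the binary-vs-graded distinction concrete, here's a quick toy sketch of my own in NumPy (nothing to do with Intel's actual hardware or tooling). A binary neuron throws away how strongly it was driven; a graded spike keeps that intensity, quantized over a wide integer range:

```python
import numpy as np

def binary_spikes(acts, threshold=0.5):
    # Classic on/off signaling: fire (1) or stay silent (0).
    # All information about HOW strongly a neuron was driven is lost.
    return (acts > threshold).astype(np.uint8)

def graded_spikes(acts, threshold=0.5, bits=32):
    # Toy graded signaling: a neuron crossing threshold emits a spike
    # whose amplitude encodes its activation, quantized to a `bits`-wide range.
    # Sub-threshold neurons stay silent, so event sparsity is preserved.
    levels = 2**bits - 1
    amps = np.round(np.clip(acts, 0.0, 1.0) * levels).astype(np.uint64)
    return np.where(acts > threshold, amps, 0)

acts = np.array([0.10, 0.60, 0.95, 0.40, 0.72])
print(binary_spikes(acts))  # [0 1 1 0 1] -> intensity discarded
print(graded_spikes(acts))  # silent neurons stay 0; firing ones carry intensity
```

Same sparsity in both cases, but the graded version hands a conventional network's continuous activations through mostly intact, which is presumably why conversion needs so little algorithmic surgery.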

Event-Driven Computation at Scale

'The power efficiency advantage comes from temporal sparsity: the principle that most neurons remain inactive most of the time, processing only when relevant events occur.

A GPU processing a video stream at 30 frames per second processes all pixels with full computation for every frame, regardless of whether the scene changes. Frame 2 might be 95% identical to Frame 1, but the GPU performs full computation anyway. Frame 3 might be 97% identical to Frame 2, but again receives full computation. The result is massive redundant processing at constant power.'

'Loihi 3 processing the same video stream activates neurons to establish a baseline during the initial scene, then fires only 5% of neurons to detect the changes in Frame 2 when 95% remains unchanged. Frame 3 triggers only 3% of neurons when 97% stays static. Power consumption becomes proportional to actual information content rather than frame rate.

For event-driven sensory data from neuromorphic cameras and event-based audio, Loihi 3 achieves a theoretical 1,000× efficiency advantage versus GPUs. This isn’t marketing hyperbole; it’s architectural mathematics. Temporal sparsity with 99% of neurons inactive delivers a 100× reduction. Spatial sparsity through local processing without global synchronization provides a 10× reduction. Combined, these factors multiply to 1,000× efficiency. Real-world performance varies by workload, but event-based applications routinely achieve 500-1,000× improvements.'
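Here's a toy sketch of that idea too, again my own illustration and not any Intel API: do work only on the pixels that changed since the last frame, so compute tracks information content instead of frame rate.

```python
import numpy as np

def event_driven_step(prev, new, threshold=0.02):
    # Only pixels that changed meaningfully trigger work, so compute
    # scales with scene change instead of frame rate.
    changed = np.abs(new - prev) > threshold
    out = prev.copy()
    out[changed] = new[changed]   # stand-in for real per-event processing
    return out, changed.mean()    # fraction of "neurons" that fired

rng = np.random.default_rng(0)
frame1 = rng.random((480, 640))
frame2 = frame1.copy()
frame2[:24, :] = rng.random((24, 640))   # ~5% of the scene changes

_, active = event_driven_step(frame1, frame2)
print(f"fraction processed: {active:.1%}")   # ~5%, vs 100% on a frame-based GPU

# The article's 1,000x figure is just its two claimed factors multiplied:
# 99% temporally inactive -> 100x, times ~10x from local (spatial) processing.
print(100 * 10)   # -> 1000
```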




good to know where the competition is at - dyor
could always be wrong of course - all freely available in the public domain
Hi Hoppa,
This is a link to the article to which you refer.
https://sapienfusion.com/2026/02/04...omputing-just-ended-nvidias-edge-ai-monopoly/

I have just done a bit of cross-pollination.
I pointed her towards Kevin D. Johnson's project with Akida.
 
  • Like
  • Fire
Reactions: 10 users

Dijon101

Regular
Brain inspired machines are better at math than expected | ScienceDaily https://share.google/wG1D8I8d7q4gxeZgi

"Brain inspired machines are better at math than expected
Brain-inspired computers just proved they can tackle supercomputer-level math — using a fraction of the energy.
Date:
February 14, 2026
Source:
DOE/Sandia National Laboratories
Summary:
Neuromorphic computers modeled after the human brain can now solve the complex equations behind physics simulations — something once thought possible only with energy-hungry supercomputers. The breakthrough could lead to powerful, low-energy supercomputers while revealing new secrets about how our brains process information."

Not specifically about BrainChip, but it's further evidence that we are invested in the right space.
 
  • Like
  • Fire
Reactions: 7 users
curlednoodles

To add to this, I found this paper “Robust iterative value conversion: Deep reinforcement learning for neurochip-driven edge robots” (2024).

One author is from MegaChips (Shinya Nishimura, MegaChips Corporation) and others are NAIST-affiliated.
They explicitly use Akida 1000, but call it a neurochip.

MegaChips × NAIST Paper Summary:

It’s a direct MegaChips ↔ NAIST link in print. The author list includes NAIST researchers and a MegaChips Corporation co-author, so this isn’t just corporate website wording; it’s a real joint technical output.
It’s explicitly about SNN robot control on edge hardware. The whole paper is focused on training and running spiking neural network (SNN) policies for battery-limited edge robots using deep reinforcement learning (DRL).
The “neurochip” is named as Akida 1000. In their evaluation, they state the SNN is run on a “neurochip (Akida 1000)”, which is the key hardware detail tying this NAIST/MegaChips robotics work to BrainChip/Akida.
They report exactly the kind of edge-robot benefit MegaChips would care about. The paper reports ~15× lower power and ~5× faster calculation versus an ARM Cortex-A72 edge CPU baseline, i.e., practical real-time control advantages under tight power constraints.

Conclusion: this paper is strong evidence that MegaChips + NAIST have already implemented SNN-based robot control on Akida 1000, with measured power/speed benefits — which makes MegaChips’ broader NAIST/SNN robotics messaging far more credible.
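For anyone wondering what "value conversion" is getting at, here is a toy sketch of the general idea as I read it (my own NumPy illustration, not the paper's actual algorithm and not BrainChip's tooling): a policy trained in float has its weights mapped onto the low-precision grid a neurochip-style accelerator expects, and the question is how well the policy's action choices survive that round-trip.

```python
import numpy as np

def quantize(x, bits=4):
    # Map float values in [-1, 1] onto a signed low-precision grid,
    # a toy model of what a neurochip-style accelerator might store.
    levels = 2**(bits - 1) - 1
    return np.round(np.clip(x, -1.0, 1.0) * levels) / levels

def policy(obs, w1, w2):
    # Tiny 2-layer policy head: returns action preferences (logits).
    h = np.maximum(0.0, obs @ w1)   # ReLU hidden layer
    return h @ w2

rng = np.random.default_rng(1)
w1, w2 = rng.normal(0, 0.5, (8, 16)), rng.normal(0, 0.5, (16, 4))
obs = rng.normal(0, 1, (1000, 8))

logits_f = policy(obs, w1, w2)                      # float "training" policy
logits_q = policy(obs, quantize(w1), quantize(w2))  # after weight conversion

agreement = (logits_f.argmax(1) == logits_q.argmax(1)).mean()
print(f"action agreement after 4-bit conversion: {agreement:.1%}")
```

As I understand the title, the paper's contribution is doing this conversion iteratively during DRL training so the policy learns to be robust to it; the snippet above only shows the round-trip itself.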

@manny100 @Fact Finder
 
  • Like
  • Fire
  • Thinking
Reactions: 12 users

FuzM

Regular
curlednoodles said:
To add to this, I found this paper “Robust iterative value conversion: Deep reinforcement learning for neurochip-driven edge robots” (2024)...
Video

Paper
 
  • Like
  • Fire
  • Love
Reactions: 9 users
