BRN Discussion Ongoing

TECH

Regular
Just a thought, but is it possible that the support for 128 MACs per node is not specifically there to replace the original NPU design, but to integrate with sensors that are not neuromorphic in nature (unlike, say, Prophesee's event-based vision cameras)? This is just speculation on my part, but maybe it is meant to support converting CNNs to SNNs in hardware, without the need to capture and convert the models offline using the SDK tools. It would also have the direct benefit of allowing customers to use existing models without doing their own conversions.
Love your input; glad you're still on the batting team... we have a small group of shareholders who are intelligent enough to comment on the technical side.

Stay tuned, I'm planning on getting a solid answer from a certain someone.

Best regards, Tech 🙃 (down under)
 
  • Like
  • Love
Reactions: 4 users

Diogenese

Top 20
Hi JD,

For what it's worth, I asked ChatGPT for a neutral take on this.

What I thought was interesting was that it wove Anduril's headset into the response. I hadn't prompted it to, but earlier in the day, in a separate chat, I had asked ChatGPT about board and integration possibilities, and whether an Akida 1500 + Arm Cortex-M85 module (as an example) could power something like Anduril's helmet.




ChatGPT's Response


What “supports 128 MACs per node” most likely means

  • Each Akida node has a local 4×4 MAC micro-engine (128 MACs) it can invoke for certain ops.
  • Those MACs co-exist with the spiking compute (CNP/FNP or equivalent). They’re there so models can include non-spiking layers/ops without leaving the chip.
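If the figure is taken at face value, a node with 128 MAC lanes would chew through long dot products in 128-wide chunks. Here is a toy Python model of that tiling; the lane count is the only number taken from the post above, and everything else (the function name, the INT4 value ranges) is purely illustrative:

```python
import numpy as np

MACS_PER_NODE = 128  # one multiply-accumulate lane per MAC (illustrative)

def dot_product_in_mac_tiles(weights, activations, lanes=MACS_PER_NODE):
    """Toy model of a node's MAC array: a long dot product is consumed
    in chunks of `lanes` multiply-accumulates per pass."""
    acc, passes = 0, 0
    for start in range(0, len(weights), lanes):
        acc += np.dot(weights[start:start + lanes],
                      activations[start:start + lanes])
        passes += 1
    return acc, passes

w = np.random.randint(-8, 8, size=512)   # INT4-range weights (assumption)
x = np.random.randint(0, 16, size=512)   # INT4-range activations (assumption)
result, passes = dot_product_in_mac_tiles(w, x)
print(result, passes)  # 512 MACs of work / 128 lanes -> 4 passes
```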

Why add MACs if you have an SNN?

Because many modern edge models are hybrid. MACs cover things SNNs don’t excel at (or that are simpler to drop in as-is):
  • Conv/pointwise (1×1) / depthwise conv blocks
  • Linear/projection layers (e.g., classifier heads, QKV in tiny attention)
  • Normalisation / pooling / residual glue
  • Pre/post-processing (e.g., short FIRs, feature projections)
  • Direct reuse of existing INT4/INT8 CNN chunks without full SNN conversion
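For a feel of which ops in that list are "MAC territory", here is a schematic NumPy sketch of a hybrid block: a 1×1 pointwise conv and a linear head (pure multiply-accumulate work) wrapped around a crude thresholding stand-in for the spiking stage. All function names are hypothetical and none of this is BrainChip's actual toolchain; it only illustrates the division of labour:

```python
import numpy as np

def pointwise_conv(x, w):
    # A 1x1 conv is a per-pixel matmul over channels: classic MAC work.
    return np.einsum('hwc,cd->hwd', x, w)

def spike(x, threshold):
    # Crude stand-in for the event-driven spiking stage (illustration only).
    return (x > threshold).astype(np.float32)

def linear_head(x, w):
    # Globally pooled classifier head: also plain MACs.
    return x.mean(axis=(0, 1)) @ w

x = np.random.rand(32, 32, 8)                    # feature map (H, W, C)
y = pointwise_conv(x, np.random.rand(8, 16))     # MAC layer
y = spike(y, threshold=2.0)                      # sparse, event-like output
logits = linear_head(y, np.random.rand(16, 4))   # MAC layer
print(logits.shape)                              # (4,)
```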

What a hybrid pipeline might look like in a helmet

  1. Sensors
    • Front/rear frame cameras → light MAC preprocessing (resize / 1×1 conv).
    • Event camera/radar/audio → feed SNN/TENNs directly.
  2. Perception
    • SNN handles temporal/gesture/tracking tasks (event streams).
    • MAC handles small CNN blocks or projection heads.
  3. Fusion & decisions on the host MCU (e.g., Cortex-M85).
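A minimal sketch of the routing logic that pipeline implies; every stream and engine name here is hypothetical, not any real BrainChip or Anduril API:

```python
# Which engine each sensor stream would hit in the pipeline above.
MAC_STREAMS = {"front_camera", "rear_camera"}      # dense frames
SNN_STREAMS = {"event_camera", "radar", "audio"}   # sparse/temporal data

def route(stream_name):
    if stream_name in MAC_STREAMS:
        return "mac_preprocess"    # resize / 1x1 conv before perception
    if stream_name in SNN_STREAMS:
        return "snn_tenns"         # straight into the spiking engine
    return "host_mcu"              # fusion & decisions (e.g. Cortex-M85)

for s in ("front_camera", "event_camera", "audio", "imu"):
    print(s, "->", route(s))
```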

Bottom line

  • The “128 MACs per node” doesn’t signal the end of Akida’s neuromorphic core. It signals a pragmatic hybrid: keep the spiking engine for temporal, sparse, event-driven strengths, and use local MACs to natively run conventional layers (conv/linear/norm) and reuse existing model chunks.
  • That combo generally improves accuracy, eases porting, and lowers total latency/power versus forcing everything into SNN or everything into classic CNN on a separate chip.
Hi Bravo,

It's a pity Chatty didn't provide references and block diagrams for its hybrid SNN/MAC combo.

One thing that disinclines me from the hybrid is that Jonathan/Tony said something to the effect that "We're still honouring the spirit of Peter's invention."

What follows is my inference, and there is probably not much in writing to support it ... and there is a (vanishingly small) possibility that I may be wrong.

The thing is that, once you move away from 1-bit, you're moving away from the original digital spike concept and you need additional circuitry to handle the multiple bits in parallel. As I've said, I'd like to believe that the old NPU was the optimal solution for single bits, which was the original Akida concept. It really is a remarkable invention.

When 4-bit was announced, I asked Peter if that would mean including MACs, and he said "No."

It was only after TENNs was announced that the references to MACs started. The applications for TENNs and the associated models have expanded rapidly. I think it was Tony who said that initially they couldn't implement recurrence (RNN) with TENNs, which would have affected ML, so it would have made sense to keep the original NPU. Once they mastered recurrence with TENNs, the case for the original NPU got weaker, and it is further weakened by the fact that a hybrid would increase the wafer real-estate footprint per node.

The way I see it, the requirement for multi-bit meant that the old NPU would need to be repeated in silicon to match the number of bits, with the outputs "blended" in MACs. A 4×4 MAC has about 12 rows and 8 columns of arithmetic cells (multiply or add) to accommodate the sub-products, the 8-bit product, and the additions:

[Diagram: arithmetic cell array of a 4×4 MAC]


An 8×8 MAC would need about 4 times that number of cells.
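A quick back-of-the-envelope check of that scaling, using textbook array-multiplier counts (n² partial-product gates plus roughly n(n−1) adder cells). This is a coarser accounting than the 12×8 layout in the diagram, and certainly not BrainChip's actual layout, but it shows the same roughly quadratic growth:

```python
# Approximate cell count for an n-bit x n-bit array multiplier.
def approx_cells(n):
    partial_products = n * n      # one AND gate per weight/activation bit pair
    adders = n * (n - 1)          # carry-save reduction cells, approximate
    return partial_products + adders

for n in (4, 8):
    print(f"{n}x{n} multiplier: ~{approx_cells(n)} cells")
# 4x4: ~28 cells; 8x8: ~120 cells -> roughly 4x, as stated above
```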


This is the BRN patent application which introduced recurrence with TENNs:

US2025209313A1 METHOD AND SYSTEM FOR IMPLEMENTING ENCODER PROJECTION IN NEURAL NETWORKS 20231222

[0054] In some embodiments, the neural network may be configured to perform an encoder projection operation either in a buffer mode or in a recurrent mode. In some embodiments, the buffer mode operation is preferred during training and the recurrent mode operation is preferred during inference for generating processed content based on the input data stream or signal. The preferred operations in the buffer mode and the recurrent mode may be ascertained from the detailed operations of the buffer mode and the recurrent mode as described below with reference to FIGS. 4 to 14.

The priority is December 2023, so there has been a lot of model development since then.
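The buffer/recurrent duality described in [0054] can be demonstrated with a plain causal FIR filter, which can be computed either over a stored window (buffer mode) or with a tiny rolling state (recurrent mode), giving identical outputs. This is only an analogy for the general idea, not the patented encoder projection itself:

```python
import numpy as np

k = np.array([0.5, 0.3, 0.2])   # kernel over the last 3 time steps
x = np.random.rand(10)          # input stream

# Buffer mode: convolve over the whole buffered sequence at once
# (training-friendly, but needs the full window in memory).
buffered = np.convolve(x, k)[:len(x)]

# Recurrent mode: keep only a small state, update per sample
# (inference-friendly, constant memory).
state = np.zeros(len(k))
recurrent = []
for sample in x:
    state = np.roll(state, 1)
    state[0] = sample                    # newest sample first
    recurrent.append(np.dot(k, state))

print(np.allclose(buffered, recurrent))  # True: same maths, different memory cost
```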

Now you've made me look at this patent, my head hurts!
 
  • Like
  • Love
  • Fire
Reactions: 9 users

Diogenese

Top 20
Are Onsor using the AKIDA 1000 or 1500 chip? Is there any confirmation?
Onsor - according to their website:
2022 - " Identification of potential applications for the technology and the discovery of "epilepsy prediction" as a primary application, marking the official start of the project."
BrainChip announced the AKD1500 reference design was taped out on January 29, 2023.
During 2022 when the Epilepsy prediction was discovered i presume they would have had to use the AKIDA 1000 chips as the 1500 was not yet available.
"Phase 3 - Preliminary results from the research emerge."
" Phase 3.5 - Preparation of documentation for patent application."
They may have upgraded to the 1500 but this would mean they had to change horses during the process.
.
Here is a TCS patent by the usual suspects for EEG-based cognitive load analysis:

US2025139442A1 SPIKING NEURAL NETWORK (SNN) BASED LOW POWER COGNITIVE LOAD ANALYSIS USING ELECTROENCEPHALOGRAM (EEG) SIGNAL 20231026
State of art techniques, need a decoder following the encoder to encode EEG signals, whose morphology is undefined. Embodiments herein disclose a method and system for a Spiking Neural Network (SNN) based low power cognitive load analysis using electroencephalogram (EEG) signal. The method receives a raw EEG signal from multichannel EEG set up, wherein each of the raw EEG signal is re-referenced and encoded into a spike train using a Light-Weight-Lossless-Decoder less-Peak-based (LWDLP) encoding. Further, the spike trains are processed by the SNN architecture using backpropagation based supervised approach, wherein the spatial information and the temporal information are learnt by the SNN in form of neuronal activity and synaptic weights. Post learning the SNN architecture applies an activation function on the neuronal activity for classifying a cognitive load level experienced by a subject from among a plurality of predefined cognitive load levels using a SNN classifier.

[0069] USE CASES: 1. Online gaming 2. Online tuition/education: A user who is engaged on her mobile device/tablet for a gaming session or a tutor session is requested to wear a EEG wearable device such as Muse™, Emotiv™, neuroSky™, Zeo etc. In both the cases, the system 100 disclosed herein can be deployed on the Neuromorphic platform such as Intel Loihi™, Brainchip™, Akida™ that can be deployed on the wearable EEG devices or any wearable device such as hand gear wearable worn by a user (player/student)
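As a rough illustration of what a peak-based, decoder-less encoding might look like: the sketch below simply emits a spike wherever a channel has a local peak. The patent's LWDLP scheme is more involved, and the function name is hypothetical; this only captures the spirit of spiking on signal peaks:

```python
import numpy as np

def peak_spike_train(signal):
    """Emit a spike (1) at every local maximum of a 1-D signal."""
    s = np.asarray(signal)
    peaks = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])  # strictly above neighbours
    spikes = np.zeros_like(s, dtype=np.int8)
    spikes[1:-1][peaks] = 1
    return spikes

t = np.linspace(0, 1, 200)
eeg_like = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(200)  # toy 10 Hz trace
spikes = peak_spike_train(eeg_like)
print(spikes.sum(), "spikes from", len(spikes), "samples")  # sparse output
```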
 
  • Like
  • Fire
  • Love
Reactions: 7 users

Diogenese

Top 20
Love your input; glad you're still on the batting team... we have a small group of shareholders who are intelligent enough to comment on the technical side.

Stay tuned, I'm planning on getting a solid answer from a certain someone.

Best regards, Tech 🙃 (down under)
Thanks Tech,

It will be good to clear up the confusion I've spread, one way or the other.
 
  • Like
Reactions: 1 user

TECH

Regular
Thanks Tech,

It will be good to clear up the confusion I've spread, one way or the other.
No way Dio,

I'm so glad that you reach out to Peter; it takes all the guesswork out. He'll answer you in an honest, direct, polite way, and you'll be able to understand it on a technical level a lot better than I would.

Like all forum members, I love your input, mate... thank you... kind regards... Chris 🙂
 
  • Like
Reactions: 3 users

Diogenese

Top 20
No way Dio,

I'm so glad that you reach out to Peter; it takes all the guesswork out. He'll answer you in an honest, direct, polite way, and you'll be able to understand it on a technical level a lot better than I would.

Like all forum members, I love your input, mate... thank you... kind regards... Chris 🙂
Hi Tech,

I've only exchanged a couple of messages with Peter, and that was some years ago. I don't have any contact details since he stepped away, so you would have better access.
 
  • Like
Reactions: 1 user

CHIPS

Regular
  • Thinking
Reactions: 1 user

CHIPS

Regular



Unleash Real-Time LiDAR Intelligence with Akida On-Chip AI

By BrainChip
October 20, 2025

What Is a LiDAR Point Cloud and Why Is It the Foundation of Spatial AI

LiDAR (Light Detection and Ranging) technology is a key enabler of advanced Spatial AI: the ability of a machine to understand and interact with the physical world in three dimensions. A LiDAR sensor pulses laser beams to build a highly accurate, three-dimensional map of its surroundings; this 3D map is known as a LiDAR point cloud.

A point cloud is a massive collection of data points, where each point represents a specific coordinate (X, Y, Z) in the environment. It amounts to a rich, detailed digital twin of the surrounding space, packed with geometric information about objects, infrastructure, and terrain.
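In code terms, a point cloud is nothing more exotic than an N×3 array of coordinates, which is also why simple geometric queries on it are cheap. A toy NumPy example, not BrainChip code:

```python
import numpy as np

# A point cloud is an (N, 3) array of X/Y/Z returns; real LiDAR frames
# often carry hundreds of thousands of points per sweep.
points = np.random.rand(100_000, 3) * 50.0   # toy cloud in a 50 m cube

distances = np.linalg.norm(points, axis=1)   # range to each return
nearby = points[distances < 10.0]            # crude proximity filter
print(points.shape, nearby.shape)
```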


The Critical Importance of 3D Spatial Perception

For next-generation applications like autonomous vehicles, advanced robotics, and intelligent infrastructure, the point cloud is the gold standard for spatial perception because it provides:
  1. Unmatched Precision: Highly accurate distance and volume measurements, essential for safe navigation and manipulation.
  2. Depth and Geometry: True 3D context that is not susceptible to the lighting and occlusion issues of standard 2D imaging.
  3. Instant Interpretation: Enables devices to interpret complex environments on the fly for object classification, obstacle detection, and path planning.

The Problem: Cloud-Dependent LiDAR Creates Dangerous Delays

While the data is invaluable, the sheer volume of a point cloud creates a critical processing challenge. To analyze this data, many systems rely on centralized or cloud computing.
The issue? The round trip to the cloud introduces latency.
In time-sensitive scenarios, such as an autonomous vehicle needing to identify a sudden obstacle or a robotic arm requiring immediate process control, this delay is unacceptable. Reliance on off-device processing prevents systems from turning massive datasets into instant, real-time decisions, posing a safety and operational risk. To achieve truly instant action, the heavy lifting of point cloud analysis must happen directly on the device: a requirement known as the Edge AI Imperative.

The Solution: Unleashing Real-Time 3D Intelligence with BrainChip’s Akida™

BrainChip addresses this critical latency challenge with the Akida™ PointNet++ model, an advanced on-chip point cloud AI solution adapted from the original PointNet++ architecture.

The Akida PointNet++ model is a compact, neuromorphic-friendly neural network uniquely optimized to perform real-time classification of 3D LiDAR point clouds directly at the edge. Running this sophisticated model on a hyper-efficient neuromorphic processor delivers three key benefits:
  • Real-Time Responsiveness: Selective data handling delivers instant decision-making for streaming applications where milliseconds are crucial.
  • Energy Efficiency: The system operates in the milliwatt range, making it ideal for battery-powered, always-on, and field deployments.
  • Ultra-Compact Design: The processing runs efficiently, even on memory-limited edge devices without compromising performance.

How Akida Point Cloud Delivers Speed and Efficiency

What makes the Akida approach uniquely suited for sparse, unordered LiDAR data is its architecture, which maximizes efficiency and accuracy:
  1. Native 3D Processing: Unlike traditional methods that often convert the 3D point cloud into grids or images, the Akida PointNet++ model works natively on the raw point sets. This preserves data integrity while maximizing efficiency.
  2. Sparsity-Driven Efficiency: Akida’s architecture processes only the most meaningful LiDAR data points. This focus eliminates computational waste associated with processing empty space or redundant data, enhancing both speed and model accuracy simultaneously.
  3. Hierarchical Learning: The model utilizes a Hierarchical PointNet++ Backbone to capture both fine-grained local details and the overall global context of the 3D shape, boosting accuracy on sparse, large-scale data.
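The published details of the Akida PointNet++ model are thin, but the generic PointNet trick behind the three points above (a shared per-point MLP followed by an order-invariant max pool, which is what lets a network consume raw, unordered point sets natively) fits in a few lines. A minimal sketch with random weights and a hypothetical 10-class head, not BrainChip's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 64)), rng.normal(size=(64, 128))
W_cls = rng.normal(size=(128, 10))      # 10 hypothetical classes

def classify(points):
    """points: (N, 3) array, any N, any point order."""
    h = np.maximum(points @ W1, 0)      # shared MLP, applied per point
    h = np.maximum(h @ W2, 0)
    g = h.max(axis=0)                   # symmetric pool -> order-invariant feature
    return (g @ W_cls).argmax()

cloud = rng.normal(size=(2048, 3))
print(classify(cloud), classify(cloud[::-1]))  # same class in either order
```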

Point Cloud Workflow

Learn more about the model >>
Transforming Industries with Real-Time LiDAR Intelligence
The ability to process 3D spatial data instantly at the source is vital for next-generation technology across multiple sectors:

Industry | Application of Real-Time LiDAR Intelligence
Autonomous Vehicles & Drones | Precision navigation, real-time obstacle detection, and environmental mapping from raw 3D scans.
Industrial Automation | Real-time asset location, safety monitoring, and precise process control in large facilities.
Smart Cities & Infrastructure | Scalable urban planning, traffic management, and infrastructure inspection using direct 3D analysis.
Security & Surveillance | Accurate 3D scene understanding for perimeter security and immediate anomaly detection.
Robotics & Warehousing | Advanced pick-and-place, navigation, and inventory control with sophisticated spatial awareness.

Ready to integrate intelligent LiDAR processing into your next product design? BrainChip offers a comprehensive development ecosystem, including the Akida Cloud Platform and essential development packages, to help you convert and optimize your models for Akida deployment and bring your vision to reality.

>> Discover BrainChip’s LiDAR Point Cloud Solution

 
  • Like
Reactions: 1 user

CHIPS

Regular




 
  • Like
Reactions: 2 users