BRN Discussion Ongoing

Esq.111

Fascinatingly Intuitive.
  • Haha
  • Like
Reactions: 10 users

Andy38

The hope of potential generational wealth is real
  • Like
  • Love
Reactions: 12 users

TECH

Top 20
Oversubscription, well I was quietly surprised, maybe others were as well.
0.17 was attainable, 0.165 if you happened to be heading the pack in front of 5 million other hopefuls, though not many went through at that price.

So how are we positioned moving into 2026? Very, very nicely in my opinion: a clear roadmap, chip developments, happy partners, progress within the company. The landscape is so much more advanced than a few years back. Do we now have a real launch pad? I believe that we do. What do you lot think?

Is 2026 our year, or are we still dreaming? What's neuromorphic? What's Edge AI? What's an Event-Based Processor? What does SNN mean? What's, what's, what's... we have advanced, and so have the companies who matter.

Stay focused, Akida will deliver in my opinion
 
  • Like
  • Fire
  • Love
Reactions: 26 users

Frangipani

Top 20

Gregor Lenz, until recently CTO of our partner Neurobus (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-456183) and co-author, alongside Douglas McLelland, of Low-power Ship Detection in Satellite Images Using Neuromorphic Hardware (https://arxiv.org/pdf/2406.11319), has joined the London-based startup Paddington Robotics (https://paddington-robotics.com/ - the website doesn’t yet have any information other than “Paddington Robotics - Embodied AI in Action”):

View attachment 81384



View attachment 81386



Some further info I was able to find about the London-based startup founded late last year, whose co-founder and CEO is Zehan Wang:

View attachment 81419


https://www.northdata.de/Paddington%20Robotics%20Ltd,%20London/Companies%20House%2016015385

View attachment 81420 View attachment 81421 View attachment 81422 View attachment 81423

While Paddington Robotics aka P9R7 continue to be rather secretive about what they do on their minimalistic website (https://paddington-robotics.com), the London-based startup somewhat opened up about their work (initially cobots in supermarkets /grocery stores) on LinkedIn in recent weeks, where they also just introduced a few of their employees (according to the company profile, their team currently still consists of 10 people max).

From a BRN shareholder’s perspective, their most interesting team member is of course Gregor Lenz (see my tagged post above), who joined Paddington Robotics in April. With his PhD in Neuromorphic Computing, his SynSense background and hands-on experience with both Loihi and Akida, plus his deep tech-startup experience as Co-Founder and former CTO of Neurobus, he is the perfect guy to promote neuromorphic computing within his company in his role as P9R7’s perception stack lead.

“We are building robots which help people do more, not replace them.”
“We’re building for real world impact, not tech for technology’s sake.”
Zehan Wang, Founder and CEO of Paddington Robotics (P9R7)



 
  • Like
  • Love
  • Fire
Reactions: 13 users
Maybe one step closer though.

Just up on GitHub.

Suggest readers absorb the whole post to understand the intent of this repository.

Especially terms such as federated learning, scalable, V2X, MQTT, prototype level and distributed.

From what I can find, if correct, the author is as below. This doesn't necessarily mean VW is involved, but I suspect they would be aware of this work in some division or department.



Fernando Sevilla Martínez (SevillaFe)

SevillaFe/SNN_Akida_RPI5 (Public)

Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware

This work presents a practical and energy-aware framework for deploying Spiking Neural Networks on low-cost hardware for edge computing. We detail a reproducible pipeline that integrates neuromorphic processing with secure remote access and distributed intelligence. Using Raspberry Pi and the BrainChip Akida PCIe accelerator, we demonstrate a lightweight deployment process including model training, quantization, and conversion. Our experiments validate the eco-efficiency and networking potential of neuromorphic AI systems, providing key insights for sustainable distributed intelligence. This letter offers a blueprint for scalable and secure neuromorphic deployments across edge networks.

1. Hardware and Software Setup​

The proposed deployment platform integrates two key hardware components: the RPI5 and the Akida board. Together, they enable a power-efficient, cost-effective neuromorphic system (N-S) suitable for real-world edge AI applications.

2. Enabling Secure Remote Access and Distributed Neuromorphic Edge Networks​

The deployment of low-power neuromorphic hardware (N-H) in networked environments requires reliable, secure, and lightweight communication frameworks. Our system enables full remote operability of the RPI5 and Akida board via SSH, complemented by protocol layers (Message Queuing Telemetry Transport (MQTT), WebSockets, Vehicle-to-Everything (V2X)) that support real-time, event-driven intelligence across edge networks.

3. Training and Running Spiking Neural Networks​

The training pipeline begins with building an ANN using TensorFlow 2.x, which is later mapped to a spike-compatible format for neuromorphic inference. Because the Akida board runs models using low-bitwidth integer arithmetic (4–8 bits), it is critical to align the training phase with these constraints to avoid significant post-training performance degradation.
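The 4–8-bit constraint mentioned above can be made concrete with a small NumPy sketch. This mimics generic symmetric uniform quantization of a weight tensor; it is not the actual cnn2snn/QuantizeML conversion pipeline, just an illustration of the precision loss training has to be aligned with:

```python
import numpy as np

def quantize_weights(w, bits=4):
    """Symmetric uniform quantization of a weight tensor to `bits` bits,
    mimicking the low-bitwidth integer constraint described above."""
    levels = 2 ** (bits - 1) - 1             # e.g. 7 for signed 4-bit
    scale = np.max(np.abs(w)) / levels       # one scale per tensor
    q = np.round(w / scale).astype(np.int8)  # integer codes in [-7, 7]
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(64, 32)).astype(np.float32)
q, scale = quantize_weights(w, bits=4)
w_hat = dequantize(q, scale)
err = np.abs(w - w_hat).max()  # worst-case rounding error <= scale / 2
print(q.min(), q.max())
```

The worst-case rounding error is bounded by half the quantization step, which is why quantization-aware training (keeping weights near representable values) matters at 4 bits.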

4. Use case validation: Networked neuromorphic AI for distributed intelligence​

4.1 Use Case: If multiple RPI5 nodes or remote clients need to receive the classification results in real-time, MQTT can be used to broadcast inference outputs​

MQTT-Based Akida Inference Broadcasting​

This project demonstrates how to perform real-time classification broadcasting using BrainChip Akida on Raspberry Pi 5 with MQTT.

Project Structure​

mqtt-akida-inference/
├── config/ # MQTT broker and topic configuration
├── scripts/ # MQTT publisher/subscriber scripts
├── sample_data/ # Sample input data for inference
├── requirements.txt # Required Python packages


Usage​

  1. Install Mosquitto on RPI5
sudo apt update
sudo apt install mosquitto mosquitto-clients -y
sudo systemctl enable mosquitto
sudo systemctl start mosquitto

  2. Run Publisher (on RPI5)
python3 scripts/mqtt_publisher.py

  3. Run Subscriber (on remote device)
python3 scripts/mqtt_subscriber.py

  4. Optional: Monitor from CLI
mosquitto_sub -h <BROKER_IP> -t "akida/inference" -v

Akida Compatibility

outputs = model_akida.predict(sample_image)


Real-Time Edge AI: This use case supports event-based edge AI and real-time feedback in smart environments, such as surveillance, mobility, and robotics.

Configuration: Set your broker IP and topic in config/config.py.
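The repository doesn't show the contents of config/config.py or the message format, so the following is a hypothetical sketch of what the publisher side could serialize (the names BROKER_IP, TOPIC, and build_payload are assumptions, not from the repo):

```python
import json
import time

# Hypothetical stand-ins for config/config.py (names assumed, not from the repo)
BROKER_IP = "192.168.1.10"
TOPIC = "akida/inference"

def build_payload(label, confidence, node_id="rpi5-01"):
    """Serialize one classification result as the JSON message a publisher
    script could send to the broker on TOPIC."""
    return json.dumps({
        "node": node_id,
        "label": label,
        "confidence": round(float(confidence), 3),
        "timestamp": time.time(),
    })

msg = build_payload("pedestrian", 0.972)
print(msg)
```

A paho-mqtt publisher would then hand `msg` to `client.publish(TOPIC, msg)`; keeping the payload as small, flat JSON suits the low-bandwidth, many-subscriber pattern the README describes.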

4.2 Use Case: If the Akida accelerator is deployed in an autonomous driving system, V2X communication allows other vehicles or infrastructure to receive AI alerts based on neuromorphic-based vision​

This use case simulates a lightweight V2X (Vehicle-to-Everything) communication system using Python. It demonstrates how neuromorphic AI event results, such as pedestrian detection, can be broadcast over a network and received by nearby infrastructure or vehicles.

Folder Structure​

V2X/
├── config.py # V2X settings
├── v2x_transmitter.py # Simulated Akida alert broadcaster
├── v2x_receiver.py # Listens for incoming V2X alerts
└── README.md


Use Case​

If the Akida accelerator is deployed in an autonomous driving system, this setup allows:

  • Broadcasting high-confidence AI alerts (e.g., "pedestrian detected")
  • Receiving alerts on nearby systems for real-time awareness

Usage​

1. Start the V2X Receiver (on vehicle or infrastructure node)​

python3 receiver/v2x_receiver.py

2. Run the Alert Transmitter (on an RPI5 + Akida node)​

python3 transmitter/v2x_transmitter.py

Notes​

  • Ensure that devices are on the same LAN or wireless network
  • UDP broadcast mode is used for simplicity
  • This is a prototype for real-time event-based message sharing between intelligent nodes
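The UDP send/receive pattern behind the transmitter and receiver scripts can be shown in a single self-contained sketch. This is a loopback stand-in (127.0.0.1 instead of a LAN broadcast address) so it runs on one machine; the message fields are illustrative, not taken from the repo:

```python
import json
import socket

# Loopback stand-in for the UDP broadcast prototype described above: the same
# send/receive pattern, but on 127.0.0.1 so it runs on a single machine.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
port = rx.getsockname()[1]
rx.settimeout(2.0)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
alert = json.dumps({"event": "pedestrian detected", "confidence": 0.97})
tx.sendto(alert.encode(), ("127.0.0.1", port))

data, _ = rx.recvfrom(4096)        # a real receiver would loop on this
received = json.loads(data.decode())
print(received["event"])
tx.close()
rx.close()
```

For a real LAN broadcast the transmitter would set `SO_BROADCAST` and send to the subnet broadcast address, which is presumably what the repo's v2x_transmitter.py does.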

4.3 Use Case: If multiple RPI5-Akida nodes are deployed for federated learning, updates to neuromorphic models must be synchronized between devices​

Federated Learning Setup with Akida on Raspberry Pi 5​

This repository demonstrates a lightweight Federated Learning (FL) setup using neuromorphic AI models deployed on BrainChip Akida PCIe accelerators paired with Raspberry Pi 5 devices. It provides scripts for a centralized Flask server to receive model weight updates and a client script to upload Akida model weights via HTTP.

Overview​

Neuromorphic models trained on individual RPI5-Akida nodes can contribute updates to a shared model hosted on a central server. This setup simulates a federated learning architecture for edge AI applications that require privacy, low latency, and energy efficiency.
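The snippet above doesn't show how the central server combines the uploaded updates; a minimal sketch assuming plain federated averaging (FedAvg) over per-layer weight arrays, with simulated clients, might look like:

```python
import numpy as np

def federated_average(client_weight_sets):
    """Average per-layer weights across clients (plain FedAvg).
    Each element is a list of per-layer arrays with matching shapes."""
    n = len(client_weight_sets)
    return [sum(layers) / n for layers in zip(*client_weight_sets)]

# Three simulated RPI5-Akida nodes, each holding a two-layer model
rng = np.random.default_rng(1)
clients = [
    [rng.normal(size=(4, 3)), rng.normal(size=(3,))]
    for _ in range(3)
]
global_weights = federated_average(clients)
print([w.shape for w in global_weights])
```

The averaged weights would then be redistributed to the nodes; weighting clients by local dataset size is a common refinement the sketch omits.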

Repository Structure​

federated_learning/
├── federated_learning_server.py # Flask server to receive model weights
├── federated_learning_client.py # Client script to upload Akida model weights
├── model_utils.py # (Optional) Placeholder for weight handling utilities
├── model_training.py # (Optional) Placeholder for training-related code
└── README.md


Requirements​

  • Python 3.7+
  • Flask
  • NumPy
  • Requests
  • Akida Python SDK (required on client device)
Install the dependencies using:

pip install flask numpy requests

Getting Started​

1. Launch the Federated Learning Server​

On a device intended to act as the central server:

python3 federated_learning_server.py

The server will listen for HTTP POST requests on port 5000 and respond to updates sent to the /upload endpoint.
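A minimal sketch of such an upload endpoint, using only the Python standard library in place of Flask so it runs with no dependencies (the /upload path and response string follow the README; everything else, including the payload shape, is an assumption):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Dependency-free stand-in for the Flask server described above: accepts
# POSTed JSON weights on /upload. (The actual repo uses Flask on port 5000.)
class UploadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/upload":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        weights = json.loads(self.rfile.read(length))  # parse the update
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Model weights uploaded successfully.")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), UploadHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: upload a (tiny, fake) weight set as JSON
payload = json.dumps({"layer0": [[0.1, 0.2], [0.3, 0.4]]}).encode()
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/upload", data=payload,
    headers={"Content-Type": "application/json"})
reply = urllib.request.urlopen(req).read().decode()
print(reply)
server.shutdown()
```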

2. Configure and Run the Client​

On each RPI5-Akida node:

  • Ensure the Akida model has been trained.
  • Replace the SERVER_IP variable inside federated_learning_client.py with the IP address of the server.
  • Run the script:
python3 federated_learning_client.py

This will extract the weights from the Akida model and transmit them to the server in JSON format.

Example Response​

After a successful POST:

Model weights uploaded successfully.


If an error occurs (e.g., connection refused or malformed weights), you will see an appropriate status message.

Security Considerations​

This is a prototype-level setup for research. For real-world deployment:

  • Use HTTPS instead of HTTP.
  • Authenticate clients using tokens or API keys.
  • Validate the format and shape of model weights before acceptance.
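The last recommendation can be sketched as a shape check against a reference model before accepting an upload; the layer names and shapes below are invented for illustration, not taken from the repo:

```python
import numpy as np

# Sketch of the "validate the format and shape" recommendation: reject uploads
# whose layer names or shapes don't match the reference model.
REFERENCE_SHAPES = {"conv1": (3, 3, 1, 8), "dense1": (128, 10)}  # assumed model

def validate_update(update, reference=REFERENCE_SHAPES):
    """Return True only if the update has exactly the expected layers,
    each with the expected shape and finite values."""
    if set(update) != set(reference):
        return False
    for name, values in update.items():
        arr = np.asarray(values, dtype=np.float32)
        if arr.shape != reference[name] or not np.all(np.isfinite(arr)):
            return False
    return True

good = {n: np.zeros(s).tolist() for n, s in REFERENCE_SHAPES.items()}
bad = {"conv1": [[1.0, 2.0]]}  # wrong layer set and shape
print(validate_update(good), validate_update(bad))
```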

Acknowledgements​

This implementation is part of a broader effort to demonstrate low-cost, energy-efficient neuromorphic AI for distributed and networked edge environments, particularly leveraging the BrainChip Akida PCIe board and Raspberry Pi 5 hardware.
It appears Fernando Sevilla Martínez (as per my previous post; he has links to VW) updated a GitHub repository again a few hours ago for their current paper.

Snipped the overview and key findings etc below but link worth a read through.

Some real positives in here imo.


SevillaFe/EcoEdgeAI-akida-mac (Public)

A comprehensive workflow for comparing energy efficiency between conventional hardware (Mac M-series GPU/CPU) and neuromorphic hardware (Akida on Raspberry Pi 5) for autonomous driving steering angle prediction.


EcoEdgeAI: Neuromorphic Computing for Sustainable Autonomous Driving​

License: MIT | Python 3.10+ | DOI

Comparative Eco-Efficiency Evaluation of CNN Deployment on Conventional vs. Neuromorphic Hardware for Autonomous Driving

This repository contains the complete experimental workflow, trained models, and analysis scripts for our paper:

"Sustainable Neuromorphic Edge Intelligence for Autonomous Driving: A Comparative Eco-Efficiency Evaluation"
Authors: F. Sevilla Martínez, Jordi Casas-Roma, Laia Subirats, Raúl Parada
Conference/Journal: In Review. Year: 2025

Overview​

This work provides the first comprehensive hardware-measured evaluation of energy efficiency, accuracy trade-offs, and eco-efficiency scaling for CNN-based steering angle prediction deployed on:

  • Conventional Hardware: MacBook Pro M1 (Apple Silicon)
  • Neuromorphic Hardware: Raspberry Pi 5 + BrainChip Akida v1.0 NPU

Research Questions​

  1. Energy Efficiency: How much energy do NPUs save compared to conventional hardware?
  2. Accuracy Trade-offs: What is the accuracy cost of 4-bit quantization on neuromorphic hardware?
  3. Eco-Efficiency Scaling: Which CNN architectures achieve optimal energy-accuracy balance?

Key Contributions​

✅ Hardware-Measured Energy: Direct measurement via TC66 USB power meter (neuromorphic) and CodeCarbon (conventional)
✅ Three CNN Architectures: PilotNet (1.54M params), LaksNet (768K params), MiniNet (245K params)
✅ Complete Quantization Pipeline: Float32 → PTQ 4-bit → QAT refinement → Akida deployment
✅ Reproducible Workflow: End-to-end scripts from training to statistical analysis
✅ Energy-Error Rate (EER) Metric: Unified eco-efficiency evaluation framework


🔬 Key Findings​

Energy Efficiency (RQ1)​

  • 7.15× to 13.17× energy reduction (615-1,217% savings) on neuromorphic hardware
  • Lighter architectures achieve superior gains: MiniNet 13.17×, PilotNet 7.15×
  • Energy advantage driven 60-70% by throughput improvement (3.4×-7.3× faster)
  • 0.73 W sustained power for MiniNet vs. 3.86 W on Mac M1

Accuracy Trade-offs (RQ2)​

  • Inverse-U quantization pattern: Medium architectures suffer most (LaksNet +113% MSE)
  • Lighter architectures show best tolerance: MiniNet +51% MSE, PilotNet +93% MSE
  • QAT provides zero accuracy benefit across all architectures (4-bit capacity ceiling)
  • Practical impact: 3.4°-7.2° additional steering error

Eco-Efficiency Scaling (RQ3)​

  • 8.5× eco-efficiency improvement for optimal configuration (MiniNet)
  • Super-linear scaling with architectural simplification (2.0×-8.5× EER)
  • 11.7× carbon footprint reduction (12.9 µg → 1.1 µg CO₂ per 1,000 inferences)
  • 233 Hz control loop capability (4.30 ms latency) enables real-time operation

Project Status​

  • ✅ Training Pipeline: Complete
  • ✅ Quantization & Conversion: Complete
  • ✅ Benchmarking: Complete
  • ✅ Statistical Analysis: Complete
  • ✅ Paper Submission: In Review
  • ✅ Documentation: In Progress
  • ✅ Pre-trained Models: Complete
Last Updated: December 2025
 
  • Like
  • Fire
  • Love
Reactions: 19 users
Oversubscription, well I was quietly surprised, maybe others were as well.
0.17 was attainable, 0.165 if you happened to be heading the pack in front of 5 million other hopefuls, though not many went through at that price.

So how are we positioned moving into 2026? Very, very nicely in my opinion: a clear roadmap, chip developments, happy partners, progress within the company. The landscape is so much more advanced than a few years back. Do we now have a real launch pad? I believe that we do. What do you lot think?

Is 2026 our year, or are we still dreaming? What's neuromorphic? What's Edge AI? What's an Event-Based Processor? What does SNN mean? What's, what's, what's... we have advanced, and so have the companies who matter.

Stay focused, Akida will deliver in my opinion
Clearly 2026 is BrainChip's year, with Onsor and many others bringing Akida to market.
We're going to need a bigger bank 🏦
 
  • Like
  • Love
Reactions: 13 users

7fĂźr7

Top 20
Clearly 2026 is BrainChip's year, with Onsor and many others bringing Akida to market.
We're going to need a bigger bank 🏦
Exactly 👍 and if not..I would guess 2027
Wohooooo
 
  • Like
Reactions: 2 users

manny100

Top 20
While Paddington Robotics aka P9R7 continue to be rather secretive about what they do on their minimalistic website (https://paddington-robotics.com), the London-based startup somewhat opened up about their work (initially cobots in supermarkets /grocery stores) on LinkedIn in recent weeks, where they also just introduced a few of their employees (according to the company profile, their team currently still consists of 10 people max).

From a BRN shareholder’s perspective, their most interesting team member is of course Gregor Lenz (see my tagged post above), who joined Paddington Robotics in April. With his PhD in Neuromorphic Computing, his SynSense background and hands-on experience with both Loihi and Akida, plus his deep tech-startup experience as Co-Founder and former CTO of Neurobus, he is the perfect guy to promote neuromorphic computing within his company in his role as P9R7’s perception stack lead.

“We are building robots which help people do more, not replace them.”
“We’re building for real world impact, not tech for technology’s sake.”
Zehan Wang, Founder and CEO of Paddington Robotics (P9R7)



View attachment 93596



View attachment 93601



View attachment 93597



View attachment 93598



View attachment 93599 View attachment 93600
I guess if you can afford to buy a robot for house duties in, say, 2030/2 plus, and if it can clean, wash clothes, do dishes, maybe cook meals, watch the kids and help them with homework, and several other tasks, it could have a reasonable payback period.
Like TVs, once they sell in volume every home will have one or two.
 
  • Like
Reactions: 1 users

7fĂźr7

Top 20
I guess if you can afford to buy a robot for house duties in say 2030/2 plus and if it can clean, wash clothes, dishes, maybe meals, watch the kids and help them with homework and several other tasks it could have a reasonable pay back period.
Like TVs once they sell in volume every home will have one or two.
Robots will have a huge impact on our lives—whether in households, industry, or medicine—equipped with sensors that can carry out a range of measurements within seconds to deliver initial diagnoses. But I’m still skeptical about them being used for unsupervised childcare, and I also doubt they’ll replace jobs on a massive scale. Jobs mean income… and that income generates taxes and social security contributions (at least in Germany). If all of that disappears, the system collapses—unless we somehow start paying robots. 🫤

So I see robots and AI more as support rather than a replacement for human labor, etc.

Just my opinion.
 
  • Like
  • Thinking
Reactions: 10 users
Clearly 2026 is BrainChip's year, with Onsor and many others bringing Akida to market.
We're going to need a bigger bank 🏦
Hopefully bigger than our current 😂

 
  • Haha
  • Like
  • Fire
Reactions: 8 users

manny100

Top 20
Robots will have a huge impact on our lives—whether in households, industry, or medicine—equipped with sensors that can carry out a range of measurements within seconds to deliver initial diagnoses. But I’m still skeptical about them being used for unsupervised childcare, and I also doubt they’ll replace jobs on a massive scale. Jobs mean income… and that income generates taxes and social security contributions (at least in Germany). If all of that disappears, the system collapses—unless we somehow start paying robots. 🫤

So I see robots and AI more as support rather than a replacement for human labor, etc.

Just my opinion.
I think robotics will replace jobs no one really wants to do because it's hard work.
The problem, especially for young people, is no work = boredom = trouble.
So maybe reduced hours?
Everyone thought when computers came in that mass jobs would be lost, but that never happened.
It will all sort itself out.
I think robots with a 'hotline' to mum and dad would work with kids, because robots run by rules and, even though they learn and adapt, would never let kids get away with anything. Plus you don't have the worry of leaving your kids with 'humans' unsupervised. I mean the humans unsupervised.
 
  • Like
Reactions: 7 users

Diogenese

Top 20
I think robotics will replace jobs no one really wants to do because its hard work.
The problem is especially for young people no work = boredom = trouble.
So maybe reduced hours?
Everyone thought when computers came in that mass jobs would be lost - but that never happened.
It will all sort itself out.
I think robotics with a 'hotline' to mum and dad would work with kids because robots run by rules and even though they learn and adapt would never let kids get away with anything plus you do not have the worry of leaving your kids with 'humans' unsupervised. I mean the humans unsupervised.
Never letting kids get away with anything sounds depressing.
 
  • Like
  • Fire
  • Love
Reactions: 7 users
Oversubscription, well I was quietly surprised, maybe others were as well.
0.17 was attainable, 0.165 if you happened to be heading the pack in front of 5 million other hopefuls, though not many went through at that price.

So how are we positioned moving into 2026, very, very nicely in my opinion...a clear roadmap, chip developments, happy partners, progress within the company, the landscape is so much more advanced than a few years back, do we now have a real launch pad, I believe that we do, what do you lot think?

Is 2026 our year? or are we still dreaming, what's neuromorphic, what's Edge AI, what's an Event Based Processor, what's SNN mean, what's, what's what's......we have advanced and so have the companies whom matter.

Stay focused, Akida will deliver in my opinion
bots GIF
Me personally, I feel it'll be a steady race. But those glasses that predict a seizure coming on: will BrainChip be recognised for this game-breaking product? Hmmm, or is it all about the revenue? If so, industry will only know the truth, and hopefully they jump on board. A bit like NASA. Could you imagine the news headlines around the world: a little-known company, BrainChip, is playing a major role in the space age going forward, plus in the military.
 
  • Like
Reactions: 4 users
Robots will have a huge impact on our lives—whether in households, industry, or medicine—equipped with sensors that can carry out a range of measurements within seconds to deliver initial diagnoses. But I’m still skeptical about them being used for unsupervised childcare, and I also doubt they’ll replace jobs on a massive scale. Jobs mean income… and that income generates taxes and social security contributions (at least in Germany). If all of that disappears, the system collapses—unless we somehow start paying robots. 🫤

So I see robots and AI more as support rather than a replacement for human labor, etc.

Just my opinion.
The only problem with that logic is one company will introduce something, and then their competitors will introduce something similar to compete. Greed is another motivator, because a Billionaire will want to be a Trillionaire and then a Squillionaire. It will be someone else's problem to fix. Too late by then. A bit like nuclear weapons. Surely no one would want them. One has them, they all want them.

SC
 
  • Like
Reactions: 1 users
I think robotics will replace jobs no one really wants to do because its hard work.
The problem is especially for young people no work = boredom = trouble.
So maybe reduced hours?
Everyone thought when computers came in that mass jobs would be lost - but that never happened.
It will all sort itself out.
I think robotics with a 'hotline' to mum and dad would work with kids because robots run by rules and even though they learn and adapt would never let kids get away with anything plus you do not have the worry of leaving your kids with 'humans' unsupervised. I mean the humans unsupervised.
There was the case a few years ago where a robot at Tesla allegedly had a human worker by the throat.

SC
 

7fĂźr7

Top 20
The only problem with that logic is one company will introduce something and then their competitors will introduce similar to compete. Greed is another motivator because a Billionaire will want to be a Trillionaire and then Squllionaire. It will be someone else's problem to fix. Too late by then. Bit like nuclear weapons. Surely none would want them. One has them, they all want them.

SC

Hmmm sure companies want profit, but politicians and regulators tend to step in to slow things down with safety rules. We can be glad nuclear weapons aren’t in ‘normal’ use — and that’s exactly why high-risk technologies won’t just get a blanket green light. Air taxis show it well: the tech exists, but safety, insurance, and infrastructure still don’t fit cleanly within the regulatory framework.

My opinion… we will see it when it happens I guess
 
  • Like
  • Thinking
Reactions: 2 users
Hmmm sure companies want profit, but politicians and regulators tend to step in to slow things down with safety rules. We can be glad nuclear weapons aren’t in ‘normal’ use — and that’s exactly why high-risk technologies won’t just get a blanket green light. Air taxis show it well: the tech exists, but safety, insurance, and infrastructure still don’t fit cleanly within the regulatory framework.

My opinion… we will see it when it happens I guess
Fair enough, but my main point was mankind is its own worst enemy. There was regulation around the new EV technology, but Elon Musk started DOGE and got rid of the people who were carrying out the 2,000-odd safety investigations of his Tesla cars. Problem fixed. People in power who have narcissistic tendencies, ego power trips or greed can cause a lot of harm. IMO

SC
 
  • Like
  • Thinking
  • Fire
Reactions: 9 users

7fĂźr7

Top 20
Fair enough but my main point was mankind is it's own worse enemy. There was regulation around the new ev technology but Elon Musk started DOGE and got rid of the people who were carrying out the 2000 odd safety investigations of his Tesla cars. Problem fixed. People in power that have narcissistic tendencies, ego power trips or greed can cause a lot of harm. IMO

SC

I don’t know maaan I just want to get richer

 
  • Haha
  • Like
  • Thinking
Reactions: 5 users
A relevant interview with Chris Eliasmith of ABR makes for an interesting read:

It has a few relevant parts such as:

SB: Your group has also come up with Legendre Memory Units to represent time. Listeners may remember that BrainChip have been using this approach as well. Can you talk about what they are and why they’re useful?

CE: So we were really working on, ‘How does the brain represent time?’ But we also came to realize that, well, this is a problem in machine learning. Time series is a massive area of research, and it’s really hard. And people have all kinds of different recurrent networks, such as LSTMs and GRUs and a million variants on each of these. Transformers are now what people are using, where it’s not a recurrent network but it kind of spreads time out in space so you can just process stuff in a feedforward way. So there are all of these different approaches.

We started applying this core dynamical system to these problems. And so the obvious thing to do, I think, was basically: you take that linear system—so this is the thing representing the temporal information—and then you put a non-linear layer afterwards, so you can then manipulate that temporal representation however you want. And you’ll learn that using normal backprop. And that’s what we call the Legendre Memory Unit.

More recently, people have taken that exact same structure and called it a state-space model, for obvious reasons—because basically, having a linear dynamical system and then a non-linear layer, that’s a state-space model. And that’s what BrainChip is using, for instance.
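The structure he describes (a linear dynamical system holding temporal state, followed by a learned non-linear layer) can be sketched generically. Note that the A and B below are random placeholders just to show the shape of the computation, not the fixed Legendre-derived matrices an actual LMU uses:

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, d_in, d_hidden = 16, 1, 32

# Linear state-space memory: x_t = A x_{t-1} + B u_t
# (In an LMU, A and B are fixed matrices derived from Legendre polynomials;
# random placeholders are used here only to illustrate the structure.)
A = rng.normal(0, 0.1, (d_state, d_state))
B = rng.normal(0, 0.1, (d_state, d_in))
# Trainable non-linear readout applied after the linear system
W = rng.normal(0, 0.1, (d_hidden, d_state))

def run(u_seq):
    x = np.zeros(d_state)
    hs = []
    for u in u_seq:
        x = A @ x + B @ np.atleast_1d(u)   # linear temporal memory
        hs.append(np.tanh(W @ x))          # non-linear layer on top
    return np.stack(hs)

h = run(np.sin(np.linspace(0, 3, 20)))
print(h.shape)
```

In training, only the readout (and any layers stacked above it) is learned by backprop while the linear system carries the temporal representation, which is exactly the linear-system-plus-nonlinearity split he equates with state-space models.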


Also,

CE: And so for the last couple of years, that’s what we’ve really been focused on: building a chip that can natively run state-space models, run it extremely efficiently—because it’s specifically designed to do that—and fabricate that chip, and go to customers and start getting into all the many different products that you might be interested in having that in.

So that’s something that we’re really excited about, because we just got the chip back for it at the end of August. We got the chip back, we tested it, and we had it up and running and setting world records in low-power running state-space models within a week and a half.

SB: Talk more about this chip. I’m very interested!

CE: It’s not a neuromorphic chip, in the sense that it’s not event-based. It is a neural accelerator, so it has compute right near the memory. The memory is actually a little bit exotic—it’s called MRAM, so magnetoresistive RAM, which means that it’s non-volatile—so you can load your network on there, you can basically leave it, shut off, and it draws almost no power until you need it—and then it can run the model, which is really cool.

We’re able to do full speech recognition. So it could be sitting here basically typing out everything that I’m saying with about a hundred times less power than other edge hardware that’s available on the market—under 30 milliwatts. We can have it typing out whatever language you’re speaking in.

We can do other things with it, too. We can use it to control a device. So you can basically tell the device what you want to do, using natural language—you don’t have to memorize keywords or key phrases, you just say what you want. We’re also working with customers who want to use it to do things like monitor your biosignals—your heartbeat, your O2, your sweating, you know, anything. You can monitor all that and do on-device inference about the state of the person, warn them that they’re going to have a seizure if it’s EEG, or that they’re having some kind of heart palpitation, or what have you.

And we just started our early access program. So we’re working with customers, getting the hardware in their hands, helping them integrate that into their applications. We’re super excited about what this chip can do. It’s just kind of blowing the competition out of the water from a power and efficiency and performance perspective.

REC: An interesting aside, which is kind of like a reality check for me—a group of us went to speak to a DARPA director not long ago, and we were selling an idea of which one of the primary focuses was that it was basically extremely low power. And she could not care less about the power. She cared mostly about latency. That is the thing. And this was an embodied AI application, Giulia. She said, “I will burn all the power that I need to if you can get me the speed and the decisions to happen as quickly as possible, and as effectively as possible.” Which for me was like, “What!?” I expected that the power efficiency argument would’ve been a winner on the day.

It was interesting to me. We were trying to argue for small drones with small brains and so on, and she was like, eh, but what’s the latency? When do you make the decision? Anyway, very interesting.


A separate thing I don't recall people mentioning was the YouTube presentation he did a few weeks ago, in which he went into his new chip in more detail. There were a few things worth noting, such as some specs (e.g. the chip uses a 22 nm process node) and a comparison with competitors.







A few takeaways:
  1. This gives further confirmation that ABR's LMU and BrainChip's state-space models are very similar competitive technologies.
  2. The rush to get Akida Cloud out was likely partly a response to the threat ABR posed. Akida's Gen 2 chip is still in production. However, while ABR have a chip now, Akida Gen 2 hardware was available to customers from 5th August (see link below), slightly earlier than ABR's chip. ABR got their chip back at the end of August, then had a few weeks of testing plus logistics before it could be provided to customers. Akida 2 would likely have been the FPGA prototype, so probably not the final version with maximum efficiency, but it allowed people to test at least a month earlier than ABR. In general, any new prototype chip can be released to customers this way, which provides an edge.
  3. In the YouTube clip, ABR were comparing their new chip to the 1st-gen Akida and saying it wasn't comparable: it couldn't do more complex tasks like automatic speech recognition. I would guess Gen 2 would have similar efficiency / performance to ABR's new chip, unless there are significant improvements BrainChip have made to hardware or algorithm efficiency (patent pending). Both chips are being manufactured in 22 nm from memory, so performance should be comparable when these values eventually become available.
  4. It's probably no surprise that ABR are working with customers to implement solutions. However, even if ABR have better hardware, it doesn't mean customers will rush to adopt it over Akida Gen 2. BrainChip has spent many years building up its ecosystem and working with customers, and has other advantages of its own. Furthermore, BrainChip is more focused on IP than chips, which means they are working with some different customers.
  5. BrainChip's competitive advantage may not be as high as some people think. Akida Gen 3 will probably be important to ensuring they retain it.


LAGUNA HILLS, Calif. – Aug 5th, 2025 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based neuromorphic AI, today announced launch of the BrainChip Developer Akida Cloud, a new cloud-based access point to multiple generations and configurations of the company’s Akida™ neuromorphic technology. The initial Developer Cloud release will feature the latest version of Akida’s 2nd generation technology,
 
  • Like
  • Fire
  • Love
Reactions: 21 users