BRN Discussion Ongoing

FJ-215

Regular
Sadly, Richard Resseguie is not the only Richard to have left BrainChip this month (that said, hopefully there won’t be a Richard III, other than on some of our bookshelves…):

Richard Chevalier, who had been with our Toulouse-based office since 2018 (!), has joined Nio Robotics (formerly known as Nimble One, not to be confused with Nimble AI) as their Vice President of Platform Engineering:






One can only hope that he will spruik BrainChip’s technology to his new employer - as far as I can tell, there is no indication that Nio Robotics have already been exploring neuromorphic computing for their robots.




Nio Robotics is currently building Aru, a shape-shifting mobile robot for industrial environments, but they also say in their self-description on LinkedIn that they are “reinventing movement to create the first robotic assistant for homes”.
Something our CTO Tony Lewis, who is also a robotics researcher, will likely find very intriguing.






Watch Aru in action below: climbing stairs, reshaping itself to avoid obstacles, opening doors, etc.






Hi Fran,

Very sad, but thank you for sharing.
 
  • Like
  • Thinking
Reactions: 4 users

CHIPS

Regular
I hope that BrainChip will become huge (okay, I also hope so for the sake of my account) so that all those people leaving BRN after such a short time will truly regret it.

The second Richard had stayed with BRN for over seven years, so I can understand that he wants a change. However, the first Richard probably took the BRN job to fill the gap until he found something else. It is not always BrainChip's fault that people leave!
 
  • Like
Reactions: 4 users

TopCat

Regular
I haven’t heard of EMASS before. Anyone else?


September 16, 2025

EMASS Emerges from Stealth to Redefine Edge AI​


ECS-DoT 22nm processor delivers always-on, milliWatt-scale local intelligence
LOS ANGELES (Sept. 16, 2025) – EMASS, a Nanoveu subsidiary emerging from stealth with next-generation semiconductor technology, has introduced the ECS-DoT, their edge AI system-on-chip (SoC). The new design enables always-on, milliWatt-scale intelligence for edge devices, eliminating the need for cloud-based computation. With this chip, EMASS is targeting the extreme edge of the network, an area occupied by compact and lightweight connected devices powered by small batteries. Application examples include medical wearables and sensor modules operating adjacent to the sensors they support.
The ECS-DoT is designed with four megabytes of on-board SRAM, enabling AI computations to run efficiently on edge and IoT devices. By processing data locally, the chip dramatically reduces latency and power consumption, opening the door for always-on intelligence in wearables, drones and predictive maintenance-dependent systems.
“We are thrilled to bring EMASS out of stealth and introduce the ECS-DoT to market, redefining what’s possible at the edge,” said Mark Goranson, CEO of EMASS. “OEMs have told us that power efficiency is the defining factor for edge AI, and ECS-DoT delivers on that promise while enabling always-on, zero-lag intelligence directly on devices.”
Compared with leading competitors, ECS-DoT operates up to 93% faster while consuming 90% less energy, all while supporting true multimodal sensor fusion on-device, eliminating the need for cloud processing.

Key ECS-DoT metric comparison:

Metric | EMASS ECS-DoT | Competitor A | Competitor B
Power per Inference | 0.1-5 mW | 5-100 mW | 30-150 mW
Latency | <10 ms | 10-15 ms | 10-100 ms
Energy per Inference | 1-10 µJ | 30-150 µJ | 100-2,000 µJ
On-Device Memory | Up to 2MB SRAM + 2MB MRAM/RRAM | Up to 1MB SRAM | 2MB SRAM with optional 2MB MRAM
Multimodal Sensor Fusion | Yes, milliwatt-scale | Limited | Limited
Always-On Viable? | Yes | No | No
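
For a quick sanity check on how those columns hang together: energy per inference is roughly average power × latency, and the units line up neatly since 1 mW × 1 ms = 1 µJ. A minimal sketch (ranges copied straight from the table above; the press release doesn't state the measurement workloads, so treat this as unit arithmetic only):

# Unit arithmetic only: energy (µJ) ≈ average power (mW) × latency (ms),
# since 1 mW × 1 ms = 1 µJ. Ranges are taken from the comparison table above.
def energy_uj(power_mw: float, latency_ms: float) -> float:
    return power_mw * latency_ms

print(energy_uj(0.1, 10))   # 1.0 µJ   -> low end of the ECS-DoT column (0.1 mW, <10 ms)
print(energy_uj(30, 10))    # 300 µJ   -> low end of Competitor B (30 mW, 10 ms)
print(energy_uj(150, 100))  # 15000 µJ -> beyond the quoted 2,000 µJ, so the vendor's
                            # high-power figures presumably pair with shorter latencies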

 
  • Wow
  • Thinking
Reactions: 3 users

Frangipani

Top 20
Here is the recording of Friday’s InsightJam expert panel on AI Infrastructure Strategy: Build, Buy, and Orchestrate for Intelligence at Scale, also featuring Jonathan Tapson:









“This roundtable brings together industry leaders to examine the strategic frameworks that drive AI infrastructure investments, exploring the true cost dynamics beyond obvious compute expenses to uncover hidden costs from ungoverned AI proliferation across business units. Their discussion will focus on balancing build-versus-buy decisions, examining governance frameworks that enable coordinated infrastructure choices while avoiding technological silos, and identifying critical decision points that signal when fundamental strategy shifts are needed.”
 
  • Like
  • Love
  • Fire
Reactions: 9 users
Asking AI... could Morse Micro and BrainChip be working together?

This suggests that while they are both partners with MegaChips, their individual technologies address different aspects of the evolving IoT and AI landscape. Morse Micro focuses on the connectivity layer with Wi-Fi HaLow, while BrainChip focuses on efficient AI processing at the edge. Therefore, a direct collaboration between them would likely involve integrating their distinct technologies to create a more comprehensive solution, such as a Wi-Fi HaLow-enabled device with integrated neuromorphic processing capabilities.
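
To make that concrete, here is a purely hypothetical sketch of what such a combined device could look like in software. Nothing like this has been announced by either company: it simply assumes the HaLow radio shows up as an ordinary IP network interface (HaLow is standard 802.11ah Wi-Fi) and that Akida inference runs locally via BrainChip's runtime, with only the tiny classification result going over the air.

# Purely hypothetical sketch of a HaLow-connected, Akida-accelerated sensor node.
# No Morse Micro-specific API is assumed: HaLow behaves like any other IP link.
import json
import socket

import akida  # BrainChip runtime; assumes a pre-converted model file is available

model = akida.Model("classifier.fbz")      # hypothetical model file name
if akida.devices():
    model.map(akida.devices()[0])          # run on the Akida accelerator when present

def classify_and_report(frame, gateway=("192.168.1.10", 5005)):
    """frame: uint8 numpy array matching the model input; result goes out as a few dozen bytes."""
    outputs = model.predict(frame[None, ...])                  # local, on-device inference
    msg = json.dumps({"node": "sensor-01", "class": int(outputs.argmax())})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg.encode(), gateway)                        # only metadata leaves the node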
 
  • Like
  • Thinking
Reactions: 5 users

schuey

Regular
Super day all.....
 
  • Like
Reactions: 6 users
The quietest week on TSE to date.
 
  • Like
Reactions: 4 users

FJ-215

Regular
Only because it is so quiet (and I'm bored)....

Here's a shortish video from a few months ago on the NPU in Qualcomm's Snapdragon chips.



There is a real and growing focus on the edge now from the big players in the industry. MediaTek released its latest chip today, the Dimensity 9500, featuring a beefed-up NPU capable of 100 TOPS, and the Snapdragon 8 Gen 5 is rumoured to have similar specs.

The Snapdragon Summit starts overnight our time. I have no expectations of us being involved but hopefully all the buzz that is being generated around AI at the edge atm might just spill over to us.

Not holding my breath.
 
  • Like
  • Fire
Reactions: 7 users

Frangipani

Top 20
BrainChip will be exhibiting at EDHPC25, the European Data Handling & Processing Conference for space applications organised by ESA, which will take place in Elche, Spain in mid-October (https://indico.esa.int/event/552/overview).
Our partners Frontgrade Gaisler (one of the event sponsors) and EDGX will be exhibiting as well.



EDHPC25, the European Data Handling & Processing Conference for space applications organised by ESA, is less than three weeks away.

Turns out BrainChip will not only be exhibiting, but that Gilles Bézard and Alf Kuchenbuch will also be presenting a two-part tutorial on 13 October:






Adam Taylor from Adiuvo Engineering, who recently posted about ORA, “a comprehensive sandbox platform for edge AI deployment and benchmarking” (whose edge devices available for evaluation also include Akida), will be presenting concurrently.

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-474283


Three days later, on 16 October, ESA’s Laurent Hili will be chairing a session on AI Engines & Neuromorphic, which will include a presentation on Frontgrade Gaisler’s GRAIN product line, whose first device, the GR801 SoC - as we know - will combine Frontgrade Gaisler’s NOEL-V RISC-V processor and Akida.





The session’s last speaker will be AI vision expert Roland Brochard from Airbus Defense & Space Toulouse, who has been involved in the NEURAVIS proposal with us for the past 18 months or so:


https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-429615




In April, Kenneth Östberg’s poster on GRAIN had revealed what NEURAVIS actually stands for: Neuromorphic Evaluation of Ultra-low-power Rad-hard Acceleration for Vision Inferences in Space.

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-459275




It appears highly likely that Roland Brochard will be giving a presentation on said NEURAVIS proposal in his session, given that a paper co-authored by 15 researchers from all consortium partners involved (Airbus Toulouse, Airbus Ottobrunn, Frontgrade Gaisler, BrainChip, Neurobus, plus ESA) was uploaded to the conference website.

This paper has the exact same title as Roland Brochard’s EDHPC25 presentation, namely “Evaluation of Neuromorphic computing technologies for very low power AI/ML applications” (which is also almost identical to the title of the original ESA ITT, except that the words “…in space” are missing, cf. my July 2024 post above):





Full paper in next post due to the upload limitation of 10 files per post…
 
  • Like
  • Love
  • Fire
Reactions: 33 users

Frangipani

Top 20
EDHPC25, the European Data Handling & Processing Conference for space applications organised by ESA, is less than three weeks away.

Turns out BrainChip will not only be exhibiting, but that Gilles Bézard and Alf Kuchenbuch will also be presenting a two-part tutorial on 13 October:





Adam Taylor from Adiuvo Engineering, who recently posted about ORA, “a comprehensive sandbox platform for edge AI deployment and benchmarking” (whose edge devices available for evaluation also include Akida), will be presenting concurrently.

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-474283


Three days later, on 16 October, ESA’s Laurent Hili will be chairing a session on AI Engines & Neuromorphic, which will include a presentation on Frontgrade Gaisler’s GRAIN product line, whose first device, the GR801 SoC - as we know - will combine Frontgrade Gaisler’s NOEL-V RISC-V processor and Akida.




The session’s last speaker will be AI vision expert Roland Brochard from Airbus Defense & Space Toulouse, who has been involved in the NEURAVIS proposal with us for the past 18 months or so:


https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-429615



In April, Kenneth Östberg’s poster on GRAIN had revealed what NEURAVIS actually stands for: Neuromorphic Evaluation of Ultra-low-power Rad-hard Acceleration for Vision Inferences in Space.

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-459275



It now appears likely that Roland Brochard will be giving a presentation on said NEURAVIS proposal in his session, given that a paper by all parties involved (Airbus Toulouse, Airbus Ottobrunn, Frontgrade Gaisler, BrainChip, Neurobus, ESA) has just been published on the event website:




Full paper in next post due to the upload limitation of 10 files per post…

Here is the full paper on the NEURAVIS proposal that evaluated both Akida COTS (AKD1500) and Akida IP:

“Two implementations of Akida are evaluated: a COTS (Commercial Off The Shelf) track for low criticality missions, and an embedded IP track involving rad-hard ASIC/FPGA technologies for higher criticality missions.”

(…)

“VII. CONCLUSION

We successfully tested three different kinds of neural networks on Akida v1 accelerator. The acceleration factor with respect to a regular CPU is interesting for highly demanding computer vision workloads. The Akida v1 is well suited for classification tasks but imposes limitations for regression tasks such as dense optical flow. We demonstrated that we need at least 8bits quantization to address this class of problems which means it will be fully possible with Akida v2. Execution time for AI models are usually deterministic with dense tensor processors, but since SNN rely on sparse processing capabilities, it introduces a dependency to processed data. In addition, it was not clear that SNN could be wrapped into a dense, layer by layer, API like GIGA but we demonstrated it is possible. This is important for portability as the choice of a platform depends on many factors, not only processing performance or efficiency.


ACKNOWLEDGMENT
This study was carried out by Airbus for the European Space Agency under ESA Contract No. 4000144912/24/NL/GLC”




 

  • Like
  • Love
  • Fire
Reactions: 39 users

Esq.111

Fascinatingly Intuitive.
Afternoon Chippers ,

Just tuned in so the following may have been posted already......

Only Australian Company My Arse.

Australian chipmaker Morse Micro secures $88 million series C capital https://share.google/WejbJCKfa0kL1mh1X

Behind a paywall unfortunately, but you get the gist.

Regards,
Esq.
 
  • Like
  • Love
  • Thinking
Reactions: 11 users

CHIPS

Regular
Afternoon Chippers ,

Just tuned in so the following may have been posted already......

Only Australian Company My Arse.

Australian chipmaker Morse Micro secures $88 million series C capital https://share.google/WejbJCKfa0kL1mh1X

Behind a paywall unfortunately, but you get the gist.

Regards,
Esq.


WHY THEM AND NOT US?

 
  • Thinking
  • Fire
  • Wow
Reactions: 4 users

miaeffect

Oat latte lover
  • Haha
  • Like
Reactions: 8 users

Frangipani

Top 20
Just another reminder that we are not alone in the Edge AI market…





Neurala's journey

From Mars rovers to factory floors, from NASA grants to commercial success, leading Neurala has been an amazing ride!​

Massimiliano Versace


Vice President, Emergent AI at Analog Devices



September 23, 2025

In 2006, AI was not mainstream.

One of my earlier memories of Neurala is pitching the idea of using Neural Networks to solve problems in robotics, computer vision, defense, biology, and more, and be met with chuckles and eventually being shown the door. When we launched Neurala, we were lonely, but we had a vision: build AI that learns like the human brain, and put it where it matters and has impact for our lives, not necessarily the cloud, but in the real world. Edge-native, brain-inspired, but more importantly, practical: like its counterpart biological intelligence, AI is designed to solve problems in ways traditional machine learning could not.

Not a popular idea back then, we stuck to it. And 100M Neurala-powered devices and counting (and lots of sweat and tears…) later, we were proven right.
And what a journey it has been!

Since we left Boston University in 2013 (Neurala was stealth for a few years), we invented Lifelong DNN (L-DNN), an AI inspired by the way our brains learn and compute, an AI that adapts continuously with minimal data and minimal compute power. We helped NASA design autonomous systems to navigate the surface of Mars, and DARPA design neuromorphic brains. We brought that same tech down to Earth, powering drones, cameras, mobile devices, and industrial lines with intelligence that adapts in real time, learning from just a simple image, pretty much like humans learn continuously from all our experience, at low power, at the “edge” (before Edge was a thing!)

In the past two years, Neurala achieved something truly remarkable: more than 100% year-over-year revenue growth, launching partnerships with industry leaders (Sony, Lattice, and more), making our product Neurala VIA modular, scalable, and insanely simple for manufacturers. Commercial success proved that our vision of edge-native, efficient, privacy-respecting AI was not only a revolutionary AI technology, but profitable, scalable, and desperately needed.
More importantly, I am proud of “Neuraliens”, the most talented minds I have ever had the privilege to work with. We built technology, products, deployed our AI in tens of millions of devices, from cell phones to drones to cameras inspecting everyday products we all buy in stores.

But we did not just build AI-powered products. We built a culture of openness, pursuit of scientific truth (ideas, not people or their hierarchies, were always the protagonist at Neurala) translated into useful AI, a company, and a future for AI beyond the massive hype that started to surround AI over the years.
One of my early investors, Warren Kats, once told me:
“Remember Max, building a company is 1% inspiration, and 99% perspiration!”.
How right he was! A few buckets of sweat later, and with Neurala breaking records in its revenues, it is time for my next leap. Today, I am leading the Emergent AI initiative at Analog Devices. The reason is simple: the world is ready for what’s next.

What’s next in AI?​

My journey in AI has been long. I still remember my first 5-neuron simulation back at the University of Trieste. It was 1996, I was an undergrad tinkering with multi-layer perceptrons. Five neurons, five hours, on a “state-of-the-art” Mac.
Almost 30 years later (no, AI is not an overnight sensation), something fundamental is changing, with AI ready to finally leave datacenters. Intelligence is moving to its next frontier, from the digital into the physical, embedded in the sensors, chips, and devices that shape our lives, from cars to robots to medical devices.

But to make this jump, the challenges for AI are one order of magnitude harder than creating AI for datacenters. Brains operate with ~86 billion neurons and over 100 trillion synapses, all on just ~20 watts. In other words, they can compute in minutes what today’s AI hardware does in megawatts. State-of-the-art GPUs draw 400–1200 watts per unit. When training modern AI models, clusters easily hit 10 megawatts or more. That makes biological brains roughly 10,000 to 1,000,000 times more energy efficient, a poor approximation, of course, since brains are still not fully understood by anyone. But we do know some of that efficiency comes from integrating memory and compute, and from using sparse, spike-driven signaling. I know that since it was my PhD thesis!

For the next wave of intelligent machines to enter our world, we need to bring this biology-inspired efficiency to life. Practically, this means collapsing the boundary between sensing and thinking by designing low-power, low-latency, physically embedded AI systems that can operate in real time, ingesting directly sensory data from the physical world.

This is my mission at ADI.

As the world’s leading analog and mixed-signal semiconductor company, with amazing capabilities in sensing, signal conditioning, and edge compute, Analog Devices is in pole position to make this shift happen and transition AI from massive data centers, straining power infrastructure, to the real world. The future of AI lies in enabling intelligent computation to live closer to where it matters: in your phone, in your car’s battery management system, in healthcare devices, and, why not, even in humanoid robots!

In this new paradigm, we blur the line between sensing, computing, and acting. Take biological vision or touch, for example: when a photon hits our eye or our fingers touch a surface, there’s no clear division between sensing and processing. What’s happening is computation. Sensing is an exposed nervous system. Which means…
AI is sensing.
At ADI, we know this well, with billions of devices already sensing and acting on the physical world, we are in the front seat of the next revolution, building novel AI compute frameworks that merge sensing and intelligence at a fraction of the power and latency possible today. Without this shift, the move from server-bound AI to edge-native AI will not happen.

A practical example clarifies why. Take humanoid robots: much of their power is burned by AI algorithms running on GPUs just to keep them balanced, aware, and responsive. But nature does this better: in biology, intelligence is fused with sensors and actuators, it is fast and power-efficient, and this is incompatible with the dominant AI paradigm, which is offloading data to centralized, power-hungry processors.

This new AI is needed for the next trillion devices, for a new generation of artificial intelligence that does not simply classify pixels, but learns from them, in real time, on-device, and within the constraints of our physical world and its unforgiving laws of energy and latency.

Neurala was born from the belief that AI should be useful. At ADI, we’re taking that idea to the next level.

To the entire Neurala team, partners, and customers, thank you for believing in the mission before it was cool. We built something real that touched millions of lives. And of course, without our investors, Benjamin Lambert, Cesare Maifredi , Julien Mialaret , Tony Palcheck, Tim Draper, Katie Rae, and many more, Neurala could have not existed.

Now it’s time to build what comes next.

Max Versace
 
  • Like
  • Thinking
  • Wow
Reactions: 7 users

Frangipani

Top 20
Early access abstract only but some interesting authors using Akida and Raspberry Pi.



Fernando Sevilla Martínez
e-Health Center, Universitat Oberta de Catalunya UOC, Barcelona, Spain
Volkswagen AG, Wolfsburg, Germany

Jordi Casas-Roma
Computer Vision Center, Universitat Autònoma de Barcelona, Barcelona, Spain

Laia Subirats
e-Health Center, Universitat Oberta de Catalunya UOC, Barcelona, Spain

Raúl Parada
Centre Tecnològic de Telecomunicacions de Catalunya CTTC/CERCA, Barcelona, Spain


Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware

Abstract:​

This letter presents a practical and energy-aware framework for deploying Spiking Neural Networks on low-cost hardware for edge computing on existing software and hardware components. We detail a reproducible pipeline that integrates neuromorphic processing with secure remote access and distributed intelligence. Using Raspberry Pi and the BrainChip Akida PCIe accelerator, we demonstrate a lightweight deployment process including model training, quantization, and conversion. Our experiments validate the eco-efficiency and networking potential of neuromorphic AI systems, providing key insights for sustainable distributed intelligence. This letter offers a blueprint for scalable and secure neuromorphic deployments across edge networks, highlighting the novelty of providing a reproducible integration pipeline that brings together existing components into a practical, energy-efficient framework for real-world use.

Great find, @Fullmoonfever!

It appears, though, that you haven’t yet made the connection between the paywalled IEEE Networking Letter titled “Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware” and the GitHub repository named “SevillaFe/SNN_Akida_RPI5” by first author Fernando Sevilla Martínez, which you had already discovered back in July.





At the time, the content freely accessible via GitHub enabled us to gather quite a bit of info on the use cases Fernando Sevilla Martínez and his fellow researchers (which as predicted includes Raúl Parada Medina) had had in mind, when they set out to “validate the eco-efficiency and networking potential of neuromorphic AI systems, providing key insights for sustainable distributed intelligence”.

The GitHub repository concluded with the acknowledgment that “This implementation is part of a broader effort to demonstrate low-cost, energy-efficient neuromorphic AI for distributed and networked edge environments, particularly leveraging the BrainChip Akida PCIe board and Raspberry Pi 5 hardware.” Nevertheless, one focus was evidently on V2X (= Vehicle-to-Everything) communication systems.

So I am reposting some of the July posts on this topic here to refresh our memory:

Maybe one step closer though.

Just up on GitHub.

Suggest readers absorb the whole post to understand the intent of this repository.

Especially terms such as federated learning, scalable, V2X, MQTT, prototype level and distributed.

From what I can find, if correct, the author is as below. That doesn't necessarily mean VW is involved, but I suspect they would be aware of this work in some division or department.



Fernando Sevilla Martínez (SevillaFe)

SevillaFe/SNN_Akida_RPI5 (public repository)


SNN_Akida_RPI5​

Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware

This work presents a practical and energy-aware framework for deploying Spiking Neural Networks on low-cost hardware for edge computing. We detail a reproducible pipeline that integrates neuromorphic processing with secure remote access and distributed intelligence. Using Raspberry Pi and the BrainChip Akida PCIe accelerator, we demonstrate a lightweight deployment process including model training, quantization, and conversion. Our experiments validate the eco-efficiency and networking potential of neuromorphic AI systems, providing key insights for sustainable distributed intelligence. This letter offers a blueprint for scalable and secure neuromorphic deployments across edge networks.

1. Hardware and Software Setup​

The proposed deployment platform integrates two key hardware components: the RPI5 and the Akida board. Together, they enable a power-efficient, cost-effective N-S suitable for real-world edge AI applications.

2. Enabling Secure Remote Access and Distributed Neuromorphic Edge Networks​

The deployment of low-power N-H in networked environments requires reliable, secure, and lightweight communication frameworks. Our system enables full remote operability of the RPI5 and Akida board via SSH, complemented by protocol layers (Message Queuing Telemetry Transport (MQTT), WebSockets, Vehicle-to-Everything (V2X)) that support real-time, event-driven intelligence across edge networks.

3. Training and Running Spiking Neural Networks​

The training pipeline begins with building an ANN using TensorFlow 2.x, which will later be mapped to a spike-compatible format for neuromorphic inference. Because Akida board runs models using low-bitwidth integer arithmetic (4–8 bits), it is critical to align the training phase with these constraints to avoid significant post-training performance degradation.
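
(The repository's training scripts aren't reproduced in the post; as a rough illustration of the ANN → quantize → convert flow described above, here is a minimal sketch assuming BrainChip's MetaTF tooling - quantizeml and cnn2snn - with the caveat that the exact quantization entry points differ between SDK versions.)

# Illustrative only: train a small Keras model, quantize it to low-bit integers,
# convert it with cnn2snn, and map it onto an attached Akida accelerator.
import tensorflow as tf
from quantizeml.models import quantize   # post-training quantization (low bit-widths)
from cnn2snn import convert              # maps the quantized model to Akida
import akida

ann = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])
ann.compile(optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# ann.fit(x_train, y_train, epochs=5)    # training data omitted in this sketch

quantized = quantize(ann)                # default parameters; bit-widths are configurable
model_akida = convert(quantized)         # Akida-compatible model
if akida.devices():
    model_akida.map(akida.devices()[0])  # use the PCIe board when detected
# outputs = model_akida.predict(sample_image)   # inference, as shown further down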

4. Use case validation: Networked neuromorphic AI for distributed intelligence​

4.1 Use Case: If multiple RPI5 nodes or remote clients need to receive the classification results in real-time, MQTT can be used to broadcast inference outputs​

MQTT-Based Akida Inference Broadcasting​

This project demonstrates how to perform real-time classification broadcasting using BrainChip Akida on Raspberry Pi 5 with MQTT.

Project Structure​

mqtt-akida-inference/
├── config/ # MQTT broker and topic configuration
├── scripts/ # MQTT publisher/subscriber scripts
├── sample_data/ # Sample input data for inference
├── requirements.txt # Required Python packages


Usage​

  1. Install Mosquitto on RPI5
sudo apt update
sudo apt install mosquitto mosquitto-clients -y
sudo systemctl enable mosquitto
sudo systemctl start mosquitto

  2. Run Publisher (on RPI5)
python3 scripts/mqtt_publisher.py


  3. Run Subscriber (on remote device)
python3 scripts/mqtt_subscriber.py


  4. Optional: Monitor from CLI
mosquitto_sub -h <BROKER_IP> -t "akida/inference" -v

Akida Compatibility

outputs = model_akida.predict(sample_image)


Real-Time Edge AI

This use case supports event-based edge AI and real-time feedback in smart environments, such as surveillance, mobility, and robotics.

Configurations

Set your broker IP and topic in config/config.py.
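
(The publisher script itself isn't included in the post; a minimal sketch of the idea, assuming the widely used paho-mqtt client and the broker/topic values from config/config.py, might look like this - my illustration, not the repository's code.)

# Hypothetical minimal publisher in the spirit of scripts/mqtt_publisher.py:
# run Akida inference on the RPI5 and broadcast the result to any subscriber.
import json
import time

import akida
import numpy as np
import paho.mqtt.client as mqtt

BROKER_IP = "192.168.1.50"               # placeholder; the repo reads this from config/config.py
TOPIC = "akida/inference"

model = akida.Model("model.fbz")         # pre-converted Akida model (hypothetical file name)
if akida.devices():
    model.map(akida.devices()[0])        # use the PCIe accelerator when available

client = mqtt.Client()                   # paho-mqtt 1.x style; 2.x also wants callback_api_version
client.connect(BROKER_IP, 1883)

while True:
    sample = np.random.randint(0, 255, (1, 28, 28, 1), dtype=np.uint8)  # stand-in for sensor data
    outputs = model.predict(sample)
    payload = json.dumps({"class": int(np.argmax(outputs)), "ts": time.time()})
    client.publish(TOPIC, payload)       # subscribers on other nodes receive this in real time
    time.sleep(1.0)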

4.2 Use Case: If the Akida accelerator is deployed in an autonomous driving system, V2X communication allows other vehicles or infrastructure to receive AI alerts based on neuromorphic-based vision​

This Use Cases simulates a lightweight V2X (Vehicle-to-Everything) communication system using Python. It demonstrates how neuromorphic AI event results, such as pedestrian detection, can be broadcast over a network and received by nearby infrastructure or vehicles.

Folder Structure​

V2X/
├── config.py # V2X settings
├── v2x_transmitter.py # Simulated Akida alert broadcaster
├── v2x_receiver.py # Listens for incoming V2X alerts
└── README.md


Use Case​

If the Akida accelerator is deployed in an autonomous driving system, this setup allows:

  • Broadcasting high-confidence AI alerts (e.g., "pedestrian detected")
  • Receiving alerts on nearby systems for real-time awareness

Usage​

1. Start the V2X Receiver (on vehicle or infrastructure node)​

python3 receiver/v2x_receiver.py

2. Run the Alert Transmitter (on an RPI5 + Akida node)​

python3 transmitter/v2x_transmitter.py

Notes​

  • Ensure that devices are on the same LAN or wireless network
  • UDP broadcast mode is used for simplicity
  • This is a prototype for real-time event-based message sharing between intelligent nodes
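
(Again, the transmitter and receiver scripts themselves aren't shown in the post; a stripped-down pair along the lines the README describes - plain UDP broadcast on a shared LAN - could look roughly like this. It is a sketch, not the repository's code.)

# Hypothetical minimal version of v2x_transmitter.py / v2x_receiver.py:
# plain UDP broadcast of a high-confidence neuromorphic detection event.
import json
import socket

V2X_PORT = 5005                          # placeholder; the repo keeps its settings in config.py

def broadcast_alert(event="pedestrian detected", confidence=0.97):
    """Transmitter side (RPI5 + Akida node)."""
    msg = json.dumps({"event": event, "confidence": confidence}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(msg, ("255.255.255.255", V2X_PORT))

def listen_for_alerts():
    """Receiver side (vehicle or infrastructure node on the same network)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("", V2X_PORT))
        while True:
            data, addr = s.recvfrom(1024)
            print(f"V2X alert from {addr[0]}: {json.loads(data)}")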

4.3 Use Case: If multiple RPI5-Akida nodes are deployed for federated learning, updates to neuromorphic models must be synchronized between devices​

Federated Learning Setup with Akida on Raspberry Pi 5​

This repository demonstrates a lightweight Federated Learning (FL) setup using neuromorphic AI models deployed on BrainChip Akida PCIe accelerators paired with Raspberry Pi 5 devices. It provides scripts for a centralized Flask server to receive model weight updates and a client script to upload Akida model weights via HTTP.

Overview​

Neuromorphic models trained on individual RPI5-Akida nodes can contribute updates to a shared model hosted on a central server. This setup simulates a federated learning architecture for edge AI applications that require privacy, low latency, and energy efficiency.

Repository Structure​

federated_learning/
├── federated_learning_server.py # Flask server to receive model weights
├── federated_learning_client.py # Client script to upload Akida model weights
├── model_utils.py # (Optional) Placeholder for weight handling utilities
├── model_training.py # (Optional) Placeholder for training-related code
└── README.md


Requirements​

  • Python 3.7+
  • Flask
  • NumPy
  • Requests
  • Akida Python SDK (required on client device)
Install the dependencies using:

pip install flask numpy requests

Getting Started​

1. Launch the Federated Learning Server​

On a device intended to act as the central server:

python3 federated_learning_server.py

The server will listen for HTTP POST requests on port 5000 and respond to updates sent to the /upload endpoint.
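
(A minimal sketch of what such a server could look like, assuming Flask as listed in the requirements; the repository's actual federated_learning_server.py may differ.)

# Hypothetical minimal federated_learning_server.py: accept weight updates on /upload.
from flask import Flask, request

app = Flask(__name__)
received_updates = []                    # a real setup would aggregate these into a shared model

@app.route("/upload", methods=["POST"])
def upload():
    payload = request.get_json(force=True)
    received_updates.append(payload.get("weights", []))
    return "Model weights uploaded successfully."

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)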

2. Configure and Run the Client​

On each RPI5-Akida node:

  • Ensure the Akida model has been trained.
  • Replace the SERVER_IP variable inside federated_learning_client.py with the IP address of the server.
  • Run the script:
python3 federated_learning_client.py

This will extract the weights from the Akida model and transmit them to the server in JSON format.
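
(And a matching sketch of the client side, assuming the requests library and the Akida Python SDK's layer variable accessors; again my illustration rather than the repo's federated_learning_client.py.)

# Hypothetical minimal client: pull weights out of the local Akida model and POST them as JSON.
import numpy as np
import requests

SERVER_IP = "192.168.1.100"              # placeholder, replaced with the real server IP as above

def upload_akida_weights(model_akida):
    weights = []
    for layer in model_akida.layers:
        for var in layer.get_variable_names():
            weights.append({"layer": layer.name,
                            "variable": var,
                            "values": np.asarray(layer.get_variable(var)).tolist()})
    resp = requests.post(f"http://{SERVER_IP}:5000/upload", json={"weights": weights})
    print(resp.text)                     # "Model weights uploaded successfully." on success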

Example Response​

After a successful POST:

Model weights uploaded successfully.


If an error occurs (e.g., connection refused or malformed weights), you will see an appropriate status message.

Security Considerations​

This is a prototype-level setup for research. For real-world deployment:

  • Use HTTPS instead of HTTP.
  • Authenticate clients using tokens or API keys.
  • Validate the format and shape of model weights before acceptance.

Acknowledgements​

This implementation is part of a broader effort to demonstrate low-cost, energy-efficient neuromorphic AI for distributed and networked edge environments, particularly leveraging the BrainChip Akida PCIe board and Raspberry Pi 5 hardware.

Great find, as per usual @Fullmoonfever!

I just discovered this research paper titled "Spiking neural networks for autonomous driving" which Fernando Sevilla Martinez (Data Science Specialist, Volkswagen) co-authored. The paper was published in December 2024.






Fernando Sevilla Martínez's Github activity, which FMF just uncovered, demonstrates Akida-powered neuromorphic processing for V2X and federated learning prototypes.

CARIAD, Volkswagen's software company, have been working on developing and implementing V2X.




Great find, @Fullmoonfever!

This is his LinkedIn Profile:




I came across the name Fernando Sevilla Martínez before, in connection with Raúl Parada Medina, whom I first noticed liking BrainChip LinkedIn posts more than a year ago (and there have been many more since… 😊).

Given that Raúl Parada Medina describes himself as an “IoT research specialist within the connected car project at a Spanish automobile manufacturer”, I had already suggested a connection to the Volkswagen Group via SEAT or CUPRA at the time.

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-424590







Extremely likely the same Raúl Parada Medina whom you recently spotted asking for help with Akida in the DeGirum Community - very disappointingly, no one from our company appears to have been willing to help solve this problem for more than 3 months!

(…)

Anyway, as you had already noticed in your first post on this DeGirum enquiry, Raúl Parada Medina (assuming it is the same person, which I have no doubt about) and Fernando Sevilla Martínez are both co-authors of a paper on autonomous driving:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-450543


In fact, they have co-published two papers on autonomous driving, together with another researcher: Jordi Casas-Roma. He is director of the Master in Data Science at the Barcelona-based private online university Universitat Oberta de Catalunya, the same department where Fernando Sevilla Martínez got his Master’s degree in 2022 before moving to Wolfsburg the following year, where he now works as a data scientist at the headquarters of the Volkswagen Group.





By the way, Akida does get a mention in the above paper “Spiking neural networks for autonomous driving: A review”, which was first submitted to Elsevier in May 2024 (around the same time when I first noticed Raúl Parada Medina liking BrainChip posts) and then resubmitted in a revised version in August 2024. It was then published online on 21 October 2024.

Apparently the three co-authors of first author Fernando Sevilla Martínez didn’t contribute much themselves to the paper; instead their role was a supervisory one.

“CRediT authorship contribution statement
Fernando S. Martínez: Writing – review & editing, Writing – original draft, Investigation.
Jordi Casas-Roma: Supervision.
Laia Subirats: Supervision.
Raúl Parada: Supervision”





(…)

(…)



Note that Fernando Sevilla Martínez not only works at the Volkswagen Group headquarters in Wolfsburg/Germany, but is also affiliated with the e-Health Center at his alma mater Universitat Oberta de Catalunya (UOC) in Barcelona.

So IMO there is a fair chance that he or his colleagues there will also experiment with Akida in the field of digital health, eg. in a hospital setting.

From the GitHub @Fullmoonfever had discovered:

“Acknowledgements​

This implementation is part of a broader effort to demonstrate low-cost, energy-efficient neuromorphic AI for distributed and networked edge environments, particularly leveraging the BrainChip Akida PCIe board and Raspberry Pi 5 hardware.”











After digging a little further, it looks more and more likely to me that CUPRA is the Spanish automobile manufacturer with the connected car project Raúl Parada Medina is currently involved in (cf. his LinkedIn profile).

Which in turn greatly heightens the probability that he and Fernando Sevilla Martínez (who works for the Volkswagen Group as a data scientist in the Volkswagen logistics data lake) have been collaborating once again, this time jointly experimenting on “networked neuromorphic AI for distributed intelligence” with the help of an Akida PCIe board paired with a Raspberry Pi 5. (https://github.com/SevillaFe/SNN_Akida_RPI5)

While the GitHub repository SNN_Akida_RPI5 is described very generally as “Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware”, and one of the use cases (involving MQTT - Message Queuing Telemetry Transport) “supports event-based edge AI and real-time feedback in smart environments, such as surveillance, mobility, and robotics” - and hence a very broad range of applications - one focus is evidently on V2X (= Vehicle-to-Everything) communication systems: Use case 4.2 in the GitHub repository demonstrates how “neuromorphic AI event results, such as pedestrian detection, can be broadcast over a network and received by nearby infrastructure or vehicles.”



CUPRA is nowadays a standalone car brand owned by SEAT, a Spanish (Catalonian to be precise) automobile manufacturer headquartered in Martorell near Barcelona. In fact, CUPRA originated from SEAT’s motorsport division Cupra Racing. Both car brands are part of the Volkswagen Group.

CUPRA’s first EV, the CUPRA Born, introduced in 2021 and named after a Barcelona neighbourhood, is already equipped with Car2X technology as standard (see video below). Two more CUPRA models with Car2X functions have since been released: CUPRA Tavascan and CUPRA Terramar.

Broadly speaking, Car2X/V2X (often, but not always used interchangeably, see the Telekom article below) stands for technologies that enable vehicles to communicate with one another and their environment in real time. They help to prevent accidents by warning other nearby vehicles of hazards ahead without visibility, as V2X can “see” around corners and through obstacles in a radius of several hundred meters, connecting all users who have activated V2X (obviously provided their vehicles support it) to a safety network.

“B. V2X technology

I. Principles

Your vehicle is equipped with V2X technology. If you activate the V2X technology, your vehicle can exchange important road traffic information, for example about accidents or traffic jams, with other road users or traffic infrastructure if they also support V2X technology. This makes your participation in road traffic even safer. When you log into the vehicle for the first time, you must check whether the V2X setting is right for you and you can deactivate V2X manually as needed.

Communication takes place directly between your vehicle and other road users or the traffic infrastructure within a close range of approximately 200 m to 800 m. This range can vary depending on the environment, such as in tunnels or in the city.

(…)


III. V2X functionalities

V2X can assist you in the following situations:

1. Warning of local hazards

The V2X function scans the range described above around your vehicle in order to inform you of relevant local hazards. To do this, driving information from other V2X users is received and analysed. For example, if a vehicle travelling in front initiates emergency braking and sends this information via V2X, your vehicle can display a warning message. Please note that your vehicle does not perform automatic driving interventions due to such warnings. In other words, it does not automatically initiate emergency braking, for example.

2. Supplement to adaptive cruise control

The V2X technology can supplement your vehicle's predictive sensor system (e.g. radar and camera systems) and detect traffic situations even more quickly to give you more time to react to them. With more precise information about a traffic situation, adaptive cruise control, for example, can respond to a tail end of a traffic jam in conjunction with the cruise control system and automatically adjust the speed. Other functions, such as manual lane change assistance, are also improved.

3. Other functionalities

Further V2X functions may be developed in future. We will inform you separately about data processing in connection with new V2X functions.

IV. Data exchange

If you activate the V2X technology, it continuously sends general traffic information to other V2X users (e.g. other vehicles, infrastructure) and allows them to evaluate the current traffic situation. The following data is transmitted for this: information about the V2X transmitter (temporary ID, type), vehicle information (vehicle dimensions), driving information (acceleration, geographical position, direction of movement, speed), information from vehicle sensors (yaw rate, cornering, light status, pedal status and steering angle) and route (waypoints, i.e. positioning data, of the last 200 m to 500 m driven).

The activated V2X technology also transmits additional data to other V2X users when certain events occur. In particular, these events include a vehicle stopping, breakdowns, accidents, interventions by an active safety system and the tail end of traffic jams. The data is only transmitted when these events occur. The following data is additionally transmitted: event information (type of event, time of event and time of message, geographical position, event area, direction of movement) and route (waypoints, i.e. positioning data, of the last 600 m to 1,000 m driven).

The data sent to other V2X users is pseudonymised. This means that you are not displayed as the sender of the information to other V2X users.

Volkswagen AG does not have access to this data and does not store it.”




Here is a February 2024 overview of Car2X hazard warning dashboard symbols in Volkswagen Group automobiles, which also shows 8 models that were already equipped with this innovative technology early last year: 7 VW models as well as the CUPRA Born. Meanwhile the number has increased to at least 13 - see the screenshot of a Motoreport video uploaded earlier this month.




And here is an informative article on Car2X/V2X by a well-known German telecommunications provider that - like numerous competitors in the field - has a vested interest in the expansion of this technology, especially relating to the “development towards nationwide 5G Car2X communications”.

(…)
 
  • Like
  • Love
  • Fire
Reactions: 18 users

TECH

Regular
Free spirits... any individual can quit at any time, within any organization, if they so desire. Would there be financial implications? Yes, in some cases, depending on what contractual obligations were signed off on at the time of the agreement.

We do seem to have established a revolving door over the last few years under Sean's leadership. Now the question is: is there some unrest within the company? Are there demands that just can't be met? Very intelligent individuals don't just up and leave without a reason. I may be way off the mark here, but it sends out a message of possible frustration. Let's all be realistic: despite what we think is solid progress, maybe, just maybe, this period of treading water has finally exhausted even the brightest minds. Hopefully I'm 100% off the mark. Or does Sean have such high expectations that certain staff see a personality that doesn't gel well with them in the workplace?

Money isn't always the driving force behind individuals' decisions to join a company in the first place, especially for the older, more mature worker. Believe it or not, I personally think that pressures and demands can cause friction within companies, and maybe Sean rubs people up the wrong way. Keeping an open mind isn't a sin.

Some may say, well, shareholders come and go, so what's the difference when staff do the same thing?

Nothing sinister in my comments, just floating and rambling.

☮️ Tech.
 
  • Like
Reactions: 10 users

Frangipani

Top 20
Great find, @Fullmoonfever!

It appears, though, that you haven’t yet made the connection between the paywalled IEEE Networking Letter titled “Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware” and the GitHub repository named “SevillaFe/SNN_Akida_RPI5” by first author Fernando Sevilla Martínez, which you had already discovered back in July.




At the time, the content freely accessible via GitHub enabled us to gather quite a bit of info on the use cases Fernando Sevilla Martínez and his fellow researchers (which as predicted includes Raúl Parada Medina) had had in mind, when they set out to “validate the eco-efficiency and networking potential of neuromorphic AI systems, providing key insights for sustainable distributed intelligence”.

The GitHub repository concluded with the acknowledgment that “This implementation is part of a broader effort to demonstrate low-cost, energy-efficient neuromorphic AI for distributed and networked edge environments, particularly leveraging the BrainChip Akida PCIe board and Raspberry Pi 5 hardware.” Nevertheless, one focus was evidently on V2X (= Vehicle-to-Everything) communication systems.

So I am reposting some of the July posts on this topic here to refresh our memory:

I just noticed on LinkedIn that the person behind Neuromorphiccore.AI (highly likely Bradley Susser, who also writes about neuromorphic topics on Medium) referred to that paper co-authored by Fernando Sevilla Martínez, Jordi Casas-Roma, Laia Subirats and Raúl Parada Medina earlier today:


 
  • Like
  • Love
  • Fire
Reactions: 20 users