BRN Discussion Ongoing

Asking AI: could Morse Micro and BrainChip be working together?

This suggests that while they are both partners with MegaChips, their individual technologies address different aspects of the evolving IoT and AI landscape. Morse Micro focuses on the connectivity layer with Wi-Fi HaLow, while BrainChip focuses on efficient AI processing at the edge. Therefore, a direct collaboration between them would likely involve integrating their distinct technologies to create a more comprehensive solution, such as a Wi-Fi HaLow enabled device with integrated neuromorphic processing capabilities.
 
  • Like
  • Thinking
Reactions: 5 users

schuey

Regular
Super day all.....
 
  • Like
Reactions: 6 users
The quietest week on TSE to date.
 
  • Like
Reactions: 4 users

FJ-215

Regular
Only because it is so quiet (and I'm bored)....

Here's a shortish video from a few months ago on the NPU in Qualcomm's Snapdragon chips.



There is a real and growing focus on the edge now from the big players in the industry. MediaTek released its latest chip today, the Dimensity 9500, featuring a beefed-up NPU capable of 100 TOPS, and the Snapdragon 8 Gen 5 is rumored to have similar specs.

The Snapdragon Summit starts overnight our time. I have no expectations of us being involved but hopefully all the buzz that is being generated around AI at the edge atm might just spill over to us.

Not holding my breath.
 
  • Like
  • Fire
Reactions: 7 users

Frangipani

Top 20
BrainChip will be exhibiting at EDHPC25, the European Data Handling & Processing Conference for space applications organised by ESA, which will take place in Elche, Spain in mid-October (https://indico.esa.int/event/552/overview).
Our partners Frontgrade Gaisler (one of the event sponsors) and EDGX will be exhibiting as well.


View attachment 88890

EDHPC25, the European Data Handling & Processing Conference for space applications organised by ESA, is less than three weeks away.

Turns out BrainChip will not only be exhibiting, but that Gilles Bézard and Alf Kuchenbuch will also be presenting a two-part tutorial on 13 October:






Adam Taylor from Adiuvo Engineering, who recently posted about ORA, “a comprehensive sandbox platform for edge AI deployment and benchmarking” (whose edge devices available for evaluation also include Akida), will be presenting concurrently.

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-474283


Three days later, on 16 October, ESA’s Laurent Hili will be chairing a session on AI Engines & Neuromorphic, which will include a presentation on Frontgrade Gaisler’s GRAIN product line, whose first device, the GR801 SoC - as we know - will combine Frontgrade Gaisler’s NOEL-V RISC-V processor and Akida.









The session’s last speaker will be AI vision expert Roland Brochard from Airbus Defense & Space Toulouse, who has been involved in the NEURAVIS proposal with us for the past 18 months or so:


https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-429615




In April, Kenneth Östberg’s poster on GRAIN had revealed what NEURAVIS actually stands for: Neuromorphic Evaluation of Ultra-low-power Rad-hard Acceleration for Vision Inferences in Space.

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-459275




It appears highly likely that Roland Brochard will be giving a presentation on said NEURAVIS proposal in his session, given that a paper co-authored by 15 researchers from all consortium partners involved (Airbus Toulouse, Airbus Ottobrunn, Frontgrade Gaisler, BrainChip, Neurobus, plus ESA) was uploaded to the conference website.

This paper has the exact same title as Roland Brochard’s EDHPC25 presentation, namely “Evaluation of Neuromorphic computing technologies for very low power AI/ML applications” (which is also almost identical to the title of the original ESA ITT, except that the words “…in space” are missing, cf. my July 2024 post above):





Full paper in next post due to the upload limitation of 10 files per post…
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 28 users

Frangipani

Top 20
Here is the full paper on the NEURAVIS proposal, which evaluated both the Akida COTS chip (AKD1500) and Akida IP:

“Two implementations of Akida are evaluated: a COTS (Commercial Off The Shelf) track for low criticality missions, and an embedded IP track involving rad-hard ASIC/FPGA technologies for higher criticality missions.”

(…)

“VII. CONCLUSION

We successfully tested three different kinds of neural networks on Akida v1 accelerator. The acceleration factor with respect to a regular CPU is interesting for highly demanding computer vision workloads. The Akida v1 is well suited for classification tasks but imposes limitations for regression tasks such as dense optical flow. We demonstrated that we need at least 8bits quantization to address this class of problems which means it will be fully possible with Akida v2. Execution time for AI models are usually deterministic with dense tensor processors, but since SNN rely on sparse processing capabilities, it introduces a dependency to processed data. In addition, it was not clear that SNN could be wrapped into a dense, layer by layer, API like GIGA but we demonstrated it is possible. This is important for portability as the choice of a platform depends on many factors, not only processing performance or efficiency.


ACKNOWLEDGMENT
This study was carried out by Airbus for the European Space Agency under ESA Contract No. 4000144912/24/NL/GLC”




 

Last edited:
  • Like
  • Love
  • Fire
Reactions: 30 users

Esq.111

Fascinatingly Intuitive.
Afternoon Chippers ,

Just tuned in so the following may have been posted already......

Only Australian Company My Arse.

Australian chipmaker Morse Micro secures $88 million series C capital https://share.google/WejbJCKfa0kL1mh1X

Behind a paywall unfortunately, but you get the gist.

Regards,
Esq.
 
  • Like
  • Love
  • Thinking
Reactions: 10 users

CHIPS

Regular
Afternoon Chippers ,

Just tuned in so the following may have been posted already......

Only Australian Company My Arse.

Australian chipmaker Morse Micro secures $88 million series C capital https://share.google/WejbJCKfa0kL1mh1X

Behind a paywall unfortunately, but you get the gist.

Regards,
Esq.


WHY THEM AND NOT US?




 
  • Thinking
  • Fire
  • Wow
Reactions: 4 users
  • Like
  • Love
  • Fire
Reactions: 15 users

miaeffect

Oat latte lover
  • Haha
  • Like
Reactions: 8 users

Frangipani

Top 20
Just another reminder that we are not alone in the Edge AI market…





Neurala's journey

From Mars rovers to factory floors, from NASA grants to commercial success, leading Neurala has been an amazing ride!​

Massimiliano Versace


Vice President, Emergent AI at Analog Devices



September 23, 2025

In 2006, AI was not mainstream.

One of my earlier memories of Neurala is pitching the idea of using Neural Networks to solve problems in robotics, computer vision, defense, biology, and more, and being met with chuckles before eventually being shown the door. When we launched Neurala, we were lonely, but we had a vision: build AI that learns like the human brain, and put it where it matters and has impact on our lives, not necessarily the cloud, but in the real world. Edge-native, brain-inspired, but more importantly, practical: like its biological counterpart, our AI is designed to solve problems in ways traditional machine learning could not.

It was not a popular idea back then, but we stuck to it. And 100M Neurala-powered devices and counting (and lots of sweat and tears…) later, we were proven right.
And what a journey it has been!

Since we left Boston University in 2013 (Neurala was stealth for a few years), we invented Lifelong DNN (L-DNN), an AI inspired by the way our brains learn and compute, an AI that adapts continuously with minimal data and minimal compute power. We helped NASA design autonomous systems to navigate the surface of Mars, and DARPA design neuromorphic brains. We brought that same tech down to Earth, powering drones, cameras, mobile devices, and industrial lines with intelligence that adapts in real time, learning from just a simple image, pretty much like humans learn continuously from all our experience, at low power, at the “edge” (before Edge was a thing!)

In the past two years, Neurala achieved something truly remarkable: more than 100% year-over-year revenue growth, launching partnerships with industry leaders (Sony, Lattice, and more), making our product Neurala VIA modular, scalable, and insanely simple for manufacturers. Commercial success proved that our vision of edge-native, efficient, privacy-respecting AI was not only a revolutionary AI technology, but profitable, scalable, and desperately needed.
More importantly, I am proud of “Neuraliens”, the most talented minds I have ever had the privilege to work with. We built technology, products, deployed our AI in tens of millions of devices, from cell phones to drones to cameras inspecting everyday products we all buy in stores.

But we did not just build AI-powered products. We built a culture of openness, pursuit of scientific truth (ideas, not people or their hierarchies, were always the protagonist at Neurala) translated into useful AI, a company, and a future for AI beyond the massive hype that started to surround AI over the years.
One of my early investors, Warren Kats, once told me:
“Remember Max, building a company is 1% inspiration, and 99% perspiration!”.
How right he was! A few buckets of sweat later, and with Neurala breaking records in its revenues, it is time for my next leap. Today, I am leading the Emergent AI initiative at Analog Devices. The reason is simple: the world is ready for what’s next.

What’s next in AI?​

My journey in AI has been long. I still remember my first 5-neuron simulation back at the University of Trieste. It was 1996, I was an undergrad tinkering with multi-layer perceptrons. Five neurons, five hours, on a “state-of-the-art” Mac.
Almost 30 years later (no, AI is not an overnight sensation), something fundamental is changing, with AI ready to finally leave datacenters. Intelligence is moving to its next frontier, from the digital into the physical, embedded in the sensors, chips, and devices that shape our lives, from cars to robots to medical devices.

But to make this jump, the challenges for AI are one order of magnitude harder than creating AI for datacenters. Brains operate with ~86 billion neurons and over 100 trillion synapses, all on just ~20 watts. In other words, they can compute in minutes what today’s AI hardware does in megawatts. State-of-the-art GPUs draw 400–1200 watts per unit. When training modern AI models, clusters easily hit 10 megawatts or more. That makes biological brains roughly 10,000 to 1,000,000 times more energy efficient, a poor approximation, of course, since brains are still not fully understood by anyone. But we do know some of that efficiency comes from integrating memory and compute, and from using sparse, spike-driven signaling. I know that since it was my PhD thesis!

For the next wave of intelligent machines to enter our world, we need to bring this biology-inspired efficiency to life. Practically, this means collapsing the boundary between sensing and thinking by designing low-power, low-latency, physically embedded AI systems that can operate in real time, ingesting directly sensory data from the physical world.

This is my mission at ADI.

As the world’s leading analog and mixed-signal semiconductor company, with amazing capabilities in sensing, signal conditioning, and edge compute, Analog Devices is in pole position to make this shift happen and transition AI from massive data centers, straining power infrastructure, to the real world. The future of AI lies in enabling intelligent computation to live closer to where it matters: in your phone, in your car’s battery management system, in healthcare devices, and, why not, even in humanoid robots!

In this new paradigm, we blur the line between sensing, computing, and acting. Take biological vision or touch, for example: when a photon hits our eye or our fingers touch a surface, there’s no clear division between sensing and processing. What’s happening is computation. Sensing is an exposed nervous system. Which means…
AI is sensing.
At ADI, we know this well, with billions of devices already sensing and acting on the physical world, we are in the front seat of the next revolution, building novel AI compute frameworks that merge sensing and intelligence at a fraction of the power and latency possible today. Without this shift, the move from server-bound AI to edge-native AI will not happen.

A practical example clarifies why. Take humanoid robots: much of their power is burned by AI algorithms running on GPUs just to keep them balanced, aware, and responsive. But nature does this better: in biology, intelligence is fused with sensors and actuators, it is fast and power-efficient, and this is incompatible with the dominant AI paradigm, which is offloading data to centralized, power-hungry processors.

This new AI is needed for the next trillion devices, for a new generation of artificial intelligence that does not simply classify pixels, but learns from them, in real time, on-device, and within the constraints of our physical world and its unforgiving laws of energy and latency.

Neurala was born from the belief that AI should be useful. At ADI, we’re taking that idea to the next level.

To the entire Neurala team, partners, and customers, thank you for believing in the mission before it was cool. We built something real that touched millions of lives. And of course, without our investors, Benjamin Lambert, Cesare Maifredi, Julien Mialaret, Tony Palcheck, Tim Draper, Katie Rae, and many more, Neurala could not have existed.

Now it’s time to build what comes next.

Max Versace
 
  • Like
  • Thinking
  • Wow
Reactions: 6 users

Frangipani

Top 20
Early access abstract only, but some interesting authors using Akida and Raspberry Pi.



Fernando Sevilla Martínez
e-Health Center, Universitat Oberta de Catalunya UOC, Barcelona, Spain
Volkswagen AG, Wolfsburg, Germany

Jordi Casas-Roma
Computer Vision Center, Universitat Autònoma de Barcelona, Barcelona, Spain

Laia Subirats
e-Health Center, Universitat Oberta de Catalunya UOC, Barcelona, Spain

Raúl Parada
Centre Tecnològic de Telecomunicacions de Catalunya CTTC/CERCA, Barcelona, Spain


Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware

Abstract:​

This letter presents a practical and energy-aware framework for deploying Spiking Neural Networks on low-cost hardware for edge computing on existing software and hardware components. We detail a reproducible pipeline that integrates neuromorphic processing with secure remote access and distributed intelligence. Using Raspberry Pi and the BrainChip Akida PCIe accelerator, we demonstrate a lightweight deployment process including model training, quantization, and conversion. Our experiments validate the eco-efficiency and networking potential of neuromorphic AI systems, providing key insights for sustainable distributed intelligence. This letter offers a blueprint for scalable and secure neuromorphic deployments across edge networks, highlighting the novelty of providing a reproducible integration pipeline that brings together existing components into a practical, energy-efficient framework for real-world use.

Great find, @Fullmoonfever!

It appears, though, that you haven’t yet made the connection between the paywalled IEEE Networking Letter titled “Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware” and the GitHub repository named “SevillaFe/SNN_Akida_RPI5” by first author Fernando Sevilla Martínez, which you had already discovered back in July.





At the time, the content freely accessible via GitHub enabled us to gather quite a bit of info on the use cases Fernando Sevilla Martínez and his fellow researchers (who, as predicted, include Raúl Parada Medina) had had in mind when they set out to “validate the eco-efficiency and networking potential of neuromorphic AI systems, providing key insights for sustainable distributed intelligence”.

The GitHub repository concluded with the acknowledgment that “This implementation is part of a broader effort to demonstrate low-cost, energy-efficient neuromorphic AI for distributed and networked edge environments, particularly leveraging the BrainChip Akida PCIe board and Raspberry Pi 5 hardware.” Nevertheless, one focus was evidently on V2X (= Vehicle-to-Everything) communication systems.

So I am reposting some of the July posts on this topic here to refresh our memory:

Maybe one step closer though.

Just up on GitHub.

Suggest readers absorb the whole post to understand the intent of this repository.

Especially terms such as federated learning, scalable, V2X, MQTT, prototype level and distributed.

From what I can find, if correct, the author is as below. That doesn't necessarily mean VW is involved, but I suspect some division / department would be aware of this work.



Fernando Sevilla Martínez

SevillaFe/SNN_Akida_RPI5 (Public)


SNN_Akida_RPI5​

Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware

This work presents a practical and energy-aware framework for deploying Spiking Neural Networks on low-cost hardware for edge computing. We detail a reproducible pipeline that integrates neuromorphic processing with secure remote access and distributed intelligence. Using Raspberry Pi and the BrainChip Akida PCIe accelerator, we demonstrate a lightweight deployment process including model training, quantization, and conversion. Our experiments validate the eco-efficiency and networking potential of neuromorphic AI systems, providing key insights for sustainable distributed intelligence. This letter offers a blueprint for scalable and secure neuromorphic deployments across edge networks.

1. Hardware and Software Setup​

The proposed deployment platform integrates two key hardware components: the RPI5 and the Akida board. Together, they enable a power-efficient, cost-effective N-S suitable for real-world edge AI applications.

2. Enabling Secure Remote Access and Distributed Neuromorphic Edge Networks​

The deployment of low-power N-H in networked environments requires reliable, secure, and lightweight communication frameworks. Our system enables full remote operability of the RPI5 and Akida board via SSH, complemented by protocol layers (Message Queuing Telemetry Transport (MQTT), WebSockets, Vehicle-to-Everything (V2X)) that support real-time, event-driven intelligence across edge networks.

3. Training and Running Spiking Neural Networks​

The training pipeline begins with building an ANN using TensorFlow 2.x, which is later mapped to a spike-compatible format for neuromorphic inference. Because the Akida board runs models using low-bitwidth integer arithmetic (4–8 bits), it is critical to align the training phase with these constraints to avoid significant post-training performance degradation.
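As a stdlib-only illustration of that low-bitwidth constraint, here is a minimal sketch of symmetric uniform quantization onto a signed 4-bit integer grid. This is purely illustrative: the actual Akida flow uses BrainChip's cnn2snn/quantizeml tooling, and the helper names below are my own, not the repo's.

```python
def quantize_uniform(weights, bits=4):
    """Map float weights onto a signed `bits`-bit integer grid.

    Illustrative stand-in for the 4-8 bit integer constraint described
    above; BrainChip's real pipeline (cnn2snn/quantizeml) is more involved.
    """
    qmax = 2 ** (bits - 1) - 1                     # e.g. 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax or 1.0
    quantized = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return quantized, scale


def dequantize(quantized, scale):
    """Recover approximate float weights from the integer grid."""
    return [q * scale for q in quantized]
```

Training-aware variants apply this rounding inside the forward pass so the network learns to tolerate it, which is the point of aligning training with the 4–8 bit constraint rather than quantizing only after the fact.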

4. Use case validation: Networked neuromorphic AI for distributed intelligence​

4.1 Use Case: If multiple RPI5 nodes or remote clients need to receive the classification results in real-time, MQTT can be used to broadcast inference outputs​

MQTT-Based Akida Inference Broadcasting​

This project demonstrates how to perform real-time classification broadcasting using BrainChip Akida on Raspberry Pi 5 with MQTT.

Project Structure​

mqtt-akida-inference/
├── config/ # MQTT broker and topic configuration
├── scripts/ # MQTT publisher/subscriber scripts
├── sample_data/ # Sample input data for inference
├── requirements.txt # Required Python packages


Usage​

  1. Install Mosquitto on RPI5
sudo apt update
sudo apt install mosquitto mosquitto-clients -y
sudo systemctl enable mosquitto
sudo systemctl start mosquitto

  2. Run Publisher (on RPI5)
python3 scripts/mqtt_publisher.py

  3. Run Subscriber (on remote device)
python3 scripts/mqtt_subscriber.py

  4. Optional: Monitor from CLI
mosquitto_sub -h <BROKER_IP> -t "akida/inference" -v

Akida Compatibility

outputs = model_akida.predict(sample_image)


Real-Time Edge AI

This use case supports event-based edge AI and real-time feedback in smart environments, such as surveillance, mobility, and robotics.

Configurations

Set your broker IP and topic in config/config.py
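To make the broadcast format concrete, here is a hedged sketch of how a publisher might package a classification result before handing it to an MQTT client such as paho-mqtt. The field names and the `node_id` identifier are my assumptions, not taken from the repo's scripts.

```python
import json
import time

TOPIC = "akida/inference"  # the topic monitored in the CLI example above


def make_inference_message(label, confidence, node_id="rpi5-01"):
    """Package an Akida classification result for MQTT broadcast.

    `node_id` is a hypothetical identifier for the publishing RPI5 node;
    the repo's actual payload schema lives in its scripts/ directory.
    """
    payload = {
        "node": node_id,
        "label": label,
        "confidence": round(float(confidence), 4),
        "timestamp": time.time(),
    }
    return TOPIC, json.dumps(payload)
```

A publisher would then call something like `client.publish(topic, payload)`, and subscribers would `json.loads` the payload on receipt.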

4.2 Use Case: If the Akida accelerator is deployed in an autonomous driving system, V2X communication allows other vehicles or infrastructure to receive AI alerts based on neuromorphic-based vision​

This Use Cases simulates a lightweight V2X (Vehicle-to-Everything) communication system using Python. It demonstrates how neuromorphic AI event results, such as pedestrian detection, can be broadcast over a network and received by nearby infrastructure or vehicles.

Folder Structure​

V2X/
├── config.py # V2X settings
├── v2x_transmitter.py # Simulated Akida alert broadcaster
├── v2x_receiver.py # Listens for incoming V2X alerts
└── README.md


Use Case​

If the Akida accelerator is deployed in an autonomous driving system, this setup allows:

  • Broadcasting high-confidence AI alerts (e.g., "pedestrian detected")
  • Receiving alerts on nearby systems for real-time awareness

Usage​

1. Start the V2X Receiver (on vehicle or infrastructure node)​

python3 receiver/v2x_receiver.py

2. Run the Alert Transmitter (on an RPI5 + Akida node)​

python3 transmitter/v2x_transmitter.py

Notes​

  • Ensure that devices are on the same LAN or wireless network
  • UDP broadcast mode is used for simplicity
  • This is a prototype for real-time event-based message sharing between intelligent nodes
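Since the repo's own transmitter/receiver scripts are not reproduced here, the following is a stdlib-only sketch of what such a UDP broadcast pair could look like. The port number, message fields, and function names are my assumptions, not the repo's config.py values.

```python
import json
import socket

V2X_PORT = 5005  # assumed default; the repo's config.py may use another port


def broadcast_alert(event, confidence, addr="255.255.255.255", port=V2X_PORT):
    """Send one detection event as a UDP datagram (broadcast by default)."""
    msg = json.dumps({"event": event, "confidence": confidence}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(msg, (addr, port))


def receive_alert(port=V2X_PORT, timeout=5.0):
    """Block until one alert arrives (or timeout expires), then decode it."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.settimeout(timeout)
        sock.bind(("", port))
        data, _ = sock.recvfrom(4096)
        return json.loads(data)
```

UDP broadcast matches the prototype-level design note above: no connection setup, so any listener on the LAN segment receives the alert, at the cost of no delivery guarantee.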

4.3 Use Case: If multiple RPI5-Akida nodes are deployed for federated learning, updates to neuromorphic models must be synchronized between devices​

Federated Learning Setup with Akida on Raspberry Pi 5​

This repository demonstrates a lightweight Federated Learning (FL) setup using neuromorphic AI models deployed on BrainChip Akida PCIe accelerators paired with Raspberry Pi 5 devices. It provides scripts for a centralized Flask server to receive model weight updates and a client script to upload Akida model weights via HTTP.

Overview​

Neuromorphic models trained on individual RPI5-Akida nodes can contribute updates to a shared model hosted on a central server. This setup simulates a federated learning architecture for edge AI applications that require privacy, low latency, and energy efficiency.
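The repo shows the upload path but not the server-side aggregation rule. As a sketch, a plain FedAvg-style equal-weight average over per-layer weight lists might look like this; the averaging rule itself is my assumption.

```python
def federated_average(client_weights):
    """Average per-layer weights uploaded by several RPI5-Akida clients.

    `client_weights` is a list with one entry per client; each entry is a
    list of layers, and each layer is a flat list of floats. Equal client
    weighting is an assumption; the repo only shows the upload side.
    """
    n_clients = len(client_weights)
    averaged = []
    for layer_group in zip(*client_weights):   # same layer across all clients
        averaged.append([sum(vals) / n_clients for vals in zip(*layer_group)])
    return averaged
```

In a real deployment the average would typically be weighted by each client's local sample count, but equal weighting keeps the synchronization step easy to follow.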

Repository Structure​

federated_learning/
├── federated_learning_server.py # Flask server to receive model weights
├── federated_learning_client.py # Client script to upload Akida model weights
├── model_utils.py # (Optional) Placeholder for weight handling utilities
├── model_training.py # (Optional) Placeholder for training-related code
└── README.md


Requirements​

  • Python 3.7+
  • Flask
  • NumPy
  • Requests
  • Akida Python SDK (required on client device)
Install the dependencies using:

pip install flask numpy requests

Getting Started​

1. Launch the Federated Learning Server​

On a device intended to act as the central server:

python3 federated_learning_server.py

The server will listen for HTTP POST requests on port 5000 and respond to updates sent to the /upload endpoint.
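As a dependency-free illustration of that request/response contract, here is a stdlib stand-in for the repo's Flask server. The endpoint path and JSON payload follow the description above; the handler and function names, and everything else, are my assumptions.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class UploadHandler(BaseHTTPRequestHandler):
    """Accept JSON model-weight uploads on POST /upload."""

    def do_POST(self):
        if self.path != "/upload":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        weights = json.loads(self.rfile.read(length))
        self.server.received.append(weights)        # stand-in for aggregation
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                   # keep output quiet
        pass


def start_server(port=5000):
    """Start the upload server in a background thread and return it."""
    server = HTTPServer(("127.0.0.1", port), UploadHandler)
    server.received = []
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A client then POSTs its weights as JSON to `/upload` and checks the `{"status": "ok"}` acknowledgment, mirroring the Flask description above.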

2. Configure and Run the Client​

On each RPI5-Akida node:

  • Ensure the Akida model has been trained.
  • Replace the SERVER_IP variable inside federated_learning_client.py with the IP address of the server.
  • Run the script:
python3 federated_learning_client.py

This will extract the weights from the Akida model and transmit them to the server in JSON format.

Example Response​

After a successful POST:

Model weights uploaded successfully.


If an error occurs (e.g., connection refused or malformed weights), you will see an appropriate status message.

Security Considerations​

This is a prototype-level setup for research. For real-world deployment:

  • Use HTTPS instead of HTTP.
  • Authenticate clients using tokens or API keys.
  • Validate the format and shape of model weights before acceptance.

Acknowledgements​

This implementation is part of a broader effort to demonstrate low-cost, energy-efficient neuromorphic AI for distributed and networked edge environments, particularly leveraging the BrainChip Akida PCIe board and Raspberry Pi 5 hardware.

Great find, as per usual @Fullmoonfever!

I just discovered this research paper titled "Spiking neural networks for autonomous driving" which Fernando Sevilla Martinez (Data Science Specialist, Volkswagen) co-authored. The paper was published in December 2024.


View attachment 88417




Fernando Sevilla Martínez's Github activity, which FMF just uncovered, demonstrates Akida-powered neuromorphic processing for V2X and federated learning prototypes.

CARIAD, Volkswagen's software company, have been working on developing and implementing V2X.



View attachment 88414

View attachment 88416

Great find, @Fullmoonfever!

This is his LinkedIn Profile:


View attachment 88420

View attachment 88421


I came across the name Fernando Sevilla Martínez before, in connection with Raúl Parada Medina, whom I first noticed liking BrainChip LinkedIn posts more than a year ago (and there have been many more since… 😊).

Given that Raúl Parada Medina describes himself as an “IoT research specialist within the connected car project at a Spanish automobile manufacturer”, I had already suggested a connection to the Volkswagen Group via SEAT or CUPRA at the time.

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-424590

View attachment 88422
View attachment 88424





View attachment 88425

Extremely likely the same Raúl Parada Medina whom you recently spotted asking for help with Akida in the DeGirum Community - very disappointingly, no one from our company appears to have been willing to help solve this problem for more than 3 months!

(…)

Anyway, as you had already noticed in your first post on this DeGirum enquiry, Raúl Parada Medina (assuming it is the same person, which I have no doubt about) and Fernando Sevilla Martínez are both co-authors of a paper on autonomous driving:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-450543

View attachment 88429

In fact, they have co-published two papers on autonomous driving, together with another researcher: Jordi Casas-Roma. He is director of the Master in Data Science at the Barcelona-based private online university Universitat Oberta de Catalunya, the same department where Fernando Sevilla Martínez got his Master’s degree in 2022 before moving to Wolfsburg the following year, where he now works as a data scientist at the headquarters of the Volkswagen Group.


View attachment 88430


View attachment 88426 View attachment 88427

By the way, Akida does get a mention in the above paper “Spiking neural networks for autonomous driving: A review”, which was first submitted to Elsevier in May 2024 (around the same time when I first noticed Raúl Parada Medina liking BrainChip posts) and then resubmitted in a revised version in August 2024. It was then published online on 21 October 2024.

Apparently the three co-authors of first author Fernando Sevilla Martínez didn’t contribute much themselves to the paper; instead their role was a supervisory one.

“CRediT authorship contribution statement
Fernando S. Martínez: Writing – review & editing, Writing – original draft, Investigation.
Jordi Casas-Roma: Supervision.
Laia Subirats: Supervision.
Raúl Parada: Supervision”





View attachment 88457
(…)

View attachment 88458
(…)
View attachment 88459



Note that Fernando Sevilla Martínez not only works at the Volkswagen Group headquarters in Wolfsburg/Germany, but is also affiliated with the e-Health Center at his alma mater Universitat Oberta de Catalunya (UOC) in Barcelona.

So IMO there is a fair chance that he or his colleagues there will also experiment with Akida in the field of digital health, e.g. in a hospital setting.

From the GitHub @Fullmoonfever had discovered:

“Acknowledgements​

This implementation is part of a broader effort to demonstrate low-cost, energy-efficient neuromorphic AI for distributed and networked edge environments, particularly leveraging the BrainChip Akida PCIe board and Raspberry Pi 5 hardware.”




View attachment 88460




View attachment 88462



After digging a little further, it looks more and more likely to me that CUPRA is the Spanish automobile manufacturer with the connected car project Raúl Parada Medina is currently involved in (cf. his LinkedIn profile).

Which in turn greatly heightens the probability that he and Fernando Sevilla Martínez (who works for the Volkswagen Group as a data scientist in the Volkswagen logistics data lake) have been collaborating once again, this time jointly experimenting on “networked neuromorphic AI for distributed intelligence” with the help of an Akida PCIe board paired with a Raspberry Pi 5. (https://github.com/SevillaFe/SNN_Akida_RPI5)

While the GitHub repository SNN_Akida_RPI5 is described very generally as “Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware”, and one of the use cases (involving MQTT - Message Queuing Telemetry Transport) “supports event-based edge AI and real-time feedback in smart environments, such as surveillance, mobility, and robotics” - and hence a very broad range of applications - one focus is evidently on V2X (= Vehicle-to-Everything) communication systems: Use case 4.2 in the GitHub repository demonstrates how “neuromorphic AI event results, such as pedestrian detection, can be broadcast over a network and received by nearby infrastructure or vehicles.”
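To make use case 4.2 concrete, here is a minimal sketch of how such a detection event could be serialised for broadcast. The topic name, field names and values are my own assumptions for illustration, not the repository’s actual schema; in the repo itself the payload would be handed to an MQTT client such as paho-mqtt.

```python
import json
import time

def make_v2x_event(event_type, lat, lon, confidence):
    """Build a hypothetical V2X event message for a neuromorphic detection.

    Field names are illustrative, not taken from SNN_Akida_RPI5.
    """
    return {
        "event": event_type,            # e.g. "pedestrian_detected"
        "position": {"lat": lat, "lon": lon},
        "confidence": confidence,       # classifier score from the edge model
        "timestamp": time.time(),
    }

# Serialise for publishing. With paho-mqtt, this payload would be passed to
# client.publish("v2x/events", payload), so nearby infrastructure or vehicles
# subscribed to the topic receive the detection in real time.
payload = json.dumps(make_v2x_event("pedestrian_detected", 41.3874, 2.1686, 0.93))
print(payload)
```

The JSON-over-MQTT pattern keeps the neuromorphic inference step and the networking step decoupled: any subscriber that understands the message schema can react, regardless of what hardware produced the detection.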

View attachment 88507


CUPRA is nowadays a standalone car brand owned by SEAT, a Spanish (Catalonian to be precise) automobile manufacturer headquartered in Martorell near Barcelona. In fact, CUPRA originated from SEAT’s motorsport division Cupra Racing. Both car brands are part of the Volkswagen Group.

CUPRA’s first EV, the CUPRA Born, introduced in 2021 and named after a Barcelona neighbourhood, is already equipped with Car2X technology as standard (see video below). Two more CUPRA models with Car2X functions have since been released: CUPRA Tavascan and CUPRA Terramar.

Broadly speaking, Car2X/V2X (often, but not always, used interchangeably - see the Telekom article below) stands for technologies that enable vehicles to communicate with one another and with their environment in real time. They help prevent accidents by warning nearby vehicles of hazards ahead that are out of sight, as V2X can “see” around corners and through obstacles within a radius of several hundred metres, connecting all users who have activated V2X (provided, obviously, that their vehicles support it) to a safety network.
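The “radius of several hundred metres” behaviour can be sketched as a simple relevance check on the receiving side: a hazard message carries a position, and the vehicle only raises a warning if that position lies within the advertised range. This is purely my own illustration, not code from any actual V2X stack.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def hazard_relevant(own_pos, hazard_pos, max_range_m=800.0):
    """Warn only if the broadcast hazard is within the quoted V2X range."""
    return haversine_m(*own_pos, *hazard_pos) <= max_range_m

# A hazard roughly 550 m ahead triggers a warning; one ~5 km away does not.
print(hazard_relevant((48.137, 11.575), (48.142, 11.575)))
print(hazard_relevant((48.137, 11.575), (48.182, 11.575)))
```

In a real deployment the effective range varies with the environment (tunnels, city canyons), as the VW text below notes, so the cut-off would be adaptive rather than a fixed constant.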

“B. V2X technology

I. Principles

Your vehicle is equipped with V2X technology. If you activate the V2X technology, your vehicle can exchange important road traffic information, for example about accidents or traffic jams, with other road users or traffic infrastructure if they also support V2X technology. This makes your participation in road traffic even safer. When you log into the vehicle for the first time, you must check whether the V2X setting is right for you and you can deactivate V2X manually as needed.

Communication takes place directly between your vehicle and other road users or the traffic infrastructure within a close range of approximately 200 m to 800 m. This range can vary depending on the environment, such as in tunnels or in the city.

(…)


III. V2X functionalities

V2X can assist you in the following situations:

1. Warning of local hazards

The V2X function scans the range described above around your vehicle in order to inform you of relevant local hazards. To do this, driving information from other V2X users is received and analysed. For example, if a vehicle travelling in front initiates emergency braking and sends this information via V2X, your vehicle can display a warning message. Please note that your vehicle does not perform automatic driving interventions due to such warnings. In other words, it does not automatically initiate emergency braking, for example.

2. Supplement to adaptive cruise control

The V2X technology can supplement your vehicle's predictive sensor system (e.g. radar and camera systems) and detect traffic situations even more quickly to give you more time to react to them. With more precise information about a traffic situation, adaptive cruise control, for example, can respond to a tail end of a traffic jam in conjunction with the cruise control system and automatically adjust the speed. Other functions, such as manual lane change assistance, are also improved.

3. Other functionalities

Further V2X functions may be developed in future. We will inform you separately about data processing in connection with new V2X functions.

IV. Data exchange

If you activate the V2X technology, it continuously sends general traffic information to other V2X users (e.g. other vehicles, infrastructure) and allows them to evaluate the current traffic situation. The following data is transmitted for this: information about the V2X transmitter (temporary ID, type), vehicle information (vehicle dimensions), driving information (acceleration, geographical position, direction of movement, speed), information from vehicle sensors (yaw rate, cornering, light status, pedal status and steering angle) and route (waypoints, i.e. positioning data, of the last 200 m to 500 m driven).

The activated V2X technology also transmits additional data to other V2X users when certain events occur. In particular, these events include a vehicle stopping, breakdowns, accidents, interventions by an active safety system and the tail end of traffic jams. The data is only transmitted when these events occur. The following data is additionally transmitted: event information (type of event, time of event and time of message, geographical position, event area, direction of movement) and route (waypoints, i.e. positioning data, of the last 600 m to 1,000 m driven).

The data sent to other V2X users is pseudonymised. This means that you are not displayed as the sender of the information to other V2X users.

Volkswagen AG does not have access to this data and does not store it.”
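The two message classes the VW text describes (traffic information sent continuously, plus additional data sent only when an event occurs) map naturally onto two record types. The sketch below just mirrors the fields listed in the quote; the type and field names are my own choices for illustration, not an actual Car2X message format.

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class PeriodicV2XMessage:
    """Broadcast continuously while V2X is active (field names illustrative)."""
    sender_id: str            # temporary, pseudonymised transmitter ID
    vehicle_dimensions: tuple # (length_m, width_m)
    position: tuple           # (lat, lon)
    heading_deg: float        # direction of movement
    speed_mps: float
    acceleration_mps2: float
    recent_waypoints: list    # last 200-500 m driven

@dataclass
class EventV2XMessage(PeriodicV2XMessage):
    """Sent in addition when an event occurs (breakdown, accident, jam tail)."""
    event_type: str = "unknown"
    event_area_m: float = 0.0
    recent_waypoints_extended: list = field(default_factory=list)  # 600-1,000 m

def new_temporary_id():
    """Pseudonymised sender ID, so the sender is not identifiable to peers."""
    return uuid.uuid4().hex[:8]

msg = EventV2XMessage(
    sender_id=new_temporary_id(),
    vehicle_dimensions=(4.3, 1.8),
    position=(41.49, 1.90),
    heading_deg=270.0,
    speed_mps=0.0,
    acceleration_mps2=0.0,
    recent_waypoints=[],
    event_type="breakdown",
)
print(msg.event_type, len(msg.sender_id))
```

The inheritance mirrors the text’s structure: an event message carries everything a periodic message does, plus the event-specific fields and a longer waypoint history.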




Here is a February 2024 overview of Car2X hazard warning dashboard symbols in Volkswagen Group automobiles, which also shows 8 models that were already equipped with this innovative technology early last year: 7 VW models as well as the CUPRA Born. Meanwhile the number has increased to at least 13 - see the screenshot of a Motoreport video uploaded earlier this month.


View attachment 88487


And here is an informative article on Car2X/V2X by a well-known German telecommunications provider that - like numerous competitors in the field - has a vested interest in the expansion of this technology, especially relating to the “development towards nationwide 5G Car2X communications”.

(…)
 
  • Like
  • Love
  • Fire
Reactions: 10 users

TECH

Regular
Free Spirits... any individual can quit at any time within any organization if they so desire. Would there be financial implications? Yes, in some cases, depending on what contractual obligations were signed off on at the time of the agreement.

We do seem to have established a revolving door over the last few years under Sean's leadership. The question now is: is there some unrest within the company? Are there demands that just can't be met? Very intelligent individuals don't just up and leave without a reason. I may be way off the mark here, but it sends out a message of possible frustration. Let's all be realistic: despite what we think is solid progress, maybe, just maybe, this period of treading water has finally exhausted even the brightest minds. Hopefully I'm 100% off the mark. Does Sean have such high expectations that, in reality, certain staff see a personality that doesn't gel well with them in the workplace?

Money isn't always the driving force behind individuals' decisions to join a company in the first place, especially for the older, more mature worker. Believe it or not, I personally think that pressures and demands can cause friction within companies, and maybe Sean rubs people up the wrong way. Keeping an open mind isn't a sin.

Some may say: well, shareholders come and go, so what's the difference when staff do the same thing?

Nothing sinister in my comments, just floating and rambling.

☮️ Tech.
 
  • Like
Reactions: 6 users

Frangipani

Top 20
Great find, @Fullmoonfever!

It appears, though, that you haven’t yet made the connection between the paywalled IEEE Networking Letter titled “Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware” and the GitHub repository named “SevillaFe/SNN_Akida_RPI5” by first author Fernando Sevilla Martínez, which you had already discovered back in July.


View attachment 91440


At the time, the content freely accessible via GitHub enabled us to gather quite a bit of info on the use cases Fernando Sevilla Martínez and his fellow researchers (who, as predicted, include Raúl Parada Medina) had had in mind when they set out to “validate the eco-efficiency and networking potential of neuromorphic AI systems, providing key insights for sustainable distributed intelligence”.

The GitHub repository concluded with the acknowledgment that “This implementation is part of a broader effort to demonstrate low-cost, energy-efficient neuromorphic AI for distributed and networked edge environments, particularly leveraging the BrainChip Akida PCIe board and Raspberry Pi 5 hardware.” Nevertheless, one focus was evidently on V2X (= Vehicle-to-Everything) communication systems.

So I am reposting some of the July posts on this topic here to refresh our memory:

I just noticed on LinkedIn that the person behind Neuromorphiccore.AI (highly likely Bradley Susser, who also writes about neuromorphic topics on Medium) referred to that paper co-authored by Fernando Sevilla Martínez, Jordi Casas-Roma, Laia Subirats and Raúl Parada Medina earlier today:


F1F23EB7-AF00-4D9C-BD7B-C2636060B9D8.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 12 users

Diogenese

Top 20
Free Spirits... any individual can quit at any time within any organization if they so desire. Would there be financial implications? Yes, in some cases, depending on what contractual obligations were signed off on at the time of the agreement.

We do seem to have established a revolving door over the last few years under Sean's leadership. The question now is: is there some unrest within the company? Are there demands that just can't be met? Very intelligent individuals don't just up and leave without a reason. I may be way off the mark here, but it sends out a message of possible frustration. Let's all be realistic: despite what we think is solid progress, maybe, just maybe, this period of treading water has finally exhausted even the brightest minds. Hopefully I'm 100% off the mark. Does Sean have such high expectations that, in reality, certain staff see a personality that doesn't gel well with them in the workplace?

Money isn't always the driving force behind individuals' decisions to join a company in the first place, especially for the older, more mature worker. Believe it or not, I personally think that pressures and demands can cause friction within companies, and maybe Sean rubs people up the wrong way. Keeping an open mind isn't a sin.

Some may say: well, shareholders come and go, so what's the difference when staff do the same thing?

Nothing sinister in my comments, just floating and rambling.

☮️ Tech.
Hi Tech,

As you say, people leave for different reasons.

If someone leaves after a short while, they may not have been a good fit, or, like YRP, they got a $50M sign-on bonus.

If a highly skilled tech leaves after a long term to join a competitor, that isn't so good, but if they join a business in a related field, that may be an opportunity to evangelize. The related business company may be seeking to import our technology. For the longer term employees, the vesting of options may be a consideration.

Certainly, the company's culture would have changed since the days of Peter and Anil, but there is still the drive to keep ahead of the competition. But we as shareholders are doing it tough waiting for the commercial breakthrough. The pressure on employees is greatly magnified while their hockey sticks gather dust.
 
  • Like
  • Fire
Reactions: 10 users

MDhere

Top 20
Just another reminder that we are not alone in our thoughts of where the brn price will be -

The BrainChip stock forecast for 2025 from the algorithm-based forecasting service Wallet Investor projected that the share price could rise to A$1.777 by the end of the year, up from A$1.071 at the end of 2023. Its BRN stock forecast suggested the price could continue rising to reach A$2.438 by December 2027.
 
  • Like
  • Haha
  • Love
Reactions: 8 users

Frangipani

Top 20
View attachment 73380


A few days ago, I had the amazing opportunity to participate in the Neuromorphic Hackathon organized by fortiss and neuroTUM. Together with my talented teammates, we tackled the challenge of pose estimation for spacecraft using event-based data and BrainChip neuromorphic hardware.

I’m proud to share that our team won the challenge! 🏆

Over five days of intense brainstorming, coding, and collaboration, we developed a solution leveraging spiking neural networks and implemented it on BrainChip's Akida platform. Our approach significantly reduced the performance gap between synthetic and real-world satellite data while demonstrating exceptional energy efficiency - essential for future space missions.

This experience was a perfect combination of solving cutting-edge problems in neuromorphic computing and gaining hands-on experience in an emerging field that is shaping the future of AI and hardware innovation.

A huge thank you to our incredible mentors Gregor Lenz (Neurobus), Arunkumar Rathinam (Universität Luxemburg) and Jules (fortiss) for their valuable guidance and expertise.

Excited to keep exploring opportunities at the intersection of neuromorphic computing and real-world challenges 🚀

Last year, a student team using Akida won the Munich Neuromorphic Hackathon, organised by neuroTUM (a student club based at TU München / Technical University of Munich for students interested in the intersection of neuroscience and engineering) and our partner fortiss (who to this day have never officially been acknowledged as a partner from our side, though).

Will Akida again help one of the teams to win this year’s challenge?!
The 2025 Munich Neuromorphic Hackathon will take place from 7-12 November.

“The teams will face interesting industry challenges posed by German Aerospace Center (DLR), Simi Reality Motion Systems and fortiss, working with Brain-inspired computing methods towards the most efficient neuromorphic processor.”

Simi Reality Motion Systems (part of the ZF Group) has been collaborating with fortiss on several projects, such as SpikingBody (“Neuromorphic AI meets tennis: real-time action recognition implemented on Loihi 2”) and EMMANÜELA (AR/VR).


D0CE1B45-6C1A-4BB3-87FC-2DEE91945DE8.jpeg




C1D35420-D31A-4260-B202-61D4724EF1AE.jpeg
491DEE47-9B91-4D92-96C7-4E6BCCF3312F.jpeg
54FE32A9-1C58-4346-AE4F-FC2345334AD2.jpeg


68A79B64-3E9A-4627-B923-47BF3D2E2509.jpeg
A0CDE32B-5BE5-4217-9B02-61C931D76E4C.jpeg




A5612446-2E45-4771-A9B1-1B743D52C2C8.jpeg


5B11A193-B0CE-4E6A-9E52-D5D3F15939C5.jpeg




ED4116B7-593F-4B18-A90F-FB7FD559A56B.jpeg
2B91CCB9-1771-4439-A053-72898CB2E665.jpeg
 
Last edited:
  • Like
  • Fire
Reactions: 6 users

Frangipani

Top 20
Hopefully tomorrow’s panel discussion “From Lab to Deep-Tech Scale-Up, Tech Maturation, Market Adoption, Investment” at the Paris Neuromorphic Symposium 2025 will be recorded and made available to the public, as it sounds as if this round table moderated by Sunny Bains could be quite enlightening for BRN shareholders as well. Unfortunately, no one from our company will be a speaker or panelist at this three-day conference co-organised by Spin-Ion Technologies and NYU Paris.


4BD1ABD9-FDCA-439D-BE8A-12BF41D35D5E.jpeg




75F316E8-1EC5-439E-A8BF-107159C5BA06.jpeg
 
  • Like
  • Fire
Reactions: 4 users

Frangipani

Top 20
Ericsson’s Ahsan Javed Awan continues to be enamoured with Intel, although I noticed a slight change in his slide, where “neuromorphic hardware” is no longer followed by “(Loihi 2)”. Interesting, given that “Lava is platform-agnostic, so that applications can be prototyped on conventional CPUs/GPUs and deployed to heterogeneous system architectures spanning both conventional processors as well as a range of neuromorphic chips such as Intel’s Loihi.”

(https://lava-nc.org/)
Could that signify he is also trying out other processors these days?

June 2024
View attachment 65075

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-418864

compare to July 2023
View attachment 65074

Dylan Muir from SynSense gave a remote presentation on Speck:

View attachment 65067

And I also spotted the SpiNNaker and IBM logos on the opening slide of the online presentation by Jörg Conradt (KTH Stockholm) - no surprise here.

View attachment 65098

Hopefully, researchers in Jörg Conradt’s Neuro Computing Systems lab that moved from Munich (TUM) to Stockholm (KTH), will give Akida another chance one of these days (after the not overly glorious assessment of two KTH Master students in their degree project Neuromorphic Medical Image Analysis at the Edge, which was shared here before: https://www.diva-portal.org/smash/get/diva2:1779206/FULLTEXT01.pdf), trusting the positive feedback by two more advanced researchers Jörg Conradt knows well, who have (resp soon will have) first-hand-experience with AKD1000:

When he was still at TUM, Jörg Conradt was the PhD supervisor of Cristian Axenie (now head of the SPICES lab at TH Nürnberg, whose team came runner-up in the 2023 tinyML Pedestrian Detection Hackathon utilising Akida) and co-authored a number of papers with him; now at Stockholm, he is the PhD supervisor of Jens Egholm Pedersen, one of the co-organisers of the topic area Neuromorphic systems for space applications at the upcoming Telluride Neuromorphic Workshop, which will provide participants with neuromorphic hardware, including Akida. (I’d venture a guess that the name Jens on the slide refers to him.)



Let’s savour once again the above quote by Rasmus Lundqvist, who is a Senior Researcher in Autonomous Systems at RISE (Sweden’s state-owned research institute and innovation partner), with a focus on drones and innovative aerial mobility.


“And mark my words; there is no more suitable AI tech for low-power low-latency than SNNs and neuromorphic chips to run them.”


RISE’s ongoing project Visual Inspection of airspace for air traffic and SEcuRity (a collaboration with SAAB, https://www.saab.com/) sounds like a perfect use case for Akida:

View attachment 65118
View attachment 65119





View attachment 65121

One of Ericsson’s Stockholm-based neuromorphic researchers, Ahsan Javed Awan, who describes himself on LinkedIn as “Technology Specialist - Emerging Compute Algorithms” is looking for 3 Master students to work on “Brain-Inspired Algorithms for Telecom Networks” relating to RAN (Radio Access Network) workloads:


96429C46-CE49-487C-9EBB-F768A39AB9FB.jpeg



As per my June 2024 post above, Ahsan Javed Awan had been very enamoured with Loihi over the past few years, but may possibly be open to evaluate other neuromorphic processors as well.

The job ad posted on LinkedIn doesn’t specify what specific neuromorphic hardware the algorithms would be implemented on:



Ericsson

Master Thesis: Brain-Inspired Algorithms for Telecom Networks​



Stockholm, Stockholm County, Sweden
Full-time


About the job​

Join our Team

About this opportunity:

With the rapid adoption of machine learning in telecommunication networks, the energy consumption associated with the training of cognitive algorithms and inference engines is of increased concern. Bio-inspired computing architectures such as neuromorphic systems could process cognitive tasks in an energy-efficient manner, thereby rendering the networks sustainable. A variety of tasks such as deep learning inference, dynamic programming, quadratic unconstrained binary optimization, etc. can exploit neuromorphic hardware by reformulating the problem into a brain-inspired neural network architecture. To harness the potential of neuromorphic hardware in telco networks, it is imperative to understand how brain-inspired neural networks can solve relevant computational problems in an energy-efficient manner.

This master thesis aims at developing brain-inspired neural networks (SNN, BCPNN, etc.) for certain telco use cases and demonstrating the potential energy-efficiency gains.

What you will do:


  • Understand Radio Access Network workloads that need to be energy efficient.
  • Reformulate the RAN workloads into a customized brain-inspired neural network architecture and validate the functionality using a neuromorphic simulator.
  • Devise a technique to estimate the energy-efficiency gains of the brain-inspired neural network for the RAN workloads.
  • Documentation of the solution and evaluation.
The skills you bring:

  • MSc Student in Physics/Computer Science/Mathematics/Embedded Systems or other related fields
  • Proficient in Probability Theory, Deep Neural Networks, Spiking Neural Networks.
  • Understanding of Neuromorphic Computing hardware and software stacks
  • Good programming skills are required; knowledge of C++, Python, Linux.
Application:

Your application should include: CV, Cover letter, Transcripts of studies (both B.Sc. and up-to-date M.Sc.).

Why join Ericsson?

At Ericsson, you´ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what´s possible. To build solutions never seen before to some of the world’s toughest problems. You´ll be challenged, but you won’t be alone. You´ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?

Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.
Primary country and city: Sweden (SE) || Stockholm

Req ID: 773239
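The “reformulation” the ad asks for starts from the basic spiking unit. A leaky integrate-and-fire (LIF) neuron, the building block of most SNNs, can be sketched in a few lines of plain Python; the parameters below are arbitrary illustration values, not drawn from any Ericsson, RAN or BCPNN codebase. Even this toy shows where the energy argument comes from: the neuron only “computes” (emits a spike) when its input drives the membrane potential over threshold, so sparse activity means sparse work.

```python
def lif_run(inputs, leak=0.9, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron over a list of input currents.

    Returns the time steps at which the neuron spiked.
    """
    v = 0.0          # membrane potential
    spikes = []
    for t, i_in in enumerate(inputs):
        v = leak * v + i_in      # leaky integration of the input current
        if v >= threshold:       # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

# Strong sustained input spikes regularly; weak input never crosses threshold,
# so on event-driven neuromorphic hardware it would cost almost no energy.
print(lif_run([0.6] * 10))   # → [1, 3, 5, 7, 9]
print(lif_run([0.05] * 10))  # → []
```

Reformulating a RAN workload then means encoding its inputs as spike trains and connecting layers of such units so that the spiking dynamics compute the desired function, which is precisely what the simulator-based validation step in the thesis outline would exercise.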



I’m of course aware of the December 2023 paper “Towards 6G Zero-Energy Internet of Things: Standards, Trends and Recent Results”, in which six Ericsson researchers had experimented with Akida for a ZE-IoT device… (https://d197for5662m48.cloudfront.n...rint_pdf/dfcbe2c260b5426434db681b0f637243.pdf)

…but this is a different area of research, in which Ericsson has been collaborating with Intel for years:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-446092

9048D76F-E91B-463B-BC8E-3C9D56D3B099.jpeg
 

Attachments

  • 9A68C214-F687-4240-A478-09D087E3615C.jpeg
    9A68C214-F687-4240-A478-09D087E3615C.jpeg
    423 KB · Views: 8
Last edited:
  • Like
  • Love
Reactions: 4 users