BRN Discussion Ongoing

Diogenese

Top 20
Hi @JB49,

Here's the link to Diogenese's last post (#89,847) about this patent.

Valeo have publicly stated that SCALA 3 is capable of 3D object detection, prediction, and fusion of Lidar, radar, camera, and ultrasonic data. This kind of multi-modal sensor fusion would definitely benefit from state-space models like TENNs that can model long-range temporal dependencies efficiently.

Diogenese has also previously said that "SCALA 3 is just the lidar electro-optical transceiver. They use software to process the signals. TENNS could be in the software."



Yes. Unfortunately, the SDV (software defined vehicle) is the Pretender for the moment while we wait in the interregnum for the coronation of NN SoC. Although Akida has pulled the sword from the stone, they're still waiting for the release of the video.
 

Maybe one step closer though.

Just up on GitHub.

Suggest readers absorb the whole post to understand the intent of this repository.

Especially terms such as federated learning, scalable, V2X, MQTT, prototype level and distributed.

From what I can find, if correct, the author is as below. That doesn't necessarily mean VW is involved, but I suspect they'd be aware of this work in some division or department.



Fernando Sevilla Martínez

SevillaFe/SNN_Akida_RPI5 (Public)

SNN_Akida_RPI5

Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware

This work presents a practical and energy-aware framework for deploying Spiking Neural Networks on low-cost hardware for edge computing. We detail a reproducible pipeline that integrates neuromorphic processing with secure remote access and distributed intelligence. Using Raspberry Pi and the BrainChip Akida PCIe accelerator, we demonstrate a lightweight deployment process including model training, quantization, and conversion. Our experiments validate the eco-efficiency and networking potential of neuromorphic AI systems, providing key insights for sustainable distributed intelligence. This letter offers a blueprint for scalable and secure neuromorphic deployments across edge networks.

1. Hardware and Software Setup​

The proposed deployment platform integrates two key hardware components: the RPI5 and the Akida board. Together, they enable a power-efficient, cost-effective neuromorphic system suitable for real-world edge AI applications.

2. Enabling Secure Remote Access and Distributed Neuromorphic Edge Networks​

The deployment of low-power neuromorphic hardware in networked environments requires reliable, secure, and lightweight communication frameworks. Our system enables full remote operability of the RPI5 and Akida board via SSH, complemented by protocol layers (Message Queuing Telemetry Transport (MQTT), WebSockets, Vehicle-to-Everything (V2X)) that support real-time, event-driven intelligence across edge networks.

3. Training and Running Spiking Neural Networks​

The training pipeline begins with building an ANN using TensorFlow 2.x, which is later mapped to a spike-compatible format for neuromorphic inference. Because the Akida board runs models using low-bitwidth integer arithmetic (4–8 bits), it is critical to align the training phase with these constraints to avoid significant post-training performance degradation.
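As a rough illustration of why training must anticipate the 4–8 bit constraint, here is a NumPy sketch of uniform symmetric weight quantization. This is not the Akida/MetaTF toolchain, just the arithmetic it implies: the round-trip error at 4 bits is visibly larger than at 8 bits, which is why quantization-aware training matters.

```python
import numpy as np

def quantize_weights(w, bits=4):
    """Uniform symmetric quantization to signed integers of `bits` width."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 7 for 4-bit signed
    scale = float(np.max(np.abs(w))) / qmax     # one scale per tensor
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map integer codes back to float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=(8, 8)).astype(np.float32)

q4, s4 = quantize_weights(w, bits=4)
q8, s8 = quantize_weights(w, bits=8)

# Mean absolute round-trip error grows as bitwidth shrinks.
err4 = float(np.mean(np.abs(w - dequantize(q4, s4))))
err8 = float(np.mean(np.abs(w - dequantize(q8, s8))))
```

A model trained without regard for this error can lose accuracy sharply when converted; aligning training with the target bitwidth keeps the deployed network close to the float baseline.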

4. Use case validation: Networked neuromorphic AI for distributed intelligence​

4.1 Use Case: If multiple RPI5 nodes or remote clients need to receive the classification results in real-time, MQTT can be used to broadcast inference outputs​

MQTT-Based Akida Inference Broadcasting​

This project demonstrates how to perform real-time classification broadcasting using BrainChip Akida on Raspberry Pi 5 with MQTT.

Project Structure​

mqtt-akida-inference/
├── config/ # MQTT broker and topic configuration
├── scripts/ # MQTT publisher/subscriber scripts
├── sample_data/ # Sample input data for inference
├── requirements.txt # Required Python packages


Usage​

  1. Install Mosquitto on RPI5
sudo apt update
sudo apt install mosquitto mosquitto-clients -y
sudo systemctl enable mosquitto
sudo systemctl start mosquitto

  2. Run Publisher (on RPI5)
python3 scripts/mqtt_publisher.py

  3. Run Subscriber (on remote device)
python3 scripts/mqtt_subscriber.py

  4. Optional: Monitor from CLI
mosquitto_sub -h <BROKER_IP> -t "akida/inference" -v

Akida Compatibility

outputs = model_akida.predict(sample_image)


Real-Time Edge AI

This use case supports event-based edge AI and real-time feedback in smart environments, such as surveillance, mobility, and robotics.

Configurations

Set your broker IP and topic in config/config.py
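A minimal publisher sketch for this use case, assuming the paho-mqtt package and an illustrative broker IP; the JSON field names and node ID are my own, not fixed by the repo's scripts:

```python
import json
import time

def make_alert(label, confidence, node_id="rpi5-akida-01"):
    """Build the JSON message published on the 'akida/inference' topic.
    Field names are illustrative, not fixed by the repo."""
    return json.dumps({
        "node": node_id,
        "label": label,
        "confidence": round(float(confidence), 3),
        "ts": time.time(),
    })

payload = make_alert("pedestrian", 0.94)

# Publishing requires the paho-mqtt package (pip install paho-mqtt);
# guarded so the sketch still runs where no broker is reachable.
try:
    import paho.mqtt.client as mqtt
    client = mqtt.Client()
    client.connect("192.168.1.10", 1883)   # broker IP from config/config.py
    client.publish("akida/inference", payload)
    client.disconnect()
except Exception:
    pass  # no package/broker in this environment
```

Any subscriber on the LAN (including the `mosquitto_sub` CLI shown above) would then receive one JSON message per inference.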

4.2 Use Case: If the Akida accelerator is deployed in an autonomous driving system, V2X communication allows other vehicles or infrastructure to receive AI alerts based on neuromorphic-based vision​

This use case simulates a lightweight V2X (Vehicle-to-Everything) communication system in Python. It demonstrates how neuromorphic AI event results, such as pedestrian detection, can be broadcast over a network and received by nearby infrastructure or vehicles.

Folder Structure​

V2X/
├── config.py # V2X settings
├── v2x_transmitter.py # Simulated Akida alert broadcaster
├── v2x_receiver.py # Listens for incoming V2X alerts
└── README.md


Use Case​

If the Akida accelerator is deployed in an autonomous driving system, this setup allows:

  • Broadcasting high-confidence AI alerts (e.g., "pedestrian detected")
  • Receiving alerts on nearby systems for real-time awareness

Usage​

1. Start the V2X Receiver (on vehicle or infrastructure node)​

python3 receiver/v2x_receiver.py

2. Run the Alert Transmitter (on an RPI5 + Akida node)​

python3 transmitter/v2x_transmitter.py

Notes​

  • Ensure that devices are on the same LAN or wireless network
  • UDP broadcast mode is used for simplicity
  • This is a prototype for real-time event-based message sharing between intelligent nodes
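The transmitter/receiver pair described above can be sketched with stdlib sockets. The port number and message fields here are illustrative (the repo's config.py defines the real settings), and this demo loops back over 127.0.0.1 rather than a real LAN broadcast address:

```python
import json
import socket

V2X_PORT = 37020  # illustrative; the repo's config.py defines the real port

def send_alert(event, confidence, addr="127.0.0.1"):
    """Broadcast a neuromorphic-vision alert as one UDP datagram.
    A real deployment would target the LAN broadcast address
    (e.g. 192.168.1.255) instead of loopback."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    msg = json.dumps({"event": event, "confidence": confidence}).encode()
    sock.sendto(msg, (addr, V2X_PORT))
    sock.close()

# Minimal receiver loop, as run on the vehicle/infrastructure node.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("", V2X_PORT))
recv.settimeout(2.0)

send_alert("pedestrian detected", 0.97)   # loopback demo
data, sender = recv.recvfrom(1024)
alert = json.loads(data.decode())
recv.close()
```

UDP broadcast keeps the prototype connectionless and low-latency, which matches the fire-and-forget nature of V2X safety alerts; reliability and authentication would be layered on top in a production stack.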

4.3 Use Case: If multiple RPI5-Akida nodes are deployed for federated learning, updates to neuromorphic models must be synchronized between devices​

Federated Learning Setup with Akida on Raspberry Pi 5​

This repository demonstrates a lightweight Federated Learning (FL) setup using neuromorphic AI models deployed on BrainChip Akida PCIe accelerators paired with Raspberry Pi 5 devices. It provides scripts for a centralized Flask server to receive model weight updates and a client script to upload Akida model weights via HTTP.

Overview​

Neuromorphic models trained on individual RPI5-Akida nodes can contribute updates to a shared model hosted on a central server. This setup simulates a federated learning architecture for edge AI applications that require privacy, low latency, and energy efficiency.

Repository Structure​

federated_learning/
├── federated_learning_server.py # Flask server to receive model weights
├── federated_learning_client.py # Client script to upload Akida model weights
├── model_utils.py # (Optional) Placeholder for weight handling utilities
├── model_training.py # (Optional) Placeholder for training-related code
└── README.md


Requirements​

  • Python 3.7+
  • Flask
  • NumPy
  • Requests
  • Akida Python SDK (required on client device)
Install the dependencies using:

pip install flask numpy requests

Getting Started​

1. Launch the Federated Learning Server​

On a device intended to act as the central server:

python3 federated_learning_server.py

The server will listen for HTTP POST requests on port 5000 and respond to updates sent to the /upload endpoint.

2. Configure and Run the Client​

On each RPI5-Akida node:

  • Ensure the Akida model has been trained.
  • Replace the SERVER_IP variable inside federated_learning_client.py with the IP address of the server.
  • Run the script:
python3 federated_learning_client.py

This will extract the weights from the Akida model and transmit them to the server in JSON format.
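The round trip can be sketched with the standard library alone. The handler below is a stand-in for the repo's Flask server (same /upload endpoint and success message, no Flask dependency), and the weights dictionary is a placeholder for what the Akida SDK would extract on a real node:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Stdlib stand-in for the repo's Flask /upload endpoint.
class UploadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        self.server.received = json.loads(body)   # stash update for inspection
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Model weights uploaded successfully.")

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), UploadHandler)  # port 0: pick a free port
threading.Thread(target=server.handle_request, daemon=True).start()

# Client side: on a real node these weights come from the trained Akida model;
# a nested list stands in here.
weights = {"layer_0": [[0.1, -0.2], [0.3, 0.05]]}
req = Request(
    f"http://127.0.0.1:{server.server_port}/upload",
    data=json.dumps(weights).encode(),
    headers={"Content-Type": "application/json"},
)
reply = urlopen(req).read().decode()
```

JSON over HTTP keeps the client trivially portable across RPI5 nodes; the server aggregates whatever updates arrive at /upload.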

Example Response​

After a successful POST:

Model weights uploaded successfully.


If an error occurs (e.g., connection refused or malformed weights), you will see an appropriate status message.

Security Considerations​

This is a prototype-level setup for research. For real-world deployment:

  • Use HTTPS instead of HTTP.
  • Authenticate clients using tokens or API keys.
  • Validate the format and shape of model weights before acceptance.
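The third point can be sketched as a simple server-side guard before aggregation; the expected layer names and shapes here are illustrative, standing in for whatever the shared model defines:

```python
import numpy as np

# Illustrative: in practice derived from the shared model's architecture.
EXPECTED_SHAPES = {"layer_0": (2, 2)}

def validate_update(payload):
    """Reject malformed or mis-shaped weight updates before aggregation."""
    for name, shape in EXPECTED_SHAPES.items():
        if name not in payload:
            return False
        w = np.asarray(payload[name], dtype=np.float32)
        if w.shape != shape or not np.all(np.isfinite(w)):
            return False
    return True

ok = validate_update({"layer_0": [[0.1, -0.2], [0.3, 0.05]]})   # well-formed
bad = validate_update({"layer_0": [[0.1, -0.2]]})               # wrong shape
```

Checking shapes and finiteness up front prevents a single faulty or malicious client from corrupting the shared model.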

Acknowledgements​

This implementation is part of a broader effort to demonstrate low-cost, energy-efficient neuromorphic AI for distributed and networked edge environments, particularly leveraging the BrainChip Akida PCIe board and Raspberry Pi 5 hardware.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
Great find, as per usual @Fullmoonfever!

I just discovered this research paper titled "Spiking neural networks for autonomous driving" which Fernando Sevilla Martinez (Data Science Specialist, Volkswagen) co-authored. The paper was published in December 2024.







Fernando Sevilla Martínez's Github activity, which FMF just uncovered, demonstrates Akida-powered neuromorphic processing for V2X and federated learning prototypes.

CARIAD, Volkswagen's software company, has been working on developing and implementing V2X.




Cardpro

Regular
Remember, in the last 4C they noted we had just missed out on $540K of engineering revenue. That $540K will appear in the upcoming 4C.

What I'm really watching for in the upcoming 4C is whether that $540K is accompanied by additional engineering revenue, ideally another $500K+. That would be a good sign things are ramping up.
thanks for the reminder!!! I totally forgot about those, feels much better!!!
 