cosors
👀
Never before has my ignore list grown as quickly as it has in the weeks since May.
Hang on a sec, Felix - I haven't got the contact details yet.
Thanks for your post, Esq 111.
Morning Chippers,
USA, Pentagon just gave the ALL GO on drones.
Pentagon Just Made A Massive, Long Overdue Shift To Arm Its Troops With Thousands Of Drones https://share.google/csWNyipwLyXPcsNZt
Regards,
Esq.
Remember the last 4C: they noted we just missed out on 540K in engineering revenue. That 540K will appear on the upcoming 4C.
If we are actively engaged and working with them, how come our revenue from providing engineering support is so tiny? Is that how development works, with small tech firms charging little for their work for years and years with no promises?
This is actually insane, amazing, and ridiculous all at the same time… just look at the companies we're listed alongside! And then you look at our share price… If you were a new potential investor, you'd probably think: 'What? I'm going all in before it explodes… Wohoo, jackpot!!!' But somehow… BrainChip gets a mention here.
Neuromorphic Computing Market to Reach US$ 20.4 Billion by 2031 Fueled by AI and Edge Intelligence - Persistence Market Research
07-11-2025 09:12 AM CET | IT, New Media & Software
Press release from: Persistence Market Research
Neuromorphic Computing Market
Market Set to Expand at 20.9% CAGR Amid Rising Demand for Brain-Inspired Computing in AI and Robotics
According to the latest study by Persistence Market Research, the global neuromorphic computing market is projected to grow significantly from US$ 5.4 billion in 2024 to US$ 20.4 billion by 2031, exhibiting a robust CAGR of 20.9% over the forecast period. This surge is primarily driven by the growing demand for low-power, high-speed computing systems, especially in artificial intelligence (AI), edge computing, and advanced robotics. Neuromorphic systems mimic the architecture and operational principles of the human brain, enabling real-time data processing and cognitive functions with significantly reduced energy consumption.
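The release's headline figures are arithmetically consistent: seven years of 20.9% compound growth on US$ 5.4 billion lands at roughly US$ 20.4 billion (a bit over 3.7x). A one-line check using only the numbers stated above:

```python
# Verify the press release's figures: US$5.4B in 2024 growing at a 20.9% CAGR to 2031.
base_2024 = 5.4          # US$ billion, stated 2024 market size
cagr = 0.209             # stated compound annual growth rate
years = 2031 - 2024      # 7-year forecast horizon

projected_2031 = base_2024 * (1 + cagr) ** years   # ≈ 20.4 (US$ billion)
growth_multiple = projected_2031 / base_2024        # ≈ 3.78x, i.e. "more than 3.7x"
```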
Neuromorphic computing systems are rapidly gaining traction in industries that require ultra-efficient data processing such as defense, healthcare, automotive, and industrial automation. The market is being propelled by advances in hardware design, particularly the integration of neuromorphic chips with AI frameworks, enabling improved pattern recognition, decision-making, and adaptation in uncertain environments. The edge computing segment leads the market due to the rising need for real-time inference at the device level without relying heavily on cloud infrastructure. Geographically, North America dominates the neuromorphic computing landscape, largely due to its early adoption of AI technologies, strong presence of tech giants, and heavy R&D investments in next-generation computing platforms.
Get a Sample PDF Brochure of the Report (Use Corporate Email ID for a Quick Response): https://www.persistencemarketresearch.com/samples/34726
Key Market Insights
➤ The neuromorphic computing market is set to grow more than 3.7x by 2031, driven by AI and IoT integration.
➤ Edge computing is the dominant segment due to its synergy with neuromorphic hardware for real-time decision-making.
➤ North America leads globally, thanks to extensive research initiatives and a strong base of semiconductor innovation.
➤ Increasing deployment of neuromorphic processors in autonomous vehicles and robotic systems boosts market expansion.
➤ Startups and tech giants alike are investing in neuromorphic chip development for next-gen intelligent applications.
What is the future of neuromorphic computing in AI applications?
Neuromorphic computing is expected to play a pivotal role in advancing AI applications by offering real-time learning, adaptability, and energy efficiency, which traditional von Neumann architectures cannot match. With AI models becoming more complex, neuromorphic systems provide faster processing at lower power, especially suitable for applications in autonomous driving, smart surveillance, robotics, and edge analytics. Their brain-inspired structure enables efficient handling of unstructured data, noise tolerance, and dynamic decision-making: traits essential for developing human-like machine intelligence. As demand for smarter, faster, and greener AI grows, neuromorphic computing will become central to the next wave of AI innovation.
Market Dynamics
Drivers: The surge in AI-driven applications, the growing use of smart sensors and devices, and the push toward efficient edge computing are key drivers for the neuromorphic computing market. These systems offer unparalleled processing efficiency and are ideal for real-time applications like smart surveillance, industrial robotics, and autonomous vehicles, where latency and power efficiency are critical.
Market Restraining Factor: Despite the promise, the market faces significant challenges in terms of scalability, software compatibility, and lack of standardized programming frameworks. Neuromorphic systems require a different approach than traditional computing models, which can slow down adoption among enterprises not equipped to handle such architectural shifts.
Key Market Opportunity: The most promising opportunity lies in the integration of neuromorphic hardware in edge AI systems, particularly in sectors such as defense, automotive, and healthcare. As edge devices become more autonomous, neuromorphic processors can offer localized intelligence with minimal power draw, enabling new applications in wearable tech, drones, and real-time diagnostics.
Market Segmentation
The neuromorphic computing market is segmented by component, application, end-use industry, and deployment. By component, the market is categorized into hardware (neuromorphic chips and sensors) and software (learning algorithms and frameworks). Hardware holds the dominant share due to ongoing innovations in neuromorphic chips and their adoption in edge and AI devices. Neuromorphic sensors, capable of capturing data in a brain-like manner, are also seeing rapid uptake in autonomous navigation and surveillance applications.
Based on application, the market includes image recognition, signal processing, data mining, object detection, and others. Among these, image and signal recognition applications lead, particularly in automotive and healthcare sectors where real-time interpretation is crucial. In terms of end-use, automotive, aerospace & defense, healthcare, consumer electronics, and industrial automation are key sectors. Automotive, particularly in self-driving and driver-assistance systems, is leading due to the need for rapid, low-latency decision-making. Software advancements that enable real-time learning and self-optimization are also fostering deeper neuromorphic adoption across new AI use cases.
Regional Insights
North America remains the most dominant region in the neuromorphic computing market, supported by the presence of major semiconductor companies, high research investments, and rapid deployment of AI applications. The U.S. is leading the region with substantial funding for defense and AI-focused neuromorphic innovations, including partnerships between universities, startups, and government bodies.
Europe is showing strong momentum, particularly in automotive and robotics applications, with countries like Germany and the U.K. leading in research and development. Asia Pacific is an emerging hotspot, driven by rising tech adoption in China, Japan, and South Korea. These countries are heavily investing in AI, IoT, and next-gen chip manufacturing. Additionally, increased demand for smart electronics and autonomous technologies in the region presents significant future growth potential.
Competitive Landscape
The competitive landscape in the neuromorphic computing market is characterized by strategic collaborations, patent-driven innovation, and a race to commercialize neuromorphic hardware. Companies are also focusing on hybrid architectures that combine traditional and neuromorphic elements to ease the transition and expand use cases.
Company Insights
✦ Intel Corporation
✦ IBM Corporation
✦ BrainChip Holdings Ltd.
✦ Qualcomm Inc.
✦ Hewlett Packard Enterprise Development LP
✦ Samsung Electronics Co., Ltd.
✦ HRL Laboratories, LLC
✦ Nepes Corporation
✦ General Vision Inc.
✦ SynSense AG
✦ Applied Brain Research, Inc.
For Customized Insights on Segments, Regions, or Competitors, Request Personalized Purchase Options @ https://www.persistencemarketresearch.com/request-customization/34726
Key Industry Developments
Recent years have witnessed several key developments shaping the neuromorphic computing market. Intel's Loihi neuromorphic chip has been central to various research collaborations, showing promise in edge AI and robotics. IBM, through its TrueNorth chip project, continues to pioneer neural-inspired computing, aiming to scale commercial applications. Meanwhile, BrainChip's Akida chip is gaining traction among smart home and security device manufacturers for its energy-efficient inference capabilities.
The market is also experiencing growing academic-industry collaborations. Universities and research labs are partnering with tech companies to develop programmable neuromorphic platforms, making it easier for developers to create applications that utilize spiking neural networks (SNNs). These developments are gradually making neuromorphic computing accessible to a broader tech community.
Innovation and Future Trends
Innovation in neuromorphic computing is centered on spiking neural networks (SNNs), 3D chip stacking, and on-device learning. Researchers are exploring how to develop chips that can learn from their environment in real time, much like biological brains. Unlike traditional AI models that require vast data centers, these innovations focus on real-world interaction and decision-making at ultra-low power, making neuromorphic devices ideal for mobile and wearable technologies.
Future trends indicate a shift toward neuromorphic-as-a-service platforms, where enterprises can access neuromorphic capabilities without building custom hardware. Furthermore, as AI grows increasingly decentralized, neuromorphic chips will be pivotal in enabling autonomous decision-making at the edge, reducing dependence on the cloud. Industries such as smart cities, healthcare, defense, and aerospace are expected to be early beneficiaries of these trends, paving the way for more human-like cognition in machines.
Hi @JB49,
Has this patent from Valeo been shared before?
Hi @JB49,
Here's the link (#89,847) to Diogenese's last post about this patent.
Valeo have publicly stated that SCALA 3 is capable of 3D object detection, prediction, and fusion of Lidar, radar, camera, and ultrasonic data. This kind of multi-modal sensor fusion would definitely benefit from state-space models like TENNs that can model long-range temporal dependencies efficiently.
Diogenese has also previously said that "SCALA 3 is just the lidar electro-optical transceiver. They use software to process the signals. TENNS could be in the software."
Yes. Unfortunately, the SDV (software defined vehicle) is the Pretender for the moment while we wait in the interregnum for the coronation of NN SoC. Although Akida has pulled the sword from the stone, they're still waiting for the release of the video.
Thanks for the reminder!!! I totally forgot about those, feels much better!!!
Remember the last 4C: they noted we just missed out on 540K on engineering revenue. That 540K will appear on the upcoming 4C.
What I'm really watching for in the upcoming 4C is whether that $540K is accompanied by additional engineering revenue, ideally another $500K+. That would be a good sign things are ramping up.
Maybe one step closer though.
Just up on GitHub.
Suggest readers absorb the whole post to understand the intent of this repository.
Especially terms such as federated learning, scalable, V2X, MQTT, prototype level and distributed.
From what I can find, if correct, the author is as below. That doesn't necessarily mean VW is involved, but I suspect they would be aware of this work in some division or department.
Fernando Sevilla Martínez
GitHub - SevillaFe/SNN_Akida_RPI5: Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware
This work presents a practical and energy-aware framework for deploying Spiking Neural Networks on low-cost hardware for edge computing. We detail a reproducible pipeline that integrates neuromorphic processing with secure remote access and distributed intelligence. Using Raspberry Pi and the BrainChip Akida PCIe accelerator, we demonstrate a lightweight deployment process including model training, quantization, and conversion. Our experiments validate the eco-efficiency and networking potential of neuromorphic AI systems, providing key insights for sustainable distributed intelligence. This letter offers a blueprint for scalable and secure neuromorphic deployments across edge networks.
1. Hardware and Software Setup
The proposed deployment platform integrates two key hardware components: the RPI5 and the Akida board. Together, they enable a power-efficient, cost-effective neuromorphic system suitable for real-world edge AI applications.
2. Enabling Secure Remote Access and Distributed Neuromorphic Edge Networks
The deployment of low-power neuromorphic hardware in networked environments requires reliable, secure, and lightweight communication frameworks. Our system enables full remote operability of the RPI5 and Akida board via SSH, complemented by protocol layers (Message Queuing Telemetry Transport (MQTT), WebSockets, Vehicle-to-Everything (V2X)) that support real-time, event-driven intelligence across edge networks.
3. Training and Running Spiking Neural Networks
The training pipeline begins with building an ANN using TensorFlow 2.x, which will later be mapped to a spike-compatible format for neuromorphic inference. Because the Akida board runs models using low-bitwidth integer arithmetic (4–8 bits), it is critical to align the training phase with these constraints to avoid significant post-training performance degradation.
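The actual quantization and conversion are done with BrainChip's tooling (the repo mentions model training, quantization, and conversion); the toy sketch below only illustrates why training must anticipate low-bitwidth weights: a 4-bit grid leaves visibly larger round-off error than an 8-bit one.

```python
# Minimal illustration (NOT the Akida SDK): symmetric uniform quantization
# of weights to a low bitwidth, mimicking the 4-8 bit integer constraint.
def quantize(values, bits=4):
    """Map floats onto a signed integer grid, then back to floats."""
    levels = 2 ** (bits - 1) - 1              # e.g. 7 levels for 4-bit signed
    scale = max(abs(v) for v in values) / levels
    ints = [round(v / scale) for v in values]  # nearest grid point
    return [i * scale for i in ints]           # dequantized values

weights = [0.82, -0.31, 0.05, -0.77]
q4 = quantize(weights, bits=4)
q8 = quantize(weights, bits=8)

# The coarser grid rounds more aggressively:
err4 = max(abs(a - b) for a, b in zip(weights, q4))
err8 = max(abs(a - b) for a, b in zip(weights, q8))
```

The gap between `err4` and `err8` is the degradation that quantization-aware training is meant to absorb before deployment.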
4. Use case validation: Networked neuromorphic AI for distributed intelligence
4.1 Use Case: If multiple RPI5 nodes or remote clients need to receive the classification results in real-time, MQTT can be used to broadcast inference outputs
MQTT-Based Akida Inference Broadcasting
This project demonstrates how to perform real-time classification broadcasting using BrainChip Akida on Raspberry Pi 5 with MQTT.
Project Structure
mqtt-akida-inference/
├── config/ # MQTT broker and topic configuration
├── scripts/ # MQTT publisher/subscriber scripts
├── sample_data/ # Sample input data for inference
└── requirements.txt # Required Python packages
Usage
- Install Mosquitto on RPI5
sudo apt update
sudo apt install mosquitto mosquitto-clients -y
sudo systemctl enable mosquitto
sudo systemctl start mosquitto
- Run Publisher (on RPI5)
python3 scripts/mqtt_publisher.py
- Run Subscriber (on remote device)
python3 scripts/mqtt_subscriber.py
- Optional: Monitor from CLI
mosquitto_sub -h <BROKER_IP> -t "akida/inference" -v
Akida Compatibility
outputs = model_akida.predict(sample_image)
Real-Time Edge AI
This use case supports event-based edge AI and real-time feedback in smart environments, such as surveillance, mobility, and robotics.
Configurations
Set your broker IP and topic in config/config.py
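As a sketch of what the publisher might broadcast, a JSON message per inference result keeps subscribers language-agnostic. The payload schema and field names below are assumptions, not the repo's actual format (which lives in scripts/mqtt_publisher.py); the MQTT transport itself would be handled by a client library such as paho-mqtt.

```python
import json
import time

TOPIC = "akida/inference"  # topic name taken from the CLI example above

def make_payload(label, confidence, node_id="rpi5-01"):
    """Serialize one inference result for broadcast on the MQTT topic.
    Field names are illustrative; adapt to the repo's real schema."""
    return json.dumps({
        "node": node_id,                    # which RPI5-Akida node reported
        "label": label,                     # predicted class
        "confidence": round(confidence, 3), # trimmed for compact messages
        "ts": time.time(),                  # publish timestamp
    })

msg = make_payload("pedestrian", 0.9714)
decoded = json.loads(msg)   # what a subscriber would reconstruct
```

With paho-mqtt installed, `client.publish(TOPIC, msg)` would send it; any subscriber on `akida/inference` recovers the dict with `json.loads`.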
4.2 Use Case: If the Akida accelerator is deployed in an autonomous driving system, V2X communication allows other vehicles or infrastructure to receive AI alerts based on neuromorphic-based vision
This use case simulates a lightweight V2X (Vehicle-to-Everything) communication system using Python. It demonstrates how neuromorphic AI event results, such as pedestrian detection, can be broadcast over a network and received by nearby infrastructure or vehicles.
Folder Structure
V2X/
├── config.py # V2X settings
├── v2x_transmitter.py # Simulated Akida alert broadcaster
├── v2x_receiver.py # Listens for incoming V2X alerts
└── README.md
Use Case
If the Akida accelerator is deployed in an autonomous driving system, this setup allows:
- Broadcasting high-confidence AI alerts (e.g., "pedestrian detected")
- Receiving alerts on nearby systems for real-time awareness
Usage
1. Start the V2X Receiver (on vehicle or infrastructure node)
python3 receiver/v2x_receiver.py
2. Run the Alert Transmitter (on an RPI5 + Akida node)
python3 transmitter/v2x_transmitter.py
Notes
- Ensure that devices are on the same LAN or wireless network
- UDP broadcast mode is used for simplicity
- This is a prototype for real-time event-based message sharing between intelligent nodes
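The notes above describe the transport as UDP broadcast; a minimal sketch of that alert path follows. The port, function names, and message fields are assumptions for illustration, the repo's real logic is in v2x_transmitter.py and v2x_receiver.py.

```python
import json
import socket

PORT = 5005  # hypothetical port; the real value would sit in config.py

def send_alert(alert, addr="127.0.0.1"):
    """Serialize an alert dict and fire it over UDP; returns the wire bytes.
    A real node would pass the LAN broadcast address (e.g. 192.168.1.255)."""
    data = json.dumps(alert).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(data, (addr, PORT))
    return data

def parse_alert(data):
    """Decode a received UDP datagram back into an alert dict."""
    return json.loads(data.decode())

# Loopback demo of a high-confidence neuromorphic vision event:
wire = send_alert({"event": "pedestrian_detected", "confidence": 0.97})
```

Fire-and-forget UDP matches the prototype framing: no handshake, so a missed datagram is simply a missed alert.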
4.3 Use Case: If multiple RPI5-Akida nodes are deployed for federated learning, updates to neuromorphic models must be synchronized between devices
Federated Learning Setup with Akida on Raspberry Pi 5
This repository demonstrates a lightweight Federated Learning (FL) setup using neuromorphic AI models deployed on BrainChip Akida PCIe accelerators paired with Raspberry Pi 5 devices. It provides scripts for a centralized Flask server to receive model weight updates and a client script to upload Akida model weights via HTTP.
Overview
Neuromorphic models trained on individual RPI5-Akida nodes can contribute updates to a shared model hosted on a central server. This setup simulates a federated learning architecture for edge AI applications that require privacy, low latency, and energy efficiency.
Repository Structure
federated_learning/
├── federated_learning_server.py # Flask server to receive model weights
├── federated_learning_client.py # Client script to upload Akida model weights
├── model_utils.py # (Optional) Placeholder for weight handling utilities
├── model_training.py # (Optional) Placeholder for training-related code
└── README.md
Requirements
- Python 3.7+
- Flask
- NumPy
- Requests
- Akida Python SDK (required on client device)
Install the dependencies using:
pip install flask numpy requests
Getting Started
1. Launch the Federated Learning Server
On a device intended to act as the central server:
python3 federated_learning_server.py
The server will listen for HTTP POST requests on port 5000 and respond to updates sent to the /upload endpoint.
2. Configure and Run the Client
On each RPI5-Akida node:
- Ensure the Akida model has been trained.
- Replace the SERVER_IP variable inside federated_learning_client.py with the IP address of the server.
- Run the script:
python3 federated_learning_client.py
This will extract the weights from the Akida model and transmit them to the server in JSON format.
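The excerpt shows the upload plumbing but not how the server merges the incoming updates; a common choice for this kind of setup is federated averaging (FedAvg). A minimal pure-Python sketch, where the function name and flat-list weight format are assumptions rather than the repo's actual code:

```python
# FedAvg sketch: hypothetical aggregation step for the Flask server.
# Assumes each client uploads its weights as a flat JSON list of floats.
def fedavg(client_weights):
    """Element-wise mean of equally weighted client updates."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Two nodes report slightly different 4-weight updates:
node_a = [0.10, 0.20, 0.30, 0.40]
node_b = [0.30, 0.00, 0.50, 0.20]
merged = fedavg([node_a, node_b])   # ≈ [0.2, 0.1, 0.4, 0.3]
```

Equal weighting assumes each node trained on similar amounts of data; weighting by local sample count is the usual refinement.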
Example Response
After a successful POST:
Model weights uploaded successfully.
If an error occurs (e.g., connection refused or malformed weights), you will see an appropriate status message.
Security Considerations
This is a prototype-level setup for research. For real-world deployment:
- Use HTTPS instead of HTTP.
- Authenticate clients using tokens or API keys.
- Validate the format and shape of model weights before acceptance.
Acknowledgements
This implementation is part of a broader effort to demonstrate low-cost, energy-efficient neuromorphic AI for distributed and networked edge environments, particularly leveraging the BrainChip Akida PCIe board and Raspberry Pi 5 hardware.
No.
Anyone seen Tony's new post on LinkedIn?