BRN Discussion Ongoing

7für7

Top 20

It recognises what kind of figures you build and makes noises… it even recognises if you're about to crash your Lego car and makes car-crash noises… with AI…

Yes, but some companies have already announced their new tech… even Lego announced a new smart Lego product… so…
 
I said a long time ago that Benz would never put tech from an absolute newcomer into production-ready vehicles… it's too risky, and it's not good for the image… implementing products from NVIDIA sounds more glamorous… I don't expect much
Just on vehicles, ADAS, AV etc.

I know Infosys has been mentioned here previously but this is a blog from about Oct 25 explaining a bit about where neuromorphic will most likely initially fit.

I agree that brand names will work with brand names; however, there is the acknowledgement that neuromorphic will most likely slot into the background as a complementary tech.

And yes, we are also identified in the blog amongst the other key players.




Brains on Wheels – The Role of Neuromorphic Computing in Next-Gen Mobility

Executive Summary​

  1. AV compute today is power-hungry and latency-limited. Current stacks rely heavily on GPUs drawing 200–300 W per vehicle, with perception-to-decision latencies often in the tens of milliseconds.
  2. Neuromorphic computing offers a complementary approach. Event-driven spiking neural networks (SNNs) process only changes in data, delivering sub-watt operation and microsecond-to-millisecond-level responsiveness.
  3. 2025 has been a breakthrough year. Intel scaled neuromorphic systems past 1 billion neurons, Innatera launched the first mass-market neuromorphic MCU, and BrainChip, SynSense, and others delivered commercial edge-ready platforms.
  4. Market adoption is shifting from research to pilots. Edge and sensor-level deployments are already feasible; safety-critical roles in AV decision-making will require 3–7 years for certification.
  5. Fleet-level intelligence is the next frontier. Neuromorphic modules enable local WLAN hazard sharing, real-time route adaptation, and collective efficiency gains — giving early adopters measurable advantages.

Introduction​

Autonomous vehicles (AVs) are often described as “data centers on wheels.” They need to see the road, understand what is happening, and make decisions in real time. To do this, they process huge amounts of data from multiple sensors – cameras, LiDAR, radar, and ultrasonics. Current systems rely heavily on GPUs and CPUs, which are powerful but also consume large amounts of energy and generate significant heat. This limits efficiency and creates delays when responding to sudden changes on the road.
Neuromorphic computing, inspired by how the human brain works, is now being explored as a complementary approach. Instead of processing every single piece of data, neuromorphic chips are event-driven – they focus only on changes, like motion or new objects entering the scene. This makes them faster and far more energy efficient. For fleet operators, the attraction is lower running costs and longer battery range. For engineers, the appeal lies in faster responses and the ability to run always-on monitoring at incredibly low power.

The Current AV Compute Stack​

To understand the role neuromorphic computing could play, it helps to first look at the existing setup inside an autonomous vehicle.
1. Data volumes: A single Level 4 vehicle generates around 4 terabytes of sensor data per day. Cameras typically range from 8–16 units per vehicle, while a single LiDAR sensor can generate 1–5 million points per second.
2. Processing hardware:
  • GPUs and high-performance SoCs like NVIDIA’s DRIVE Orin provide up to 254 TOPS (trillion operations per second) to handle perception and decision-making.
  • CPUs manage control logic and orchestration.
  • ASICs and FPGAs are sometimes added for sensor-specific acceleration.
3. Power draw: A full AV compute stack typically consumes 200–300 watts of power per vehicle. This is a significant fraction of an EV’s battery, directly reducing driving range.
4. Latency: Even advanced GPU pipelines introduce delays. Processing perception data and turning it into an action (braking, steering) usually takes tens of milliseconds. That might not sound long, but at 100 km/h, a vehicle covers almost 3 meters in 100 ms – enough to affect safety.
While this architecture has enabled pilot deployments, it is now approaching limits in energy efficiency, thermal management, and reaction time.
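The latency figure above is easy to sanity-check with simple arithmetic. The sketch below (illustrative only) converts a compute latency into metres of road covered, reproducing the ~3 m figure quoted for 100 km/h:

```python
def distance_during_latency(speed_kmh: float, latency_ms: float) -> float:
    """Distance (in metres) a vehicle covers during a given compute latency."""
    speed_ms = speed_kmh * 1000 / 3600  # convert km/h to m/s
    return speed_ms * latency_ms / 1000  # metres travelled during the delay

# At 100 km/h, a 100 ms perception-to-action delay costs ~2.8 m of road
print(round(distance_during_latency(100, 100), 2))  # 2.78
```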

Neuromorphic Computing: The Concept​

Neuromorphic computing takes its inspiration from biology, specifically the human brain. Instead of handling data in large frames or datasets, it processes information as a series of spikes or events. If nothing changes in the input, the system stays quiet and consumes almost no energy.
The core building blocks are spiking neural networks (SNNs). These networks mimic how biological neurons fire only when stimulated. For autonomous driving, this means:
  1. Event-driven processing: Rather than analyzing every pixel of every frame, neuromorphic processors focus only on motion or new objects appearing.
  2. Low power operation: Because they ignore static or repetitive data, neuromorphic systems can achieve 10–30 times better energy efficiency for certain workloads than conventional chips.
  3. Fast responsiveness: Paired with event-based cameras, neuromorphic chips can react to sudden changes in microseconds to milliseconds, compared with tens of milliseconds for GPU-based systems.
  4. Edge feasibility: Many neuromorphic devices work at sub-watt or even microwatt levels, allowing them to sit directly in sensors, staying “always on” without draining the battery.
In practical terms, neuromorphic processors do not aim to replace GPUs entirely. Instead, they serve as lightweight, ultra-responsive companions, ideal for tasks such as continuous hazard detection, filtering out irrelevant data, and sharing compact alerts across a fleet.
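The event-driven behaviour described above can be illustrated with a toy leaky integrate-and-fire neuron, the basic unit of an SNN. This is a deliberately minimal sketch, not any vendor's implementation: the neuron accumulates input, leaks charge between steps, and fires only when a threshold is crossed, so static input produces no activity and costs almost nothing.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron: integrates weighted input,
    leaks between time steps, and emits a spike (1) only when the
    membrane potential crosses the threshold; quiet input stays quiet."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v * leak + x          # leak, then integrate this event
        if v >= threshold:
            spikes.append(1)      # fire
            v = 0.0               # reset membrane potential
        else:
            spikes.append(0)
    return spikes

# Sparse input: the neuron fires only when changes accumulate
print(lif_neuron([0.0, 0.6, 0.6, 0.0, 0.0, 1.2]))  # [0, 0, 1, 0, 0, 1]
```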

Advantages of Neuromorphic Computing for AVs​

Neuromorphic computing brings several practical advantages that directly address the weaknesses of today’s GPU-heavy AV stacks.
1. Energy Efficiency
Traditional AV compute systems draw 200–300 watts continuously, reducing EV driving range. Neuromorphic processors, by contrast, operate on event-driven workloads and often consume sub-watt or even microwatt levels. For example, Innatera’s Pulsar MCU, launched in 2025, demonstrated radar presence detection at ~600 µW and audio classification at ~400 µW – hundreds of times more efficient than typical microcontrollers.
This makes it feasible to run “always-on” monitoring without draining the battery, a game-changer for fleets operating around the clock.
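The range impact is straightforward to estimate. The sketch below assumes an illustrative mid-size EV consumption of 17 kWh/100 km (an assumed figure, not from the blog) to show why a 200–300 W stack matters over a working shift:

```python
def compute_range_cost(stack_watts: float, hours: float,
                       ev_kwh_per_100km: float = 17.0) -> float:
    """Kilometres of EV range consumed by the compute stack alone.
    The 17 kWh/100 km default is an assumed, illustrative consumption."""
    kwh = stack_watts / 1000 * hours        # energy drawn by compute
    return kwh / ev_kwh_per_100km * 100     # equivalent driving range

# A 250 W GPU stack over an 8-hour shift vs. a 1 W neuromorphic module
print(round(compute_range_cost(250, 8), 1))  # ~11.8 km of range lost
print(round(compute_range_cost(1, 8), 3))    # ~0.047 km -- negligible
```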
2. Responsiveness
In driving, milliseconds matter. Neuromorphic processors paired with event-based cameras have been shown to detect changes in microseconds to milliseconds, compared with GPU pipelines that often take tens of milliseconds. At highway speeds, this difference can be the margin between reacting in time or overshooting a hazard.
3. Compact Data Handling
AVs generate around 4 TB of raw sensor data per day. Transmitting all of this to the cloud is impractical. Neuromorphic modules pre-process data into compact “event packets,” reducing transmission to kilobytes rather than gigabytes. This not only lowers communication costs but also enables local sharing over WLAN between vehicles in the same fleet.
4. Sensor-Edge Integration
Neuromorphic chips are small and efficient enough to be placed directly inside sensors like cameras or radar units. SynSense’s Speck and Xylo chips, for example, process motion events with latencies as low as 3 µs while operating in the milliwatt range.
This enables real-time anomaly detection right at the sensor level, filtering irrelevant data before it even reaches the central compute.

Limitations and Challenges​

Despite these advantages, neuromorphic computing is not yet ready to take over the full AV compute stack. Several limitations remain.
1. Immature Toolchains
Training and deploying spiking neural networks (SNNs) is still more complex than working with conventional AI models. Tools like Intel’s Lava, BrainChip’s SDK, and SynSense’s Rockpool are improving, but they lack the maturity, developer base, and ecosystem support of established platforms like NVIDIA CUDA or TensorRT.
This slows adoption, as engineers face higher integration costs and limited standardization.
2. Narrow Task Suitability
Neuromorphic chips excel at sparse, time-sensitive tasks – such as motion detection or anomaly recognition. However, they have not yet matched GPUs in handling dense perception tasks like full-scene semantic segmentation, which are essential for AV planning and navigation.
3. Certification Gap
For any technology to be used in safety-critical vehicle functions, it must meet ISO 26262 / ASIL standards. As of 2025, no neuromorphic processor has achieved this certification. Without it, neuromorphic chips can only be deployed in non-safety-critical supporting roles until further validation is complete.
4. Limited Real-World Testing
Many of the strongest results come from lab demonstrations – gesture recognition, event-based vision datasets, or controlled environments. Scaling these successes into full autonomous driving stacks with varied lighting, weather, and traffic conditions remains an open challenge.

Current Status of Technology​

2025 has been a milestone year for neuromorphic computing. The field has moved from promising research to tangible products, ranging from billion-neuron research platforms to microwatt-level chips that can sit directly inside sensors.
1. Intel’s large-scale systems. Intel continues to lead on the research side. Its Loihi 2 chip supports roughly one million neurons per processor, and in April 2025 the company unveiled Hala Point, the world’s largest neuromorphic system. Built from 1,152 Loihi 2 chips, it simulates 1.15 billion neurons and 128 billion synapses, running at ~2.6 kW. For comparison, traditional GPU clusters require significantly more energy to achieve similar scale. Intel reports up to 15 TOPS/W (trillion operations per second per watt) for certain workloads – orders of magnitude better efficiency than GPUs.
2. Innatera and edge devices. At Computex 2025, Dutch startup Innatera launched Pulsar, a neuromorphic microcontroller (MCU) designed for always-on sensor tasks. Pulsar integrates spiking neural networks with conventional RISC-V cores and digital accelerators. Benchmarks show radar presence detection at ~600 µW and audio classification at ~400 µW, delivering about 100× lower latency and 500× lower energy use compared to traditional MCUs. Its recognition as “Best of Show” at Computex reflects strong industry attention.
3. BrainChip and accessible platforms. BrainChip, one of the earliest commercial players, expanded its Akida product line in 2025 with the Akida Edge AI Box, a Linux-based neuromorphic system designed for edge vision tasks, and Akida Cloud, which gives developers remote access to Akida 2 models without needing hardware. In partnership with Prophesee, BrainChip demonstrated microsecond-class gesture recognition at Embedded World 2025 using an Akida 2 processor with an event-based vision sensor.
4. SynSense and compact SoCs. Swiss startup SynSense has built chips specifically for embedding inside sensors. Its Speck SoC integrates ~328,000 neurons and delivers ~3 µs latency per spike at milliwatt power levels. Its smaller Xylo chip runs about 1,000 neurons at even lower power, enabling ultra-low-power anomaly detection or wake-word type functions directly at the sensor.
5. Academic innovations. Universities continue to experiment with new approaches. In 2025, a memristive neuromorphic chip achieved 93% accuracy on the DVS128 gesture dataset with latencies of ~30 µs per sample and efficiencies above 100 TOPS/W. Meanwhile, UC San Diego’s HiAER-Spike system demonstrated modular scaling up to 160 million neurons and 40 billion synapses, highlighting the potential for scalable and modular neuromorphic designs.
6. Remaining gaps. Despite these achievements, neuromorphic processors are still limited in scope. No chip has yet been certified under ISO 26262 for automotive safety. Developer tools remain less mature than GPU ecosystems. And most proven applications are narrow – gesture recognition, anomaly detection, or event-based vision – not yet the full sensor fusion pipelines needed for AVs.

Market Outlook & Use Cases​

The growth of neuromorphic computing is no longer theoretical. According to Research and Markets’ Global Neuromorphic Computing and Sensing Market Report (Feb 2025), more than 140 companies are actively developing neuromorphic chips and sensors. The report projects the global market will reach tens of billions of USD by 2030, with demand driven by edge AI in autonomous mobility, IoT, and robotics.
Why this matters for AVs. Unlike GPU and ASIC markets, where growth comes from raw throughput, neuromorphic adoption is being pulled by efficiency under power constraints. This is a direct fit with AV operations, where every watt consumed by computing reduces EV driving range.
Adoption timeline. Analysts expect a phased path:
  1. Near term (now–3 years): Edge deployments for non-safety-critical tasks such as pre-processing, anomaly detection, and compact data sharing.
  2. Mid-term (3–7 years): Gradual integration into decision-making roles once ISO 26262/ASIL certification is achieved and large-scale validation is complete.
Emerging use cases already in pilots include:
  1. Sensor intelligence: Embedding neuromorphic MCUs like Innatera Pulsar or SynSense Xylo in cameras or radars to filter noise.
  2. Fleet coordination: Sharing compact hazard packets (kilobytes) instead of raw frames, enabling WLAN-based updates in seconds.
  3. Always-on monitoring: Using sub-watt modules for continuous pedestrian or obstacle detection without draining the main battery.
  4. Hybrid orchestration: Deploying neuromorphic processors as pre-filters to wake the GPU only when heavy computation is needed, reducing duty cycles and extending hardware life.
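The hybrid-orchestration idea in the list above can be sketched as a simple gating policy. This is a toy scheduler under assumed parameters (the `wake_threshold` is hypothetical): the neuromorphic front end counts salient events per window, and the GPU pipeline is woken only when activity exceeds the threshold.

```python
def orchestrate(event_counts, wake_threshold=5):
    """Toy hybrid scheduler: for each sensing window, wake the GPU
    pipeline only if the neuromorphic pre-filter reports enough
    salient events; otherwise keep the GPU in low-power idle."""
    return ["gpu" if n >= wake_threshold else "idle" for n in event_counts]

# Mostly-static scene: the GPU wakes for only 2 of 6 windows
print(orchestrate([0, 1, 9, 0, 12, 2]))
# ['idle', 'idle', 'gpu', 'idle', 'gpu', 'idle']
```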
For fleet operators and OEM executives, the opportunity lies in starting low-risk pilots now. Neuromorphic chips will not replace GPUs, but they can extend battery range, cut operating costs, and improve responsiveness. The risk is competitive: GPU and ASIC makers like NVIDIA, Qualcomm, and Tesla are aggressively improving efficiency. If those gains outpace neuromorphic adoption, the relative advantage could narrow.

Fleet-Level Use Cases: Efficiency, Responsiveness, and Collective Intelligence​

The fleet is where neuromorphic computing shows its greatest potential. By enabling vehicles to sense, share, and adapt locally, neuromorphic chips transform fleets into distributed, cooperative systems rather than isolated units.
1) Fleet Efficiency and Responsiveness
Today, GPUs run at full vigilance, draining 200–300 W per vehicle. Neuromorphic modules can handle “always-on” tasks at sub-watt power, freeing GPUs for heavy lifting only when needed. Their microsecond-to-millisecond reaction times mean hazards like sudden lane changes or debris can be detected and acted upon faster than GPU pipelines allow. For fleets, this translates into extended EV range, less downtime from overheating, and improved safety margins.
2) WLAN-Based Local Data Sharing
Instead of sending large data streams to the cloud, neuromorphic processors create compact event packets: a pothole detection, for example, may be just a few kilobytes. These packets can be shared via WLAN or vehicle-to-vehicle mesh networks, reaching other vehicles in seconds. This avoids cloud latency, reduces bandwidth costs, and keeps fleets resilient even in areas with poor connectivity.
3) Real-Time Collective Intelligence
When vehicles share events locally, the fleet can adapt together:
  1. One vehicle detects an event (pothole, debris, lane closure).
  2. The event is encoded and broadcast over WLAN.
  3. Neighboring vehicles corroborate and adjust planning.
  4. Updates propagate fleet-wide within the affected zone.
  5. This enables fleets to adapt in real time, with seconds-level propagation instead of hours-long cloud retraining cycles.
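The corroboration step above (step 3) can be sketched as a simple voting rule. This is an illustrative filter, not a real fleet protocol: an event identifier is accepted fleet-wide only once enough independent vehicles have reported it, screening out one-off false positives.

```python
from collections import Counter

def corroborate(reports, min_reports=2):
    """Toy fleet-side corroboration: accept an event id only after
    min_reports independent vehicle reports, filtering single-vehicle
    false positives before fleet-wide propagation."""
    counts = Counter(event_id for _vehicle, event_id in reports)
    return {eid for eid, n in counts.items() if n >= min_reports}

# (vehicle, event) packets arriving over WLAN
reports = [("v1", "pothole-17"), ("v2", "pothole-17"), ("v3", "glare-4")]
print(corroborate(reports))  # {'pothole-17'}
```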
4) Route Planning and Ride Settings
Fleet-wide updates can dynamically optimize:
  1. Route costs for affected road segments.
  2. Speed profiles in zones with frequent hazards.
  3. Suspension settings for rough surfaces.
  4. Fleet positioning in response to demand surges (e.g., pedestrian exits after events).
5) Illustrative Scenarios
  1. Pothole detection: A car flags a pothole; within seconds, 10 vehicles reduce speed before reaching it.
  2. Temporary work zone: Multiple reports trigger automatic rerouting of shuttles on a campus.
  3. Ride-hailing demand: Pedestrian surges after a stadium event shift fleet distribution in real time.
  4. Ride comfort: Vehicles adjust suspension collectively when entering cobblestone streets.
6) Strategic Impact for Operators
The benefits are tangible:
  1. Efficiency: Lower GPU use reduces fleet-wide energy costs.
  2. Responsiveness: Hazards are detected and shared in milliseconds to seconds.
  3. Resilience: WLAN sharing reduces cloud dependence, lowering costs and risks.
Crucially, these are low-risk pilots. Because they involve conservative local parameter updates rather than safety-critical decisions, they can be trialed now without waiting for ISO 26262 certification.
Neuromorphic fleet intelligence is a pragmatic entry point – safe enough for pilots, valuable enough to improve operations, and scalable enough to prepare fleets for a future where distributed intelligence is the norm.

Conclusion​

Neuromorphic computing is no longer just an academic experiment – in 2025 it has become a tangible technology with products ranging from billion-neuron research platforms to microwatt-scale sensor chips. For autonomous vehicles, its value lies not in replacing the GPU stack but in complementing it: reducing power draw, shrinking latency to microseconds, and enabling fleets to act as adaptive networks rather than isolated machines. The road to full safety-critical deployment will take years, as ISO 26262 certification and large-scale validation are still ahead. But the opportunity for operators and OEMs is clear today. By piloting neuromorphic modules in low-risk roles – from always-on monitoring to WLAN-based hazard sharing – fleets can cut costs, improve responsiveness, and build the foundation for collective intelligence. Those who move early will not only gain operational efficiency but also shape the standards and practices that will define the next era of autonomous mobility.
 
 


7für7

Top 20
Just on vehicles, ADAS, AV etc.

I know Infosys has been mentioned here previously but this is a blog from about Oct 25 explaining a bit about where neuromorphic will most likely initially fit.

I agree, brand names will work with brand names however there is the acknowledgement that neuromorphic will most likely slot in the background as a complementary tech.

And yes, we are also identified in the blog amongst the other key players.




Brains on Wheels- The Role of Neuromorphic Computing in Next-Gen Mobility​

Executive Summary​

  1. AV compute today is power-hungry and latency-limited. Current stacks rely heavily on GPUs drawing 200–300 W per vehicle, with perception-to-decision latencies often in the tens of milliseconds.
  2. Neuromorphic computing offers a complementary approach. Event-driven spiking neural networks (SNNs) process only changes in data, delivering sub-watt operation and microsecond-to-millisecond-level responsiveness.
  3. 2025 has been a breakthrough year. Intel scaled neuromorphic systems past 1 billion neurons, Innatera launched the first mass-market neuromorphic MCU, and BrainChip, SynSense, and others delivered commercial edge-ready platforms.
  4. Market adoption is shifting from research to pilots. Edge and sensor-level deployments are already feasible; safety-critical roles in AV decision-making will require 3–7 years for certification.
  5. Fleet-level intelligence is the next frontier. Neuromorphic modules enable local WLAN hazard sharing, real-time route adaptation, and collective efficiency gains — giving early adopters measurable advantages.

Introduction​

Autonomous vehicles (AVs) are often described as “data centers on wheels.” They need to see the road, understand what is happening, and make decisions in real time. To do this, they process huge amounts of data from multiple sensors – cameras, LiDAR, radar, and ultrasonics. Current systems rely heavily on GPUs and CPUs, which are powerful but also consume large amounts of energy and generate significant heat. This limits efficiency and creates delays when responding to sudden changes on the road.
Neuromorphic computing, inspired by how the human brain works, is now being explored as a complementary approach. Instead of processing every single piece of data, neuromorphic chips are event-driven – they focus only on changes, like motion or new objects entering the scene. This makes them faster and far more energy efficient. For fleet operators, the attraction is lower running costs and longer battery range. For engineers, the appeal lies in faster responses and the ability to run always-on monitoring at incredibly low power.

The Current AV Compute Stack​

To understand the role neuromorphic computing could play, it helps to first look at the existing setup inside an autonomous vehicle.
1. Data volumes: A single Level 4 vehicle generates around 4 terabytes of sensor data per day. Cameras typically range from 8–16 units per vehicle, while a single LiDAR sensor can generate 1–5 million points per second.
2. Processing hardware:
  • GPUs and high-performance SoCs like NVIDIA’s DRIVE Orin provide up to 254 TOPS (trillion operations per second) to handle perception and decision-making.
  • CPUs manage control logic and orchestration.
  • ASICs and FPGAs are sometimes added for sensor-specific acceleration.
3. Power draw: A full AV compute stack typically consumes 200–300 watts of power per vehicle. This is a significant fraction of an EV’s battery, directly reducing driving range.
4. Latency: Even advanced GPU pipelines introduce delays. Processing perception data and turning it into an action (braking, steering) usually takes tens of milliseconds. That might not sound long, but at 100 km/h, a vehicle covers almost 3 meters in 100 ms – enough to affect safety.
While this architecture has enabled pilot deployments, it is now approaching limits in energy efficiency, thermal management, and reaction time.

Neuromorphic Computing: The Concept​

Neuromorphic computing takes its inspiration from biology, specifically the human brain. Instead of handling data in large frames or datasets, it processes information as a series of spikes or events. If nothing changes in the input, the system stays quiet and consumes almost no energy.
The core building blocks are spiking neural networks (SNNs). These networks mimic how biological neurons fire only when stimulated. For autonomous driving, this means:
  1. Event-driven processing: Rather than analyzing every pixel of every frame, neuromorphic processors focus only on motion or new objects appearing.
  2. Low power operation: Because they ignore static or repetitive data, neuromorphic systems can achieve 10–30 times better energy efficiency for certain workloads than conventional chips.
  3. Fast responsiveness: Paired with event-based cameras, neuromorphic chips can react to sudden changes in microseconds to milliseconds, compared with tens of milliseconds for GPU-based systems.
  4. Edge feasibility: Many neuromorphic devices work at sub-watt or even microwatt levels, allowing them to sit directly in sensors, staying “always on” without draining the battery.
In practical terms, neuromorphic processors do not aim to replace GPUs entirely. Instead, they serve as lightweight, ultra-responsive companions, ideal for tasks such as continuous hazard detection, filtering out irrelevant data, and sharing compact alerts across a fleet.

Advantages of Neuromorphic Computing for AVs​

Neuromorphic computing brings several practical advantages that directly address the weaknesses of today’s GPU-heavy AV stacks.
1. Energy Efficiency
Traditional AV compute systems draw 200–300 watts continuously, reducing EV driving range. Neuromorphic processors, by contrast, operate on event-driven workloads and often consume sub-watt or even microwatt levels. For example, Innatera’s Pulsar MCU, launched in 2025, demonstrated radar presence detection at ~600 µW and audio classification at ~400 µW – hundreds of times more efficient than typical microcontrollers.
This makes it feasible to run “always-on” monitoring without draining the battery, a game-changer for fleets operating around the clock.
2. Responsiveness
In driving, milliseconds matter. Neuromorphic processors paired with event-based cameras have been shown to detect changes in microseconds to milliseconds, compared with GPU pipelines that often take tens of milliseconds. At highway speeds, this difference can be the margin between reacting in time or overshooting a hazard.
3. Compact Data Handling
AVs generate around 4 TB of raw sensor data per day. Transmitting all of this to the cloud is impractical. Neuromorphic modules pre-process data into compact “event packets,” reducing transmission to kilobytes rather than gigabytes. This not only lowers communication costs but also enables local sharing over WLAN between vehicles in the same fleet.
4. Sensor-Edge Integration
Neuromorphic chips are small and efficient enough to be placed directly inside sensors like cameras or radar units. SynSense’s Speck and Xylo chips, for example, process motion events with latencies as low as 3 µs while operating in the milliwatt range.
This enables real-time anomaly detection right at the sensor level, filtering irrelevant data before it even reaches the central compute.

Limitations and Challenges​

Despite these advantages, neuromorphic computing is not yet ready to take over the full AV compute stack. Several limitations remain.
1. Immature Toolchains
Training and deploying spiking neural networks (SNNs) is still more complex than working with conventional AI models. Tools like Intel’s Lava, BrainChip’s SDK, and SynSense’s Rockpool are improving, but they lack the maturity, developer base, and ecosystem support of established platforms like NVIDIA CUDA or TensorRT.
This slows adoption, as engineers face higher integration costs and limited standardization.
2. Narrow Task Suitability
Neuromorphic chips excel at sparse, time-sensitive tasks – such as motion detection or anomaly recognition. However, they have not yet matched GPUs in handling dense perception tasks like full-scene semantic segmentation, which are essential for AV planning and navigation.
3. Certification Gap
For any technology to be used in safety-critical vehicle functions, it must meet ISO 26262 / ASIL standards. As of 2025, no neuromorphic processor has achieved this certification. Without it, neuromorphic chips can only be deployed in non-safety-critical supporting roles until further validation is complete.
4. Limited Real-World Testing
Many of the strongest results come from lab demonstrations – gesture recognition, event-based vision datasets, or controlled environments. Scaling these successes into full autonomous driving stacks with varied lighting, weather, and traffic conditions remains an open challenge.

Current Status of Technology​

2025 has been a milestone year for neuromorphic computing. The field has moved from promising research to tangible products, ranging from billion-neuron research platforms to microwatt-level chips that can sit directly inside sensors.
1. Intel’s large-scale systems. Intel continues to lead on the research side. Its Loihi 2 chip supports roughly one million neurons per processor, and in April 2025 the company unveiled Hala Point, the world’s largest neuromorphic system. Built from 1,152 Loihi 2 chips, it simulates 1.15 billion neurons and 128 billion synapses, running at ~2.6 kW. For comparison, traditional GPU clusters require significantly more energy to achieve similar scale.. Intel reports up to 15 trillion operations per second per watt (TOPS/W) for certain workloads – orders of magnitude better efficiency than GPUs.
2. Innatera and edge devices. At Computex 2025, Dutch startup Innatera launched Pulsar, a neuromorphic microcontroller (MCU) designed for always-on sensor tasks. Pulsar integrates spiking neural networks with conventional RISC-V cores and digital accelerators. Benchmarks show radar presence detection at ~600 µW and audio classification at ~400 µW, delivering about 100× lower latency and 500× lower energy use compared to traditional MCUs. Its recognition as “Best of Show” at Computex reflects strong industry attention.
3. BrainChip and accessible platforms. BrainChip, one of the earliest commercial players, expanded its Akida product line in 2025 with the Akida Edge AI Box, a Linux-based neuromorphic system designed for edge vision tasks, and Akida Cloud, which gives developers remote access to Akida 2 models without needing hardware. In partnership with Prophesee, BrainChip demonstrated microsecond-class gesture recognition at Embedded World 2025 using an Akida 2 processor with an event-based vision sensor.
4. SynSense and compact SoCs. Swiss startup SynSense has built chips specifically for embedding inside sensors. Its Speck SoC integrates ~328,000 neurons and delivers ~3 µs latency per spike at milliwatt power levels. Its smaller Xylo chip runs about 1,000 neurons at even lower power, enabling ultra-low-power anomaly detection or wake-word type functions directly at the sensor.
5. Academic innovations. Universities continue to experiment with new approaches. In 2025, a memristive neuromorphic chip achieved 93% accuracy on the DVS128 gesture dataset with latencies of ~30 µs per sample and efficiencies above 100 TOPS/W. Meanwhile, UC San Diego’s HiAER-Spike system demonstrated modular scaling up to 160 million neurons and 40 billion synapses, highlighting the potential for scalable and modular neuromorphic designs.
6. Remaining gaps. Despite these achievements, neuromorphic processors are still limited in scope. No chip has yet been certified under ISO 26262 for automotive safety. Developer tools remain less mature than GPU ecosystems. And most proven applications are narrow – gesture recognition, anomaly detection, or event-based vision – not yet the full sensor fusion pipelines needed for AVs.

Market Outlook & Use Cases​

The growth of neuromorphic computing is no longer theoretical. According to Research and Markets’ Global Neuromorphic Computing and Sensing Market Report (Feb 2025), more than 140 companies are actively developing neuromorphic chips and sensors. The report projects the global market will reach the tens of billions USD by 2030, with demand driven by edge AI in autonomous mobility, IoT, and robotics.
Why this matters for AVs. Unlike GPU and ASIC markets, where growth comes from raw throughput, neuromorphic adoption is being pulled by efficiency under power constraints. This is a direct fit with AV operations, where every watt consumed by computing reduces EV driving range.
Adoption timeline. Analysts expect a phased path:
  1. Near term (now–3 years): Edge deployments for non-safety-critical tasks such as pre-processing, anomaly detection, and compact data sharing.
  2. Mid-term (3–7 years): Gradual integration into decision-making roles once ISO 26262/ASIL certification is achieved and large-scale validation is complete.
Emerging use cases already in pilots include:
  1. Sensor intelligence: Embedding neuromorphic MCUs like Innatera Pulsar or SynSense Xylo in cameras or radars to filter noise.
  2. Fleet coordination: Sharing compact hazard packets (kilobytes) instead of raw frames, enabling WLAN-based updates in seconds.
  3. Always-on monitoring: Using sub-watt modules for continuous pedestrian or obstacle detection without draining the main battery.
  4. Hybrid orchestration: Deploying neuromorphic processors as pre-filters to wake the GPU only when heavy computation is needed, reducing duty cycles and extending hardware life.
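The hybrid orchestration idea above can be sketched in a few lines. This is a toy illustration, not any vendor's API — the class name, window size, and threshold are all assumptions: a low-power front end accumulates event counts and signals a wake-up to the heavy GPU pipeline only when activity crosses a threshold.

```python
from collections import deque

class EventGate:
    """Toy neuromorphic-style pre-filter: sums event counts over a sliding
    window and signals a GPU wake-up only when activity crosses a threshold.
    Class name, window size, and threshold are illustrative assumptions."""

    def __init__(self, window: int = 4, wake_threshold: int = 10):
        self.counts = deque(maxlen=window)  # recent per-tick event counts
        self.wake_threshold = wake_threshold

    def observe(self, event_count: int) -> bool:
        """Record one sensor tick; return True if the GPU should wake."""
        self.counts.append(event_count)
        return sum(self.counts) >= self.wake_threshold

gate = EventGate(window=4, wake_threshold=10)
ticks = [0, 1, 0, 2, 8, 9, 0, 0, 0, 0]  # quiet scene, brief burst, quiet again
wakes = [gate.observe(t) for t in ticks]
print(wakes)  # wakes only during and shortly after the burst
```

The point of the sketch is the duty cycle: the GPU stays asleep through the quiet ticks and is woken only for the burst, which is exactly the behavior that extends range and hardware life.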
For fleet operators and OEM executives, the opportunity lies in starting low-risk pilots now. Neuromorphic chips will not replace GPUs, but they can extend battery range, cut operating costs, and improve responsiveness. The risk is competitive: GPU and ASIC makers like NVIDIA, Qualcomm, and Tesla are aggressively improving efficiency. If those gains outpace neuromorphic adoption, the relative advantage could narrow.

Fleet-Level Use Cases: Efficiency, Responsiveness, and Collective Intelligence​

The fleet is where neuromorphic computing shows its greatest potential. By enabling vehicles to sense, share, and adapt locally, neuromorphic chips transform fleets into distributed, cooperative systems rather than isolated units.
1) Fleet Efficiency and Responsiveness
Today, GPUs run at full vigilance, draining 200–300 W per vehicle. Neuromorphic modules can handle “always-on” tasks at sub-watt power, freeing GPUs for heavy lifting only when needed. Their microsecond-to-millisecond reaction times mean hazards like sudden lane changes or debris can be detected and acted upon faster than GPU pipelines allow. For fleets, this translates into extended EV range, less downtime from overheating, and improved safety margins.
2) WLAN-Based Local Data Sharing
Instead of sending large data streams to the cloud, neuromorphic processors create compact event packets: a pothole detection, for example, may be just a few kilobytes. These packets can be shared via WLAN or vehicle-to-vehicle mesh networks, reaching other vehicles in seconds. This avoids cloud latency, reduces bandwidth costs, and keeps fleets resilient even in areas with poor connectivity.
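To make the size difference concrete, here is a hypothetical wire format for such an event packet — the layout and field choices are illustrative, not a real V2X standard: an event type, fixed-point coordinates, a severity grade, and a timestamp fit into 14 bytes, versus megabytes for a raw camera frame.

```python
import struct

# Hypothetical wire format for a hazard packet (illustrative, not a real
# V2X standard): event type (uint8), lat/lon as fixed-point int32 in
# micro-degrees, severity (uint8), unix timestamp (uint32) -> 14 bytes.
HAZARD_FMT = ">BiiBI"
EVENT_POTHOLE = 1

def encode_hazard(event_type, lat, lon, severity, ts):
    return struct.pack(HAZARD_FMT, event_type,
                       round(lat * 1_000_000), round(lon * 1_000_000),
                       severity, ts)

def decode_hazard(packet):
    et, lat_i, lon_i, sev, ts = struct.unpack(HAZARD_FMT, packet)
    return et, lat_i / 1_000_000, lon_i / 1_000_000, sev, ts

pkt = encode_hazard(EVENT_POTHOLE, 48.137154, 11.576124, 3, 1700000000)
print(len(pkt))  # 14
```

At 14 bytes per event, even a chatty fleet stays within the budget of a lossy WLAN mesh, which is why event packets are resilient where raw-frame streaming is not.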
3) Real-Time Collective Intelligence
When vehicles share events locally, the fleet can adapt together:
  1. One vehicle detects an event (pothole, debris, lane closure).
  2. The event is encoded and broadcast over WLAN.
  3. Neighboring vehicles corroborate and adjust planning.
  4. Updates propagate fleet-wide within the affected zone.
This enables fleets to adapt in real time, with seconds-level propagation instead of hours-long cloud retraining cycles.
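The corroborate-then-adapt step can be sketched as follows (all names and thresholds here are illustrative): a vehicle applies a conservative local parameter update for a road zone only once enough distinct peers have reported the same hazard, which guards against a single noisy detection.

```python
from collections import defaultdict

class FleetNode:
    """Toy corroboration logic (names and thresholds are illustrative):
    a vehicle marks a road zone for a speed reduction only after enough
    distinct peers have reported a hazard there."""

    def __init__(self, corroboration_threshold: int = 2):
        self.reports = defaultdict(set)  # zone_id -> set of reporting peer ids
        self.threshold = corroboration_threshold
        self.slow_zones = set()

    def receive(self, zone_id: str, peer_id: str) -> None:
        self.reports[zone_id].add(peer_id)
        if len(self.reports[zone_id]) >= self.threshold:
            self.slow_zones.add(zone_id)  # conservative local parameter update

node = FleetNode(corroboration_threshold=2)
node.receive("segment-17", "car-A")     # first report: not yet acted on
node.receive("segment-17", "car-A")     # duplicate from same peer is ignored
node.receive("segment-17", "car-B")     # second distinct peer: zone marked
print("segment-17" in node.slow_zones)  # True
```

Counting distinct peers rather than raw messages is the design choice that keeps this a low-risk, non-safety-critical update: one faulty sensor cannot slow the fleet on its own.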
4) Route Planning and Ride Settings
Fleet-wide updates can dynamically optimize:
  1. Route costs for affected road segments.
  2. Speed profiles in zones with frequent hazards.
  3. Suspension settings for rough surfaces.
  4. Fleet positioning in response to demand surges (e.g., pedestrian exits after events).
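As a minimal sketch of the route-cost idea (toy graph, illustrative numbers): a hazard report inflates the cost of the affected road segment, and the next standard shortest-path query reroutes around it.

```python
import heapq

# Toy road graph with segment costs in seconds; names and numbers are
# illustrative. A hazard report inflates the cost of one edge and the
# next shortest-path query reroutes around it.
graph = {
    "A": {"B": 60, "C": 90},
    "B": {"D": 50},
    "C": {"D": 30},
    "D": {},
}

def shortest_path(graph, src, dst):
    """Plain Dijkstra returning the node sequence from src to dst."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        if u == dst:
            break
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

route_before = shortest_path(graph, "A", "D")
graph["B"]["D"] *= 3                 # pothole reported on B -> D
route_after = shortest_path(graph, "A", "D")
print(route_before, route_after)
```

Because only an edge weight changes, no retraining is involved — the fleet's planners simply see a more expensive segment, which is what makes seconds-level propagation feasible.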
5) Illustrative Scenarios
  1. Pothole detection: A car flags a pothole; within seconds, 10 vehicles reduce speed before reaching it.
  2. Temporary work zone: Multiple reports trigger automatic rerouting of shuttles on a campus.
  3. Ride-hailing demand: Pedestrian surges after a stadium event shift fleet distribution in real time.
  4. Ride comfort: Vehicles adjust suspension collectively when entering cobblestone streets.
6) Strategic Impact for Operators
The benefits are tangible:
  1. Efficiency: Lower GPU use reduces fleet-wide energy costs.
  2. Responsiveness: Hazards are detected and shared in milliseconds to seconds.
  3. Resilience: WLAN sharing reduces cloud dependence, lowering costs and risks.
Crucially, these are low-risk pilots. Because they involve conservative local parameter updates rather than safety-critical decisions, they can be trialed now without waiting for ISO 26262 certification.
Neuromorphic fleet intelligence is a pragmatic entry point – safe enough for pilots, valuable enough to improve operations, and scalable enough to prepare fleets for a future where distributed intelligence is the norm.

Conclusion​

Neuromorphic computing is no longer just an academic experiment – in 2025 it has become a tangible technology with products ranging from billion-neuron research platforms to microwatt-scale sensor chips. For autonomous vehicles, its value lies not in replacing the GPU stack but in complementing it: reducing power draw, shrinking latency to microseconds, and enabling fleets to act as adaptive networks rather than isolated machines. The road to full safety-critical deployment will take years, as ISO 26262 certification and large-scale validation are still ahead. But the opportunity for operators and OEMs is clear today. By piloting neuromorphic modules in low-risk roles – from always-on monitoring to WLAN-based hazard sharing – fleets can cut costs, improve responsiveness, and build the foundation for collective intelligence. Those who move early will not only gain operational efficiency but also shape the standards and practices that will define the next era of autonomous mobility.

Yes, I know — and you’re right: neuromorphic isn’t just a research tool.
The real question is whether large companies are willing to take the risk of integrating this kind of technology into mass production (I’m not referring to Onsor, which is also a startup).
We’re already seeing new “smart” products enter the market — but it looks like many of them can reach their target functionality with today’s conventional solutions.
And since these are newly launched product generations, that reality likely won’t change quickly.

Maybe I’m just impatient, but I was expecting more momentum early in the year (CES included).
Right now it still feels more like “capability showcasing” than consistent deal execution. Like all the other years before….
I’m staying optimistic …just hoping to see clearer commercial milestones soon.

It’s not about getting rich now… yes would be nice… but I would be already happy if the share price would at least go back to 25 cent…30 cent …. This 17.5 …18…. 17.3…. 17.8…17.5 is just ridiculous
 
Last edited:
  • Like
  • Thinking
  • Sad
Reactions: 14 users
If anyone is going to CES, can you ask Quantum Ventura if they're bringing out a USB with BrainChip inside anytime this century.... please.😃
 
  • Haha
  • Fire
Reactions: 5 users
Good to see BRN registered the TENN trademark back in 2023, with it currently Live / Pending.



Screenshot_2026-01-06-12-25-57-26_4641ebc0df1485bf6b47ebd018b5ee76.jpg
Screenshot_2026-01-06-12-26-19-76_4641ebc0df1485bf6b47ebd018b5ee76.jpg
 
  • Like
  • Fire
  • Thinking
Reactions: 24 users

TopCat

Regular
It’s nice they’re on speaking terms 🙂

IMG_0901.jpeg
 
  • Like
  • Fire
Reactions: 11 users

7für7

Top 20
It’s nice they’re on speaking terms 🙂

View attachment 94061
Would be nice if they would communicate with their shareholders as well… some kind of update on what they are doing… some pics from CES or anything… even a “yo we are so drunk we overslept”, then we at least would know… ok, that’s why they are quiet…


Les ooo rainship

Comedy Reaction GIF by Bayerischer Rundfunk
 
  • Haha
Reactions: 3 users

BigDonger101

Founding Member
Would be nice if they would communicate with their shareholders as well… some kind of update on what they are doing… some pics from CES or anything… even a “yo we are so drunk we overslept”, then we at least would know… ok, that’s why they are quiet…


Les ooo rainship

Comedy Reaction GIF by Bayerischer Rundfunk
Their booth is tomorrow. Jan 7th US time at 7pm.

1767681233649.png


They also have collaborations with two other partners at the CES. Not exactly sure when they will be but it can't be hard to find.
 
  • Like
  • Fire
Reactions: 3 users

7für7

Top 20
Their booth is tomorrow. Jan 7th US time at 7pm.

View attachment 94063

They also have collaborations with two other partners at the CES. Not exactly sure when they will be but it can't be hard to find.

Thanks…I know that… but as I said, other companies communicate already via social media about progress or taking pictures of the display with words like “looking forward” etc
 
  • Like
Reactions: 3 users
Thanks…I know that… but as I said, other companies communicate already via social media about progress or taking pictures of the display with words like “looking forward” etc
Awww...c'mon mate. Gotta give em time to finish up first and get organised :LOL:

BRN HQ.png
 
  • Haha
Reactions: 6 users

7für7

Top 20
  • Haha
  • Thinking
Reactions: 4 users

miaeffect

Oat latte lover
Thanks…I know that… but as I said, other companies communicate already via social media about progress or taking pictures of the display with words like “looking forward” etc
Not far away
ironing-robot.gif
 
  • Haha
  • Love
  • Like
Reactions: 14 users

CHIPS

Regular
Unfortunately no neuromorphic compute I don't believe.



1767686850921.png
 
  • Like
  • Sad
Reactions: 5 users


View attachment 94067
Ask Ai says no, yet one never knows


No, Intel Core Ultra processors do not directly use neuromorphic compute. They are designed for conventional computing tasks, albeit with specialized AI acceleration capabilities through their integrated Neural Processing Unit (NPU) and other architectural enhancements. Neuromorphic computing, as exemplified by Intel's Loihi series, represents a fundamentally different, brain-inspired approach to AI processing that is currently in the research and development phase for specialized applications.[1]
 
  • Like
Reactions: 3 users
Thanks…I know that… but as I said, other companies communicate already via social media about progress or taking pictures of the display with words like “looking forward” etc
Maybe the reason we have no new agreements is that they're currently asleep at the wheel while everyone else goes flying past.

But has anyone seen this


1. The Award "Validation"


You asked about awards: Naqi Logix officially received the 2026 CES Best of Innovation Award today for their Neural Earbuds.


• Why this isn't just "same old": This is the highest honor at CES. These earbuds are the flagship showcase for Akida. While Naqi gets the trophy, the "Intel Inside" story is what BrainChip is using in the suite right now to pitch to other manufacturers. It proves that a "Best in Show" product literally cannot work without BrainChip's ultra-low-power processing.
 
  • Like
  • Fire
  • Love
Reactions: 18 users

7für7

Top 20
Maybe already posted, but yeah… just for the positivity

 
  • Like
Reactions: 3 users

CHIPS

Regular


1767693865635.png


1767693947645.png
 
  • Like
Reactions: 2 users

gilti

Regular
Maybe the reason we have no new agreements is that they're currently asleep at the wheel while everyone else goes flying past.

But has anyone seen this


1. The Award "Validation"


You asked about awards: Naqi Logix officially received the 2026 CES Best of Innovation Award today for their Neural Earbuds.


• Why this isn't just "same old": This is the highest honor at CES. These earbuds are the flagship showcase for Akida. While Naqi gets the trophy, the "Intel Inside" story is what BrainChip is using in the suite right now to pitch to other manufacturers. It proves that a "Best in Show" product literally cannot work without BrainChip's ultra-low-power processing.
F4T, can you provide a reference to where Naqi are using Akida? I can't find anything other than a couple of "we would be a good fit" comments early last year when Naqi won the second of their awards at CES. I can see no reference on their website either.
Prove me wrong and I will be a very happy chappie.
 
  • Like
  • Fire
Reactions: 3 users
Top Bottom