BRN Discussion Ongoing

Diogenese

Top 20
There are 6 companies in the BRN front page scroll:

Chelpis for the M.2 cybersecurity PCB;

Andes RISC-V AI acceleration;

Prophesee DVS;

Frontgrade GRAIN space neuromorphic AI;

RTX AFRL radar;

Onsor AI medical epileptic monitor.

The following extracts from the BRN website are informative when seen collectively.

Chelpis -

Laguna Hills, Calif. – April 28th, 2025 BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today announced that Chelpis Quantum Corp. has selected its Akida AKD1000 chips to serve as the processor for built-in post-quantum cryptographic security. Chelpis, a chip company leading the Quantum Safe Migration ecosystem in Taiwan, is developing an M.2 card using the AKD1000 that can be inserted into targeted products to support their cryptographic security solutions. The M.2 card is based on a design from BrainChip along with an agreement to purchase a significant number of AKD1000 chips for qualification and deployment. Upon completion of this phase, Chelpis is planning to increase its commitment with additional orders for the AKD1000.

This agreement is the first step in a collaboration that is exploring the development of an AI-PQC robotic chip designed to fulfill both next-generation security and AI computing requirements. This project is a joint development effort with Chelpis partner company Mirle (2464.TW) and has been formally submitted for consideration under Taiwan’s chip innovation program. The funding aims to promote a new system-on-chip (SoC) integrating RISC-V, PQC, and NPU technologies. This SoC will specifically support manufacturing markets that emphasize a Made-in-USA strategy. Mirle plans to build autonomous quadruped robotics that mimic the movement of four-legged animals for industrial/factory environments. To enable this vision, Chelpis is exploring BrainChip’s advanced Akida™ IP to incorporate advanced visual GenAI capabilities in the proposed SoC design.


I wonder if there is any link to the DoE Quantum Ventura CyberNeuro-RT project, which also uses the Akida AKD1000 chip on an M.2 card?


ANDES -
Laguna Hills, Calif. – April 23, 2025 BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, brain-inspired AI, today announced the integration of its NPUs with RISC-V cores from Andes Technology, the industry leading provider of RISC-V embedded cores. The companies will demonstrate BrainChip’s Akida™ AKD1500 on Andes’ QiLai Voyager Board and AndesCore™ AX45MP 64-bit multicore CPU IP at Andes RISC-V Con 2025 in San Jose, Calif. April 29 and in Hsinchu, Taiwan June 10.

The AKD1500 demonstrates the benefits of Akida’s pure digital, extremely energy-efficient, event-based AI computation for at-sensor or sensor-balanced solutions for AI, application processors, automotive electronics and security markets. The QiLai SoC and the Voyager development board further accelerates the development and porting of large RISC-V applications. Integrating BrainChip’s Akida technology and Andes’ high-performance QiLai SoC and Voyager development board provides a highly efficient solution for integrated edge AI compute and further expands the RISC-V ecosystem.

The BrainChip AKD1500 device is integrated into the Voyager development board using an M.2 card form factor. It delivers over 0.7 TOPS of event-based computing while consuming less than 250mW, achieving performance comparable to conventional CNN processing using 3–10× less compute. This demonstrates a cost and power-efficient path for integrating RISC-V SoCs, operating at a fraction of the power required by traditional AI accelerators. Akida is an event-based technology that is inherently lower power than conventional neural network accelerators, providing energy efficiency with high performance for partners to deliver AI solutions previously not possible on battery-operated or fan-less embedded, edge devices.

The Andes QiLai SoC chip incorporates a high-performance quad-core RISC-V AX45MP cluster. The AndesCore AX45MP is a superscalar, multicore design featuring a shared Level-2 cache, a coherence manager and a Memory Management Unit (MMU) to support Linux-based applications. Equipped with IOCP (I/O Coherency Port) interface, the AX45MP enables external hardware DMA to interact directly with the cache/memory subsystem, facilitating seamless communication between the AX45MP and high-speed modules like NPUs, GPUs and Gigabit Ethernet. Built on TSMC’s 7nm process technology, the AX45MP achieves clock speeds of up to 2.2 GHz on QiLai SoC. With higher Specint2006 performance than Cortex A55, high-clock frequency and multi-core Linux capabilities, the AX45MP has been very popular as Linux AP on various applications.


Andes also uses the Akida AKD1500 device on an M.2 card.
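
For a rough sense of scale, the quoted AKD1500 figures work out as follows. This is just back-of-the-envelope arithmetic on the press-release numbers (over 0.7 TOPS at under 250 mW, and a claimed 3-10x compute saving versus an equivalent CNN), not a benchmark:

```python
# Back-of-the-envelope efficiency from the quoted AKD1500 figures.
tops = 0.7        # quoted event-based throughput, TOPS (stated as a lower bound)
power_w = 0.250   # quoted power draw, watts (stated as an upper bound)

tops_per_watt = tops / power_w
print(f"AKD1500: ~{tops_per_watt:.1f} TOPS/W (>0.7 TOPS at <250 mW)")

# The release also claims results comparable to conventional CNN processing
# using 3-10x less compute, so the CNN-equivalent efficiency would be roughly:
for factor in (3, 10):
    print(f"CNN-equivalent at {factor}x compute saving: ~{tops_per_watt * factor:.0f} TOPS/W")
```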


Prophesee/Arquimea -

Enhancing Ocean Safety with Efficient AI-Powered Detection


Arquimea has deployed Akida with a Prophesee camera on a drone to detect distressed swimmers and surfers in the ocean helping lifeguards scale their services for large beach areas, opting for an event-based computing solution for its superior efficiency and consistently high-quality results.

BrainChip has teamed up with Prophesee, an event-based vision sensing and AI company, to produce a complete event-based sense and compute solution.

This integration allows vision data to be processed directly by Akida without the need for conversion into traditional frame-based formats, as required by conventional computer vision algorithms.
By computing the naturally sparse data output from the camera efficiently, the system eliminates unnecessary padding, reducing latency for faster detection while minimizing computational demands and power consumption.

The details provided with the Prophesee link also refer to the use of Akida with lidar and radar, as well as the use of DVS with frame cameras.

Again, while not expressly stated, an M.2 board would be suitable for drones because of its small size and standardized pinout.


FRONTGRADE -

The Swedish National Space Agency (SNSA) has awarded Frontgrade Gaisler, a leading provider of radiation-hardened microprocessors for space missions, a contract to commercialise the first neuromorphic System on Chip (SoC) device for space applications. The device, which is currently under development at Frontgrade Gaisler, is part of the recently announced GRAIN (Gaisler Research Artificial Intelligence NOEL-V) product line.

The first GRAIN device in the product line is the GR801 SoC, which integrates Akida™ neuromorphic technology from BrainChip, the first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI.


The GR801 combines Gaisler’s NOEL-V RISC-V processor with the Akida neuromorphic AI processor into a single integrated circuit, facilitating energy-efficient AI applications in space. Sweden’s Royal Institute of Technology (KTH) is contributing to this initiative by designing a demonstration application that utilises a neuromorphic sensor directly connected to Gaisler’s new GR801 device.


Frontgrade is using Akida IP to develop an SoC that combines Akida with a NOEL-V RISC-V processor, which means Frontgrade will be adapting the Akida architecture to maximize radiation hardness.


RTX (Raytheon)

Laguna Hills, Calif. – April 1st, 2025 BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today announced that it is partnering with Raytheon, an RTX (NYSE: RTX) business, to service a contract for $1.8M from the Air Force Research Laboratory on neuromorphic radar signaling processing.

Raytheon will deliver services and support as a partner with BrainChip for the completion of the contract award. The Air Force Research Labs contract, under the topic number AF242-D015, is titled “Mapping Complex Sensor Signal Processing Algorithms onto Neuromorphic Chips.” The project focuses on a specific type of radar processing known as micro-Doppler signature analysis, which offers unprecedented activity discrimination capabilities.

Neuromorphic hardware represents a low-power solution for edge devices, consuming significantly less energy than traditional computing hardware for signal processing and artificial intelligence tasks. If successful, this project could embed sophisticated radar processing solutions in power-constrained and thermally constrained weapon systems, such as missiles, drones and drone defense systems.

BrainChip’s Akida™ processor is a revolutionary computing architecture that is designed to process neural networks and machine learning algorithms at ultra-low power consumption, making it ideal for edge computing applications. The company’s neuromorphic technology improves the cognitive communication capabilities on size, weight and power & cost (SWaP-C)-constrained platforms such as military, spacecraft and robotics for commercial and government markets.

“Radar signaling processing will be implemented on ever-smaller mobile platforms, so minimizing system SWaP-C is critical,” said Sean Hehir, CEO of BrainChip. “This improved radar signaling performance per watt for the Air Force Research Laboratory showcases how neuromorphic computing can achieve significant benefits in the most mission-critical use cases.”

I'm guessing that RTX has the micro-Doppler signature data, and the project will be to turn that data into models that run on Akida.

Note that Akida GenAI and Akida 3 are being developed to avoid the necessity of adapting other models for Akida. These prospective Akida designs will also run INT16 and FP32, which will provide greater accuracy; that may be needed for micro-Doppler.


ONSOR -

Onsor Technologies has revolutionized seizure management with its AI-powered wearable solution, powered by BrainChip’s Akida neuromorphic computing platform. This groundbreaking innovation provides an early warning system for epileptic seizures—empowering users to take control of their health and well-being like never before.


Onsor’s breakthrough comes in the form of AI-enabled smart glasses that continuously analyze real-time sensor data using a custom-trained AI model running on the ultra-efficient Akida neuromorphic processor. This self-contained, edge AI solution operates independently while eliminating the need for network connectivity, subscriptions, or bulky external devices.

When a seizure risk is detected, instant alerts are sent to the user and their caregivers via a mobile device, enabling timely intervention and proactive care.

The arms of the glasses include electrodes that detect the brainwave activity which presages a seizure.
 
  • Like
  • Fire
  • Love
Reactions: 31 users

Surely with this game-breaking news I can see some great publicity coming for BrainChip, but with all the secretiveness we'll all have to look at the financials.
 
  • Like
  • Sad
Reactions: 6 users

Guzzi62

Regular
Didn't they say at the AGM that they were just about to change the website for a new design?

And the road map added?

It looks the same to me?
 
  • Like
Reactions: 1 users

HopalongPetrovski

I'm Spartacus!
Didn't they say at the AGM that they were just about to change the website for a new design?

And the road map added?

It looks the same to me?
Yes. They did say the website was going to be updated and that vids of the roadmap and the AGM were also going to be made available.
From memory the videos last year were out 7-10 days after the event.
 
  • Like
Reactions: 3 users

Tothemoon24

Top 20

A new wave of computing is on the horizon, modeled not after silicon logic gates but after the human brain. This brain-like technology, called neuromorphic computing, is quickly maturing into a viable tool for energy-efficient, real-time processing, opening up powerful opportunities for business.

For decades, computer processors have largely followed the same basic design: fast, sequential, and power-hungry. However, as the world increasingly relies on artificial intelligence, edge computing, and real-time data analysis, that traditional approach is beginning to show its limits. Enter neuromorphic computing, a radically different paradigm inspired by how neurons and synapses work in the brain.

A recent article in Nature Communications showcases how this technology is gaining traction beyond research labs. Neuromorphic chips don't rely on conventional binary processing. Instead, they work through networks of artificial neurons, handling information through spikes - short bursts of electrical activity, just like our brains. This allows them to be both incredibly fast and extraordinarily efficient in power usage.
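
For readers who haven't met spiking networks before, here is a minimal leaky integrate-and-fire neuron in Python. It is only a toy illustration of the "short bursts of electrical activity" idea described above; the time constants and thresholds are made up, and it does not represent Akida's actual implementation:

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Toy leaky integrate-and-fire neuron: integrates its input, leaks toward
    zero, and emits a spike (1) whenever the membrane potential crosses threshold."""
    v = 0.0
    spikes = []
    for i in input_current:
        v += dt * (-v / tau + i)   # leak plus integration of the input drive
        if v >= v_thresh:          # threshold crossing produces a spike
            spikes.append(1)
            v = v_reset            # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant drive produces a regular spike train; a stronger drive fires faster,
# so information is carried in the timing and rate of spikes, not in dense numbers.
weak = lif_neuron(np.full(1000, 60.0)).sum()
strong = lif_neuron(np.full(1000, 120.0)).sum()
print(f"spikes in one simulated second: weak drive {weak}, strong drive {strong}")
```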

“This shift from a laboratory curiosity to a commercially viable architecture represents a significant milestone for neuromorphic computing,” the authors write, noting that the hardware and software ecosystem is beginning to resemble the early days of the AI-focused GPU revolution.

That’s not just an academic breakthrough. It’s a business opportunity.

Why neuromorphic tech matters for the industry​

Traditional AI models require vast amounts of data and energy, often processed in centralized data centers. Neuromorphic systems promise to flip that model. Their strength lies in performing complex computations at the “edge” - in the device itself - whether it's a factory sensor, a drone, or a wearable device. This kind of local processing greatly reduces the need to constantly send data back and forth to the cloud, which is faster and much more energy-efficient.

Take manufacturing, for example. A neuromorphic chip embedded in a robotic arm could help it instantly adapt to new conditions on the fly, like detecting subtle defects or adjusting grip in response to slippery materials, without having to wait for remote AI instructions. In logistics, the same approach could allow delivery robots or drones to navigate dynamically in complex environments while consuming minimal power.


These chips also excel at handling time-sensitive data like speech, motion, or fluctuating real-world signals. Because they operate in real time and process data only when an event occurs (rather than continuously polling for input), neuromorphic systems can handle such information far more efficiently than standard processors. That means smarter hearing aids, more responsive smart home systems, and real-time anomaly detection in financial systems or cybersecurity applications.
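
To make the "compute only when an event occurs" point concrete, here is a small sketch (my own illustrative numbers, not vendor code) comparing a polling-style pipeline that touches every sample with an event-driven one that only does work on the samples that actually changed:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.zeros(100_000)                                  # one second of a mostly quiet sensor
active = rng.choice(signal.size, size=500, replace=False)   # only 0.5% of samples carry events
signal[active] = rng.normal(size=active.size)

# Frame/polling style: every sample is processed, whether or not anything happened.
polled_ops = signal.size

# Event-driven style: only the non-zero (changed) samples generate any work.
event_ops = np.flatnonzero(signal).size

print(f"polling processes {polled_ops} samples; event-driven processes {event_ops}")
print(f"~{polled_ops / event_ops:.0f}x fewer operations on this 0.5%-sparse stream")
```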

As the researchers note, “Spiking neural networks and in-memory computation provide a natural fit for energy-constrained, real-time applications that dominate edge computing.” In other words, this architecture wasn’t just built for academic curiosity; it’s purpose-built for the kind of practical challenges many industries now face.

Lessons from the GPU revolution​

The article draws an interesting parallel with the rise of GPUs (graphics processing units), which were initially built for rendering video games but became indispensable for training deep learning models. GPUs transformed AI by making it accessible, scalable, and fast.

Similarly, neuromorphic computing may follow a comparable trajectory. But for that to happen, businesses need not just the hardware, but also the software ecosystem to match. Just as NVIDIA’s CUDA platform made GPUs user-friendly for developers, the neuromorphic field is working on building toolkits and platforms that make it easier to program and deploy brain-inspired algorithms.

What does this mean for forward-thinking companies? Early adopters stand to gain a competitive edge. While the technology is not yet plug-and-play, it is maturing rapidly. Major players like Intel, IBM, and startups around the world are actively investing in neuromorphic platforms. As with any emerging tech, the businesses that take the time now to experiment and learn will be better positioned when neuromorphic computing hits mainstream adoption.

Taking the first steps​

So, how should companies get involved? The key is to start small but strategically. Pilot projects - say, embedding a neuromorphic chip into a specific part of a production line or testing it in a wearable prototype - can provide valuable insights into how these systems behave in the wild. Partnering with universities or startups in this space can also help reduce risk and accelerate learning.

Keeping an eye on standardization efforts and software frameworks will also be important. As the field evolves, we’ll see the emergence of platforms and protocols that simplify integration into existing infrastructure, much like what happened with cloud computing a decade ago.

A shift that could redefine smart systems​

In essence, neuromorphic computing is not just another performance boost; it represents a shift in how we think about machine intelligence. Instead of brute-force calculation, it leans toward adaptability, responsiveness, and energy awareness. That’s a major step forward for applications where power, speed, and context-sensitivity matter.

As global industries grapple with the demands of automation, sustainability, and real-time data, neuromorphic systems could offer a smart, lean alternative to today's cloud-heavy AI architectures.

It’s early days, but the momentum is real, and for businesses with the vision to see what's coming next, now’s the time to plug in.
 
  • Like
  • Love
  • Fire
Reactions: 13 users

Diogenese

Top 20
The AFRL project is AF242-D015. It is a direct-to-Phase II project, so it is really about measuring Akida's performance. That involves creating SNN models and then applying test data, exactly as we would have done for NaNose. So, apart from building the models, they will fine-tune the configuration (number of layers, nodes/NPUs per layer, weights per NPU).


https://legacy.www.sbir.gov/node/2585909

Mapping Complex Sensor Signal Processing Algorithms onto Neuromorphic Chips


PHASE II: Using a HWIL (hardware in the loop) approach, awardee(s) will measure the response of the neuromorphic hardware to RF and radar signals in real time. Awardee(s) will validate the performance of the neuromorphic hardware in terms of power consumption and timing latency. Awardee(s) will confirm that the outputs are deterministic and compare favorably to the expected values from the M&S environment.

Given that there was no Phase I, it would be open to conclude that AFRL are already familiar with Akida and want to see how it performs for the particular micro-Doppler radar task.

Doppler is the compression or extension of the reflected wavelength depending on whether the tracked object is approaching or receding. Micro-Doppler is the minor fluctuation in wavelength caused, for example, by a rotating propeller or some other characteristic vibration of the tracked object. So a lot of the work is done by the radar system in separating out the various frequency/wavelength components of the reflected signal. Akida needs to be able to identify the tracked object by comparing these "filtered" reflections with previously recorded reflections, as well as "learning" new reflection patterns.
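
To put rough numbers on that distinction, here is a small sketch with made-up radar parameters (an X-band carrier, a 50 m/s target, an 80 m/s blade tip at 30 rev/s - nothing from the AFRL contract): the bulk Doppler shift sits at one frequency, while the rotating part adds a periodic modulation around it, which is the micro-Doppler signature a classifier would be trained on.

```python
import numpy as np

c = 3e8                      # speed of light, m/s
f_carrier = 10e9             # illustrative X-band radar carrier, Hz
wavelength = c / f_carrier   # 0.03 m

v_body = 50.0                             # radial velocity of the tracked object, m/s
f_bulk = 2 * v_body / wavelength          # classic monostatic Doppler shift, Hz

# Micro-Doppler: a blade tip rotating at f_rot with tip speed v_tip adds a
# sinusoidal velocity component, so the instantaneous shift oscillates around f_bulk.
f_rot, v_tip = 30.0, 80.0                 # rev/s and tip speed, m/s (toy values)
t = np.linspace(0.0, 0.1, 1000)
f_inst = f_bulk + (2 * v_tip / wavelength) * np.cos(2 * np.pi * f_rot * t)

print(f"bulk Doppler shift:      {f_bulk / 1e3:.1f} kHz")
print(f"micro-Doppler excursion: +/- {2 * v_tip / wavelength / 1e3:.1f} kHz, repeating at {f_rot:.0f} Hz")
print(f"instantaneous shift range: {f_inst.min() / 1e3:.1f} to {f_inst.max() / 1e3:.1f} kHz")
```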

Akida can do this on its ear. I reckon BRN will wrap this up in well under a year, and, given the urgency, we can go straight to incorporating the system in micro-Doppler detection systems.
 
  • Like
  • Fire
Reactions: 10 users

Diogenese

Top 20
Well look who's using AI radar:

https://www.rtx.com/news/news-cente...er-ai-ml-powered-radar-warning-receiver-for-4

RTX's Raytheon demonstrates first-ever AI/ML-powered Radar Warning Receiver for 4th generation aircraft​

New technology will enhance aircrew survivability and accelerate AI/ML capability deployment​

February 24, 2025
GOLETA, Calif., Feb. 24, 2025 /PRNewswire/ -- Raytheon, an RTX business (NYSE: RTX), has successfully completed flight testing on the first-ever AI/ML-powered Radar Warning Receiver (RWR) system for a fourth-generation aircraft.
The Cognitive Algorithm Deployment System, known as CADS, combines the latest Embedded Graphics Processing Unit with Deepwave Digital's computing stack, enabling AI models to be integrated into Raytheon's legacy RWR systems for AI/ML processing at the sensor. This integration allows CADS to employ cognitive methods to sense, identify and prioritize threats. With the CADS capability, the enhanced RWR will increase aircrew survivability while facilitating the rapid and cost-effective mass deployment of modern AI/ML capabilities.
"The advantages of AI in defense systems are extensive, and our recent CADS test demonstrates how commercially available products, paired with advanced algorithms and cognitive methods, can help the U.S. and its allies outpace peer threats," said Bryan Rosselli, president of Advanced Products and Solutions at Raytheon. "CADS' ability to quickly process data and run third-party algorithms that prioritize threats, with almost no latency will significantly enhance survivability for military personnel."
Initial CADS hardware and cognitive radar processing capabilities were tested on Raytheon's flight test aircraft. CADS performed successfully during additional flight testing and demonstrations on an F-16 at the Air National Guard's test range near Tucson, Arizona in December. The flight tests incorporate containerized AI/ML techniques from the Georgia Tech Research Institute, Vadum, Inc., and Raytheon's cognitive electronic warfare team.
CADS is expected to begin being procured across multiple platforms in early 2025.


Well that's not us. It's Vadum Inc:

https://vaduminc.com/elementor-12380/

It's directed to hostile frequency-hopping radar, a different problem from micro-Doppler.

Despite the reference to AI, I couldn't find any Vadum patents that indicate expertise in NNs:

US10908261B2 Target identification and clutter mitigation in high resolution radar systems 20171109

(Patent figure: radar signal-processing block diagram from US10908261B2.)

Object detector 116 is an object detection module that identifies solid objects from noisy responses and associates an object centroid location with a current position.

Targets of interest are passed along to object tracker 120, while clutter objects are rejected. Ideally, the targets of interest only are passed to the expensive tracking module (e.g., object tracker 120). Object discriminator 122, present in some high-resolution radars, determines what type of object is being observed. At object discriminator 122, a more detailed identification of the nature of the target of interest objects is determined, such as type Q, type R, type S or type T. Once objects are identified and possibly ranked for priority, they are reported to systems supported by the radar. Object tracker 120 and object discriminator 122 are examples of high-cost signal processing.


In fact Vadum's tech is software:

https://vaduminc.com/elementor-12380/

...

The really cool part about it, is it’s able to hold third-party apps.” That’s because CADS is built using an “open architecture” meant to accommodate new software packages from different vendors, not just Raytheon, as long as they comply with certain technical standards. (The details of those standards, unlike on most “open source” projects, are classified).

Raytheon and the Air National Guard have already flight-tested an ALR-69A upgraded with CADS running three different radar-analyzing algorithms, he said. One was developed by Raytheon, albeit a different division from Baladjanian's; one came from Georgia Tech, a frequent partner of the Pentagon; and a third came from a small firm called Vadum (a spinoff from the University of North Carolina).
 
  • Like
  • Fire
  • Love
Reactions: 8 users

Rach2512

Regular

A repost from Anthony Lewis


From the LinkedIn link.

 
  • Love
  • Like
Reactions: 3 users

Rach2512

Regular

Anyone with way more technical know-how than me fancy replying to this link?

(Screenshot attached.)
 
  • Like
Reactions: 1 users