BRN Discussion Ongoing

IloveLamp

Top 20

 
  • Like
  • Love
  • Fire
Reactions: 8 users

Tothemoon24

Top 20

BrainChip Launches AKD1500 Edge AI Co-Processor for Low-Power Devices

November 5, 2025 · By Quantum News

Forget cloud reliance – the future of artificial intelligence is heading straight to your devices, and BrainChip is powering that shift. The company today unveiled the AKD1500, a groundbreaking co-processor that brings advanced AI capabilities to low-power edge devices like wearables and smart sensors. Achieving an impressive 800 giga operations per second while consuming under 300 milliwatts, the AKD1500 dramatically improves performance and efficiency – a critical step towards truly ubiquitous AI in everything from healthcare and industrial automation to everyday consumer electronics. This isn’t just about faster processing; it’s about enabling adaptive learning and intelligent decision-making directly on the device, unlocking a new era of responsive and private AI experiences.

AKD1500 Performance and Efficiency

BrainChip’s newly unveiled AKD1500 Edge AI co-processor is poised to redefine efficiency in edge computing, delivering a remarkable 800 giga operations per second (GOPS) while consuming under 300 milliwatts of power. This performance benchmark is particularly crucial for battery-powered devices and thermally constrained environments like wearables, smart sensors, and even defense applications – evidenced by early integrations with companies like Parsons, Bascom Hunter, and Onsor Technologies. Unlike traditional AI accelerators reliant on cloud-based training, the AKD1500 leverages BrainChip’s Akida neuromorphic architecture to enable on-chip learning, a key differentiator. Furthermore, compatibility with x86, ARM, and RISC-V platforms through PCIe or serial interfaces, coupled with BrainChip’s MetaTF™ software suite for streamlined TensorFlow/Keras model deployment, promises rapid adoption and reduced development costs for a wide range of AI-powered solutions.
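Taken at face value, the quoted numbers pin down an efficiency figure of merit. A quick sanity check (assuming "operations" are counted exactly as quoted – vendors define this differently, so treat the result as rough):

```python
# Back-of-envelope efficiency from the quoted AKD1500 figures.
# The 800 GOPS and 300 mW numbers are taken as stated; real efficiency
# depends on workload and on how "operations" are counted.
gops = 800            # giga-operations per second (quoted)
power_w = 0.300       # watts (quoted upper bound)

tops = gops / 1000                 # tera-operations per second
tops_per_watt = tops / power_w     # efficiency figure of merit

print(f"{tops:.1f} TOPS at {power_w * 1000:.0f} mW -> {tops_per_watt:.2f} TOPS/W")
```

That works out to roughly 2.7 TOPS/W, which is the kind of figure that matters for the battery-powered and thermally constrained use cases the article lists.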

Seamless Integration and Applications

BrainChip’s newly launched AKD1500 Edge AI co-processor is designed for seamless integration into existing systems, a key factor for rapid deployment across diverse applications. The chip readily connects with x86, ARM, and RISC-V host processors via PCIe or serial interfaces, avoiding the need for complete system overhauls—a significant advantage for upgrading defense, industrial, and enterprise setups. This flexibility extends to embedded microcontrollers, enabling AI capabilities in healthcare, wearables, and consumer electronics. Supported by BrainChip’s MetaTF™ software suite—which streamlines model conversion from standard TensorFlow/Keras formats—and boasting on-chip learning capabilities, the AKD1500 differentiates itself from cloud-dependent AI accelerators. Already designed into solutions with companies like Parsons, Bascom Hunter, and Onsor Technologies, the AKD1500 achieves 800 GOPS while consuming under 300 milliwatts, making it ideal for power- and thermally-constrained environments.

Software and On-Chip Learning

BrainChip’s newly unveiled AKD1500 Edge AI co-processor represents a significant advancement in on-chip learning capabilities for edge devices. Unlike traditional AI accelerators dependent on cloud-based training, the AKD1500 leverages BrainChip’s Akida neuromorphic architecture to facilitate adaptive learning directly on the chip itself. This is enabled by the MetaTF™ software development tools, which streamline the conversion and deployment of standard TensorFlow/Keras models. Achieving 800 giga operations per second (GOPS) while consuming under 300 milliwatts, the AKD1500 offers exceptional power efficiency, making it ideal for battery-powered wearables, smart sensors, and thermally constrained environments. Furthermore, its compatibility with x86, ARM, and RISC-V platforms via PCIe or serial interfaces allows for seamless integration and upgrades within existing systems – from industrial and defense applications to healthcare and consumer electronics – without requiring a complete redesign.

 
  • Like
  • Fire
  • Love
Reactions: 16 users

Rach2512

Regular
  • Like
  • Fire
Reactions: 5 users

manny100

Top 20
Hi Manny,

My guess is that it is adapted to handle TENNs (using the MAC-Lite circuits*), and can do regression.

* DAM**
Hi Dio, I asked ChatGPT-5. The problem with the chatbots is that you pretty much have to know the 'guts' of the answer before you ask. The chatbot then gives a useful summary you can use.
When you do not have a fair idea of the answer beforehand, you can't be sure it's right unless you check thoroughly.
I asked, and here is an extract from what it said:
"Direct answer: The AKD1500 is more advanced in raw performance and integration flexibility, while Akida Gen2 (IP platform) is broader in scope, designed as a licensable architecture for embedding into many SoCs. In terms of chip performance, the AKD1500 is the stronger, mission‑ready device. In terms of design flexibility and ecosystem reach, Akida Gen2 is more versatile."
It's a general statement, and probably not much different from what we might have suspected/guessed given it's for defense and health.
It goes into more detail, but it's not worth reproducing here.
I take it with a great deal of suspicion, as apart from what we and the chatbot know from the recent releases, not much is known about the new 1500. Unless there are comprehensive spec sheets out there – which I doubt – why tell the world 9 months out from delivery?
There have to be significant changes, or they would just have put the initial 'mask' to production with minor changes.
 
Last edited:
  • Like
Reactions: 1 user

GStocks123

Regular
  • Like
  • Fire
Reactions: 5 users

manny100

Top 20

BrainChip Launches AKD1500 Edge AI Co-Processor for Low-Power Devices

Thanks T, great summary
 
  • Like
Reactions: 2 users

Diogenese

Top 20
… Unless there are comprehensive spec sheets out there – which I doubt – why tell the world 9 months out from delivery?
There have to be significant changes, or they would just have put the initial 'mask' to production with minor changes.
Yes. That was my thinking – it shouldn't take that long just to recycle the old 1500. They did do a new tapeout for Akida1 (with MAC-Lite), so I assumed they would use the new layout for the 1500.

Akida 1 (new):

https://brainchip.com/wp-content/uploads/2025/04/Akida1-IP-Product-Brochure-V2.1-1.pdf

  • Self-contained neural processor
  • Scalable fabric of 1-128 nodes
  • Each neural node supports 128 MACs
  • Configurable 50-130K embedded local SRAM
  • DMA for all memory and model operations
  • Multi-layer execution without host CPU
  • Integrates with ARM, RISC-V via AXI bus interface

The 1500 does not include a host CPU (ARM Cortex); it needs an external processor for configuration – hence "co-processor".
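As a rough cross-check, the brochure's fabric figures (a scalable 1-128 nodes, 128 MACs per node) can be related to the quoted 800 GOPS. Neither the AKD1500's actual node count nor its clock appears in any of the quoted material, so this just solves for the clock a hypothetical configuration would need, counting each MAC as 2 operations (a common but not universal convention):

```python
# Rough consistency check, not a published spec: given the quoted 800 GOPS
# and the brochure's 128 MACs per node, what clock would a given node
# count need? Each MAC is counted as 2 operations (multiply + accumulate).
def implied_clock_mhz(gops: float, nodes: int, macs_per_node: int = 128) -> float:
    ops_per_cycle = 2 * nodes * macs_per_node
    return gops * 1e9 / ops_per_cycle / 1e6

for nodes in (4, 8, 16):   # hypothetical configurations
    print(f"{nodes:2d} nodes -> {implied_clock_mhz(800, nodes):6.1f} MHz")
```

Any of these (node count, clock) pairs would be consistent with the headline number, which is why the GOPS figure alone tells us little about what actually changed in the new layout.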

I think it was Tony Lewis who mentioned that originally they couldn't get recurrence to work with TENNs, but they have filed a patent for that now:


US2025209313A1, "Method and System for Implementing Encoder Projection in Neural Networks", filed 2023-12-22


[0054] In some embodiments, the neural network may be configured to perform an encoder projection operation either in a buffer mode or in a recurrent mode. In some embodiments, the buffer mode operation is preferred during training and the recurrent mode operation is preferred during inference for generating processed content based on the input data stream or signal. The preferred operations in the buffer mode and the recurrent mode may be ascertained from the detailed operations of the buffer mode and the recurrent mode as described below with reference to FIGS. 4 to 14.
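The buffer-mode/recurrent-mode equivalence the patent describes can be illustrated on a much simpler stand-in. This toy example (an ordinary causal FIR filter, not BrainChip's actual TENNs math) shows how the same causal computation can be expressed either by recomputing over a sliding buffer or by updating a small recurrent state, with identical outputs:

```python
# Illustration only: the patent excerpt's "buffer mode" vs "recurrent mode"
# distinction, shown on a toy causal FIR filter. Buffer mode recomputes the
# output from a sliding window each step; recurrent mode updates a small
# state incrementally. Both produce the same output sequence.

taps = [0.5, 0.3, 0.2]                 # toy filter coefficients
signal = [1.0, 2.0, 0.0, -1.0, 3.0]    # toy input stream

def buffer_mode(x, h):
    """Keep the last len(h) samples and recompute each output from scratch."""
    out, buf = [], [0.0] * len(h)
    for sample in x:
        buf = buf[1:] + [sample]                      # slide the window
        out.append(sum(c * s for c, s in zip(h, reversed(buf))))
    return out

def recurrent_mode(x, h):
    """Same filter expressed as an incremental state update (delay line)."""
    out, state = [], [0.0] * (len(h) - 1)
    for sample in x:
        y = h[0] * sample + sum(c * s for c, s in zip(h[1:], state))
        out.append(y)
        state = [sample] + state[:-1]                 # shift the state
    return out

print(buffer_mode(signal, taps))
print(recurrent_mode(signal, taps))
```

Training over buffered windows while running inference with the cheap incremental update is exactly the split the excerpt describes: one function, two execution modes.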

 
  • Like
  • Fire
Reactions: 6 users

Attachments

  • Low_power_indirect_time_of_flight_near_field_LiDAR.pdf
    1.5 MB · Views: 22
  • Love
  • Fire
Reactions: 2 users

Diogenese

Top 20

BrainChip Launches AKD1500 Edge AI Co-Processor for Low-Power Devices


Looks pretty deserted - Is that a triffid in the corner?
 
  • Haha
Reactions: 3 users

Frangipani

Top 20


The EU-funded project SpikeHERO (Spike Hybrid Edge Computing for Robust Optoelectrical Signal Processing) does not involve Akida as the electrical SNN chip.

Instead, it will be using the SENNA chip developed by consortium lead Fraunhofer IIS and Fraunhofer EMFT, whose teams “are currently working on the second generation, which promises even higher spike rates with lower energy consumptions”.



 
  • Like
  • Fire
Reactions: 5 users

IloveLamp

Top 20
  • Like
Reactions: 3 users

manny100

Top 20
Parsons and Bascom Hunter are the innovators right now.
Our aim: over time, neuromorphic co-processors like Akida could become as standard in defense systems as GPUs are today.
 
  • Like
  • Fire
  • Wow
Reactions: 5 users