BRN Discussion Ongoing

CHIPS

Regular
I asked ChatGPT your question, and I feel its answer is a much more balanced synopsis.

Prompt: Please compare Snapdragon with Akida. Who is the winner?


Snapdragon (by Qualcomm) and Akida (by BrainChip) are very different types of processors designed for different purposes. Here’s a detailed comparison:


1. Purpose and Architecture


  • Snapdragon: Snapdragon is a general-purpose system-on-chip (SoC) designed for mobile devices, laptops, and other consumer electronics. It includes CPU, GPU, AI engine, ISP (image signal processor), modem, and more.
  • Akida: Akida is a neuromorphic processor designed specifically for low-power AI and edge computing applications. It is optimized for event-based processing and on-device learning.

2. Performance


  • Snapdragon: Offers high performance for a wide range of applications, including gaming, photography, and AI inference. It is used in flagship smartphones, tablets, and laptops.
  • Akida: Specializes in ultra-low-power AI tasks, such as pattern recognition, anomaly detection, and sensor data processing. It is not designed for general-purpose computing or high-end mobile tasks.

3. AI Capabilities


  • Snapdragon: Uses a dedicated AI engine (Hexagon DSP, Tensor Accelerator, and GPU) to perform AI-based tasks like image processing, voice recognition, and natural language processing.
  • Akida: Uses a neuromorphic approach, mimicking the human brain’s event-driven processing for real-time, low-power AI computations. It is highly efficient for edge AI applications.

4. Power Efficiency


  • Snapdragon: Optimized for mobile battery efficiency but consumes more power than Akida when running AI tasks.
  • Akida: Extremely power-efficient, as it uses event-based processing to reduce unnecessary computations. It is ideal for battery-powered and always-on AI applications.

5. Use Cases


  • Snapdragon: Smartphones, tablets, laptops, AR/VR devices, automotive infotainment, etc.
  • Akida: Edge AI applications such as security cameras, medical devices, industrial sensors, and IoT devices.

Who is the Winner?


  • If you need a powerful general-purpose processor with strong AI capabilities, Snapdragon is the better choice.
  • If you need an ultra-low-power AI processor for edge computing and event-based learning, Akida is the winner.

It depends on the application! If you’re looking at mobile performance, Snapdragon wins. If you need efficient, real-time AI at the edge, Akida is the clear choice. Let me know your use case, and I can refine the comparison further! 🚀
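The event-driven point in the comparison above can be illustrated with a toy sketch (hypothetical numbers, not measured Akida or Snapdragon figures): a frame-based engine runs the model on every input, while an event-based engine only computes when the input actually changed.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 sensor "frames"; only 10 of them contain a change (an event).
frames = np.zeros((100, 64))
event_idx = rng.choice(100, size=10, replace=False)
frames[event_idx] = rng.random((10, 64))

def dense_inference(frames):
    """Frame-based engine: runs the model on every frame."""
    ops = 0
    for f in frames:
        ops += f.size  # say, one multiply-accumulate per input value
    return ops

def event_driven_inference(frames):
    """Event-based engine: skips frames with no activity."""
    ops = 0
    for f in frames:
        if np.any(f):  # only compute when an event occurred
            ops += f.size
    return ops

dense = dense_inference(frames)
sparse = event_driven_inference(frames)
print(dense, sparse, f"{100 * (1 - sparse / dense):.0f}% ops saved")
# → 6400 640 90% ops saved
```

With 10% of frames carrying events, the event-driven path does 10% of the work; that proportionality to input activity, rather than to time, is where the power saving in the comparison comes from.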

Yes, I like yours better. I never liked Grok anyway.
Grok produced a lot of words, but I wanted a shorter overview. Maybe I should have told it so. That's how ChatGPT did it.
 
  • Like
  • Fire
Reactions: 2 users
Pico wearables are likely to be smaller and much more power efficient than Snapdragon, which is a full SoC.
Before Pico, the smallest iteration of AKIDA was one node (1 to 2 nodes for AKIDA E), which consists of 4 NPUs.

Pico is "one" NPU.

For comparison, AKD1000 contains 20 nodes, or 80 NPUs.
(It doesn't have the benefit of TENNs though, whatever that exactly is..)

Its use cases are completely different.
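The configuration sizes quoted above follow simple multiples, using 4 NPUs per node as the thread states for the Akida architecture (treat the Pico figure as this thread's claim rather than an official spec):

```python
NPUS_PER_NODE = 4  # per the thread: each Akida node holds 4 NPUs

def npus(nodes):
    """Total neural processing units for a given Akida node count."""
    return nodes * NPUS_PER_NODE

akida_e_min = npus(1)  # smallest pre-Pico config: 1 node -> 4 NPUs
akd1000 = npus(20)     # AKD1000: 20 nodes -> 80 NPUs
pico = 1               # Pico: a single NPU, below one full node

print(akida_e_min, akd1000, pico)  # → 4 80 1
```

The arithmetic is consistent with the AKD1000 figure in the post (20 nodes × 4 NPUs = 80 NPUs), and shows how far below the previous minimum configuration Pico sits.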
 
Last edited:
  • Like
Reactions: 5 users

manny100

Regular
Pico isn't "smart" enough for anything but very simple tasks; there is absolutely no comparison to something like Snapdragon 8.
It's not as simple as that, because no chip is the best at everything.
For always-on types of wearables, AKIDA is a great choice because of its low power.
There are health, IoT and even defence applications for this.
It depends on the specified use and user requirements.
Pico is a great choice for specific tasks, e.g. gesture recognition, wake-word detection, or irregularity detection such as heart issues.
It's a great choice if the size of the device is very important, e.g. earbuds, a ring, a skin patch or even discreet military sensors.
Low power makes it good in extreme conditions, e.g. military use.
Pico is best at single or minimal tasks, and there is plenty of demand for that.
Obviously, if you require wearables with display apps, power is not an issue and complex models are needed, Snapdragon would be a better choice.
No one chip is best for every situation.
 
  • Like
  • Love
Reactions: 8 users

manny100

Regular
Also if you have a specific task that requires low power and on chip learning Pico is the best choice.
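On-chip learning in this class of edge device is typically a lightweight one-shot scheme rather than full backpropagation. As an illustration only (this is not BrainChip's actual algorithm), a nearest-class-mean classifier can "learn" a new class from a single example and refine it incrementally:

```python
import numpy as np

class EdgeClassifier:
    """Toy nearest-class-mean classifier: learns a class from one example."""

    def __init__(self):
        self.prototypes = {}  # label -> stored feature vector

    def learn(self, label, features):
        # One-shot on-device learning: store or blend a class prototype.
        if label in self.prototypes:
            self.prototypes[label] = 0.5 * (self.prototypes[label] + features)
        else:
            self.prototypes[label] = features.copy()

    def predict(self, features):
        # Classify by nearest stored prototype (Euclidean distance).
        return min(self.prototypes,
                   key=lambda l: np.linalg.norm(self.prototypes[l] - features))

clf = EdgeClassifier()
clf.learn("normal", np.array([1.0, 0.0]))
clf.learn("irregular", np.array([0.0, 1.0]))
print(clf.predict(np.array([0.9, 0.1])))  # → normal
```

The appeal for wearables is that adding a class costs one vector store and no cloud round-trip, which is why low power and on-chip learning tend to be mentioned together.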
 
  • Like
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Just a quick summary of some of the main points contained in my last two posts, bearing in mind my questions were focussed specifically on comparisons between AKIDA and Snapdragon (Snapdragon 8 Gen 3 and Snapdragon 8 Gen 4).

- AKIDA versus Snapdragon: 1B+ parameter applications entirely on-device, without any internet connectivity
- AKIDA versus Snapdragon: 1B+ parameter GenAI applications running on a watch battery

Akida's power draw is in the microwatt-to-milliwatt range versus Snapdragon at 2-10 watts.
Akida offers on-device learning (real-time learning); Snapdragon doesn't.
Akida is event-driven (active only when needed); Snapdragon isn't.
Akida's battery runtime is hours to days versus Snapdragon's minutes to one hour.
Akida never requires cooling, whereas Snapdragon sometimes requires active cooling.
Akida can handle 1 billion parameters on-device, unconnected, for GenAI applications on a watch battery; Snapdragon cannot.
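The runtime contrast in the list above follows directly from battery arithmetic. As a sanity check using a common CR2032 coin cell (~225 mAh at 3 V, roughly 0.675 Wh) and the draw figures quoted in this thread:

```python
def runtime_hours(battery_wh, draw_watts):
    """Idealised runtime: battery energy divided by average power draw."""
    return battery_wh / draw_watts

CR2032_WH = 0.225 * 3.0  # ~225 mAh coin cell at 3 V ≈ 0.675 Wh

akida_w = 1e-3       # ~1 mW, the milliwatt-range claim above
snapdragon_w = 5.0   # midpoint of the 2-10 W figure above
# Note: a real coin cell cannot actually sustain a multi-watt draw at all;
# this is purely an idealised energy calculation.

print(f"{runtime_hours(CR2032_WH, akida_w):.0f} h")            # ~675 h, roughly 4 weeks
print(f"{runtime_hours(CR2032_WH, snapdragon_w) * 60:.1f} min")  # ~8.1 min
```

So even before thermal limits are considered, three orders of magnitude in power draw translate directly into the "hours to days" versus "minutes" runtime gap claimed above.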

Don't blame me, blame the messenger - ChatGPT!

Happy to see what others discover.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 25 users

Frangipani

Top 20


This poster, titled GRAIN - Radiation-Tolerant Edge AI and presented by Kenneth Östberg, one of its two co-authors (the other being Daniel Andersson), during the “RISC-V in Space” workshop in Gothenburg on Thursday…




…finally reveals what NEURAVIS stands for - the name of that R&T project, which ESA awarded to the five consortium partners Airbus Toulouse, Airbus Ottobrunn, BrainChip, Frontgrade Gaisler and Neurobus in mid-2024 (see the July 2024 LinkedIn post by Airbus Space Project Manager Jérémy Lebreton below):

Neuromorphic Evaluation of Ultra-low-power Rad-hard Acceleration for Vision Inferences in Space.

The poster also provides more information with regard to the use cases currently being explored in the NEURAVIS project, although I’m afraid I couldn’t decipher everything due to the small print - maybe someone with eagle eyes or a magic tool to blow up the photo and unblur the small print can add to or correct what I’ve gathered so far:

1. Moon landing
Use Case #1: Vision-Based Navigation for Lunar Lander

Also see Alf Kuchenbuch’s recent comment on Argonaut, ESA’s lunar lander programme:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-452257



2. Debris detection/collect (?)
Use Case #2: Monitoring (?) Building Block for In-orbit Maintenance


3. Docking


4. Object ? (looks like “simulation”, but appears to be a longer word?)



In addition, the poster lists four “Application scenarios” for GRAIN’s Radiation-Tolerant Edge AI:

1. Remote Terminal Unit
2. Stand-alone Controller
3. Near-edge processing unit
4. Auxiliary data-processing module

Lots of small print to decipher here as well! 🔍








If I understand the above post correctly, we have yet to hear about what suggestion the NEURAVIS proposal has for AKD1500 (“BrainChip is proud that Airbus selected Akida for both COTS chips and IP in their proposal. ESA awarded the Airbus “NEURAVIS” proposal, including Akida in the Akida 1500 chip and on an FPGA together with Frontgrade Gaisler’s NOEL-V processor.”).

Whereas the underlined part appears to refer to Frontgrade Gaisler’s newly revealed GR801 SoC that will incorporate Akida 1.0 IP (greatly benefitting the work of Airbus Toulouse computer vision experts such as Jérémy Lebreton (project lead) and Roland Brochard, as can be inferred from the GRAIN poster’s four listed use cases), there has to be another specific proposal by Airbus for how to utilise our COTS chip AKD1500, then.

So I presume Airbus Ottobrunn and Neurobus might be the consortium partners currently collaborating on that second part of the NEURAVIS proposal?






 
  • Like
  • Fire
  • Love
Reactions: 30 users
It's not as simple as that, because no chip is the best at everything.
For always-on types of wearables, AKIDA is a great choice because of its low power.
There are health, IoT and even defence applications for this.
It depends on the specified use and user requirements.
Pico is a great choice for specific tasks, e.g. gesture recognition, wake-word detection, or irregularity detection such as heart issues.
It's a great choice if the size of the device is very important, e.g. earbuds, a ring, a skin patch or even discreet military sensors.
Low power makes it good in extreme conditions, e.g. military use.
Pico is best at single or minimal tasks, and there is plenty of demand for that.
Obviously, if you require wearables with display apps, power is not an issue and complex models are needed, Snapdragon would be a better choice.
No one chip is best for every situation.
I'm not disagreeing, or saying that Pico doesn't have a multitude of applications, as you suggest.

Our customers actually "requested" something smaller than AKIDA E.

It's the comparison with Snapdragon 8.

The types of devices that Pico goes into will be mass-scale.
 
  • Like
Reactions: 5 users

Frangipani

Top 20
  • Like
  • Fire
  • Love
Reactions: 27 users
Just a quick summary of some of the main points contained in my last two posts, bearing in mind my questions were focussed specifically on comparisons between AKIDA and Snapdragon (Snapdragon 8 Gen 3 and Snapdragon 8 Gen 4).

- AKIDA versus Snapdragon: 1B+ parameter applications entirely on-device, without any internet connectivity
- AKIDA versus Snapdragon: 1B+ parameter GenAI applications running on a watch battery

Akida's power draw is in the microwatt-to-milliwatt range versus Snapdragon at 2-10 watts.
Akida offers on-device learning (real-time learning); Snapdragon doesn't.
Akida is event-driven (active only when needed); Snapdragon isn't.
Akida's battery runtime is hours to days versus Snapdragon's minutes to one hour.
Akida never requires cooling, whereas Snapdragon sometimes requires active cooling.
Akida can handle 1 billion parameters on-device, unconnected, for GenAI applications on a watch battery; Snapdragon cannot.

Don't blame me, blame the messenger - ChatGPT!

Happy to see what others discover.
"Akida can handle 1 billion parameters on-device, unconnected, for GenAI applications on a watch battery; Snapdragon cannot"

What AKIDA "specification" does this assume? Presumably AKIDA 2.0 IP (at what node count?).

I'm pretty sure Snapdragon 8 is a defined size?

Whereas "AKIDA" is not.
 
  • Like
Reactions: 3 users

MDhere

Top 20
Just a quick summary of some of the main points contained in my last two posts, bearing in mind my questions were focussed specifically on comparisons between AKIDA and Snapdragon (Snapdragon 8 Gen 3 and Snapdragon 8 Gen 4).

- AKIDA versus Snapdragon: 1B+ parameter applications entirely on-device, without any internet connectivity
- AKIDA versus Snapdragon: 1B+ parameter GenAI applications running on a watch battery

Akida's power draw is in the microwatt-to-milliwatt range versus Snapdragon at 2-10 watts.
Akida offers on-device learning (real-time learning); Snapdragon doesn't.
Akida is event-driven (active only when needed); Snapdragon isn't.
Akida's battery runtime is hours to days versus Snapdragon's minutes to one hour.
Akida never requires cooling, whereas Snapdragon sometimes requires active cooling.
Akida can handle 1 billion parameters on-device, unconnected, for GenAI applications on a watch battery; Snapdragon cannot.

Don't blame me, blame the messenger - ChatGPT!

Happy to see what others discover.
I read this as one sentence -
Qualcomm wants Brainchip.
 
  • Like
  • Haha
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
"Akida can handle 1 billion parameters on-device, unconnected, for GenAI applications on a watch battery; Snapdragon cannot"

What AKIDA "specification" does this assume? Presumably AKIDA 2.0 IP (at what node count?).

I'm pretty sure Snapdragon 8 is a defined size?

Whereas "AKIDA" is not.




If you look at the video in the link below, Tony Lewis describes how it can run on a watch battery at approximately the 2:05 mark.


 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 17 users

7für7

Top 20
  • Like
Reactions: 7 users



If you look at the video in the link below, Tony Lewis describes how it can run on a watch battery at approximately the 2:05 mark.



AKIDA 2.0 IP (E, S and P) is available in configurations from 1 to 256 nodes.

So without stating the size of the network on the FPGA, it's a bit hard to make performance comparisons.

I guess it doesn't "really" matter, as long as it can do it, and it's likely to be in the P range (8 to 256 nodes).

But it would be nice to know how many nodes they are actually using to get that performance.
 
Last edited:
  • Like
Reactions: 2 users

manny100

Regular
I read this as one sentence -
Qualcomm wants Brainchip.
Who knows.
One thing we do know is that Snapdragon is not neuromorphic.
It is not event-based, but has workarounds which still chew up power.
It does not have on-chip learning either.
 
  • Like
  • Love
  • Fire
Reactions: 7 users

itsol4605

Regular
Is there any plausible reason or compelling argument why Qualcomm should "not" use Akida in their chips in the near future?
 

itsol4605

Regular
Is there any plausible reason or compelling argument why Qualcomm should "not" use Akida in their chips in the near future?
maybe ChatGPT or ... knows more...
 
Might have to try and find some spare cash today and pick up a bargain.
 
  • Like
  • Haha
  • Thinking
Reactions: 6 users

Papacass

Regular
Hey Frangipani,
Great research! Are you inferring that you believe AKD1500 has been produced in silicon?
I'm only going from memory now, but information was circulating here well over 12 months ago that AKD1500 was going to be produced in silicon by BrainChip, but that an anonymous potential partner/customer allegedly halted that production because they wanted to do it themselves. I think @Diogenese was sniffing that trail. Maybe I've got it all wrong in my old age, but could Airbus be the company behind AKD1500 being produced in silicon?


This poster, titled GRAIN - Radiation-Tolerant Edge AI and presented by Kenneth Östberg, one of its two co-authors (the other being Daniel Andersson), during the “RISC-V in Space” workshop in Gothenburg on Thursday…



…finally reveals what NEURAVIS stands for - the name of that R&T project, which ESA awarded to the five consortium partners Airbus Toulouse, Airbus Ottobrunn, BrainChip, Frontgrade Gaisler and Neurobus in mid-2024 (see the July 2024 LinkedIn post by Airbus Space Project Manager Jérémy Lebreton below):

Neuromorphic Evaluation of Ultra-low-power Rad-hard Acceleration for Vision Inferences in Space.

The poster also provides more information with regard to the use cases currently being explored in the NEURAVIS project, although I’m afraid I couldn’t decipher everything due to the small print - maybe someone with eagle eyes or a magic tool to blow up the photo and unblur the small print can add to or correct what I’ve gathered so far:

1. Moon landing
Use Case #1: Vision-Based Navigation for Lunar Lander

Also see Alf Kuchenbuch’s recent comment on Argonaut, ESA’s lunar lander programme:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-452257


2. Debris detection/collect (?)
Use Case #2: Monitoring (?) Building Block for In-orbit Maintenance


3. Docking


4. Object ? (looks like “simulation”, but appears to be a longer word?)



In addition, the poster lists four “Application scenarios” for GRAIN’s Radiation-Tolerant Edge AI:

1. Remote Terminal Unit
2. Stand-alone Controller
3. Near-edge processing unit
4. Auxiliary data-processing module

Lots of small print to decipher here as well! 🔍







If I understand the above post correctly, we have yet to hear about what suggestion the NEURAVIS proposal has for AKD1500 (“BrainChip is proud that Airbus selected Akida for both COTS chips and IP in their proposal. ESA awarded the Airbus “NEURAVIS” proposal, including Akida in the Akida 1500 chip and on an FPGA together with Frontgrade Gaisler’s NOEL-V processor.”).

Whereas the underlined part appears to refer to Frontgrade Gaisler’s newly revealed GR801 SoC that will incorporate Akida 1.0 IP (greatly benefitting the work of Airbus Toulouse computer vision experts such as Jérémy Lebreton (project lead) and Roland Brochard, as can be inferred from the GRAIN poster’s four listed use cases), there has to be another specific proposal by Airbus for how to utilise our COTS chip AKD1500, then.

So I presume Airbus Ottobrunn and Neurobus might be the consortium partners currently collaborating on that second part of the NEURAVIS proposal?
 
  • Like
  • Fire
  • Thinking
Reactions: 7 users
Hey Frangipani,
Great research! Are you inferring that you believe AKD1500 has been produced in silicon?
I'm only going from memory now, but information was circulating here well over 12 months ago that AKD1500 was going to be produced in silicon by BrainChip, but that an anonymous potential partner/customer allegedly halted that production because they wanted to do it themselves. I think @Diogenese was sniffing that trail. Maybe I've got it all wrong in my old age, but could Airbus be the company behind AKD1500 being produced in silicon?
Your memory is confusing this with what happened to the production of the AKD2000 chips (which got to the "taping out" stage).

AKD1500 was produced by GlobalFoundries and has been utilised by Bascom Hunter.


BrainChip Receives First Shipment of AKD1500 Chips in Silicon from Technology Partner GlobalFoundries (August 2023)



In late December 2024, BrainChip signed a US$100,000 contract with U.S. defence contractor Bascom Hunter for the sale and support of AKD1500 chips for full-scale evaluation of commercial products, which will provide significant improvements over traditional CPU, GPU, and TPU processors and are considered optimal for low Size, Weight, and Power (SWaP) Machine Learning (ML) applications.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 11 users

Papacass

Regular
Your memory is confusing this with what happened to the production of the AKD2000 chips (which got to the "taping out" stage).

AKD1500 was produced by GlobalFoundries and has been utilised by Bascom Hunter.


BrainChip Receives First Shipment of AKD1500 Chips in Silicon from Technology Partner GlobalFoundries (August 2023)



In late December 2024, BrainChip signed a US$100,000 contract with U.S. defence contractor Bascom Hunter for the sale and support of AKD1500 chips for full-scale evaluation of commercial products, which will provide significant improvements over traditional CPU, GPU, and TPU processors and are considered optimal for low Size, Weight, and Power (SWaP) Machine Learning (ML) applications.
Cheers. Knew I was getting old.
 
  • Like
Reactions: 4 users