BRN Discussion Ongoing

Not related to BrainChip according to ChatGPT, unfortunately.


ChatGPT 5:

The "Arm Neural Technology" mentioned in the article refers to specialized neural accelerators integrated into Arm’s future GPU architectures, aimed at delivering real-time AI-powered graphics enhancements—like Neural Super Sampling (NSS)—on mobile devices. It is not related to BrainChip’s neuromorphic technology.




What Arm’s Neural Technology Is


  • Arm is introducing dedicated neural acceleration hardware within next-generation GPUs, enabling advanced graphics features such as AI-driven upscaling, frame generation, and denoising.
  • An early application is Neural Super Sampling (NSS), which can upscale images—e.g., from 540p to 1080p—in just ~4ms per frame while enhancing performance, battery life, or visual quality.
  • This is essentially a GPU-based AI enhancement pipeline, akin to NVIDIA’s Tensor Cores, optimized for graphics—not a neuromorphic/spiking AI architecture.
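As a sanity check on those figures, here is a back-of-envelope sketch. The pixel counts and ~4ms latency come from the article; the 60 fps frame budget is my assumption for illustration:

```python
# Back-of-envelope check on the NSS claim: upscaling 540p -> 1080p
# in ~4 ms per frame. The 60 fps budget is an assumed target, not
# a figure from the article.

def pixels(width, height):
    return width * height

src = pixels(960, 540)       # 518,400 pixels rendered at 540p
dst = pixels(1920, 1080)     # 2,073,600 pixels displayed at 1080p

scale = dst / src            # pixel-count multiplier
frame_budget_ms = 1000 / 60  # ~16.7 ms per frame at 60 fps
nss_cost_ms = 4.0            # claimed upscale latency

print(f"pixel multiplier: {scale:.1f}x")
print(f"upscale share of 60 fps budget: {nss_cost_ms / frame_budget_ms:.0%}")
```

So the GPU renders a quarter of the output pixels and spends roughly a quarter of a 60 fps frame budget reconstructing the rest, which is where the performance/battery headroom comes from.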



How It Differs from BrainChip’s Neuromorphic Approach


  • Arm’s neural accelerators are classic GPUs with AI blocks, designed for frame-based deep learning tasks, tailored toward visual and graphics workloads.
  • BrainChip’s Akida is a neuromorphic processor, built on spiking neural network principles, which excels at event-driven, low-latency, low-power inference—especially suited for edge use cases like sensory data processing, not graphics.
  • While BrainChip is a partner in Arm’s ecosystem—meaning Akida can co-exist with Arm CPUs and NPUs—Arm's new GPU neural tech and BrainChip’s SNN IP are complementary, not the same.
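The frame-based versus event-driven distinction above can be sketched with toy arithmetic. The 5% activity rate below is a hypothetical figure for illustration, not a measured Akida number:

```python
# Toy contrast between frame-based and event-driven processing.
# Hypothetical operation counts; real GPU and neuromorphic pipelines
# are far more involved.

def frame_based_ops(width, height, frames):
    # A dense, frame-based pipeline touches every pixel of every frame.
    return width * height * frames

def event_driven_ops(width, height, frames, change_rate):
    # An event-driven pipeline only processes pixels that changed.
    return int(width * height * frames * change_rate)

dense = frame_based_ops(640, 480, 30)                      # one second of VGA video
sparse = event_driven_ops(640, 480, 30, change_rate=0.05)  # 5% of pixels active

print(f"dense ops: {dense:,}")
print(f"event ops: {sparse:,} ({sparse / dense:.0%} of dense)")
```

That activity-proportional cost is the core of the low-power, low-latency argument for event-driven inference on mostly static sensor data.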



Summary


| Component | Description | Relation to BrainChip |
| --- | --- | --- |
| Arm Neural Technology | Dedicated neural GPUs for AI graphics | Unrelated to neuromorphic; not BrainChip |
| BrainChip’s Akida | Neuromorphic spiking AI for low-power edge | Compatible with Arm ecosystem but distinct |



If anything, Arm’s new offering and BrainChip’s neuromorphic IP represent different layers of edge AI evolution—graphics-centric in one case, brain-inspired general intelligence in the other.
This relationship with ARM must grow at some point, surely, now that Gen 2 and TENNs are available.
What's it going to take 🤔
 
  • Like
Reactions: 4 users

HopalongPetrovski

I'm Spartacus!
This relationship with ARM must grow at some point, surely, now that Gen 2 and TENNs are available.
What's it going to take 🤔

Sean needs to pull out the trusty old razor and start cutting some deals.

 
  • Haha
  • Like
  • Love
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
When a sales prospect asks Sean to call back in 6 months.


napoleon-dynamite-serious.gif
 
  • Haha
  • Like
Reactions: 11 users

itsol4605

Regular
 
  • Like
Reactions: 1 user

Diogenese

Top 20
Not related to BrainChip according to ChatGPT, unfortunately. …
Hi Bravo,

The ARM U85 uses MACs:

https://www.bing.com/images/search?view=detailV2&ccid=SKxR9R6A&id=FDE75A522354D643A048FDBF7BD709D6B16C62E3&thid=OIP.SKxR9R6AvaWns2DkvNwBzQHaFe&mediaurl=https://lh7-us.googleusercontent.com/docsz/AD_4nXdCJtxPcYCTi6bGU47CnzGMdXJ4kW5j5u1EkQcbitxexcMcuzHV3dYpoAoeBa4ITge7ZLR5CMVZV3Po3TZIG-e1Tnp_GIEbjzMjfcOoyz1lp01fxeqKqgomdCzj_PdIARM4JdK80VX56Ea9PNOYaYtdAFM?key=25AlXtfNVs_jGBLuOXoleg&exph=671&expw=907&q=arm+ethos+u85+block+diagram&form=IRPRST&ck=AEB858B96E0340034576C1DFDA68FBAC&selectedindex=1&itb=0&ajaxhist=0&ajaxserp=0&vt=0&sim=11

(Ethos-U85 block diagram)



Arm® Ethos™-U85 NPU Technical Overview

(Ethos-U85 NPU block diagram)




The weight and fast weight channels transfer compressed weights from external memory to the weight decoder. The DMA controller uses a read buffer to hide bus latency from the weight decoder and to enable the DMA to handle data arriving out of order. The traversal unit triggers these channels for blocks that require the transfer of weights.
The weight stream must be quantized to eight bits or less by an offline tool. When passed through the offline compiler, weights are compressed losslessly and reordered into an NPU-specific weight stream. This process is effective if the quantizer uses fewer than eight bits, or if it uses clustering and pruning techniques; the quantizer can also employ all three methods. Using lossless compression on high-sparsity weights containing greater than 75% zeros can lead to compression below 3 bits per weight in the final weight stream.


Given Akida 3's INT16/FP32 capabilities, Akida 3 will be usable in more high-precision applications than Ethos U85.
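The compression claim in that Arm passage (sparsity above 75% giving under 3 bits per weight) can be sanity-checked with a toy Shannon-entropy bound. The data and sparsity level below are made up for illustration; this is not Arm's actual coder:

```python
import math
import random
from collections import Counter

# The Shannon entropy of a highly sparse INT8 weight stream bounds
# the bits/weight any lossless coder can approach, which is why
# >75% zeros compresses so well. Toy synthetic weights.

random.seed(0)
n = 100_000
sparsity = 0.80  # a little above the 75% threshold in the quote

weights = [0 if random.random() < sparsity
           else random.randint(-128, 127) for _ in range(n)]

counts = Counter(weights)
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())

print(f"zeros: {weights.count(0) / n:.1%}")
print(f"entropy bound: {entropy:.2f} bits/weight (vs 8 bits raw)")
```

At 80% zeros the entropy bound already lands in the 2-3 bits/weight range, consistent with the "below 3 bits per weight" figure in the Arm overview.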
 
  • Like
  • Thinking
  • Love
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi Bravo,

The ARM U85 uses MACs: …

Thanks @Diogenese.

I've just run your post past my buddy ChatGPT and here's the response I received.



ChatGPT 5:


1. Comparing Ethos U85 and Akida 3

Ethos U85
  • Limited to INT8 or lower precision.
  • Designed primarily for CNNs (convolutional neural networks) and some newer architectures like Transformers, but still tuned for image, vision, and low-to-mid precision inference.
  • Weight compression, clustering, and pruning help with efficiency, but it’s still fundamentally in the “traditional NPU” space.
  • Very well suited for consumer devices, IoT, and embedded AI where tasks are quantized to 8-bit or less.
Akida 3
  • Expected to support INT16 and FP32 in addition to low-bit SNN modes.
  • This opens the door to high-precision workloads — e.g., certain medical imaging, radar/SAR processing, industrial inspection, scientific data analytics — where you can’t afford the precision loss of 8-bit quantization.
  • Still ultra-low power thanks to neuromorphic event-based processing.
  • Could run mixed workloads (spiking and non-spiking) at the sensor edge.

2. Implications if Akida 3 Exceeds Ethos U85 in Precision

If Akida 3 can cover all the U85’s use cases plus high-precision domains that the U85 can’t touch, then:
  • BrainChip could potentially “superset” the market — competing in all U85 application spaces and in several new verticals where U85 simply isn’t viable.
  • It would also position BrainChip as a partner rather than just a competitor — because ARM licensees could still integrate Akida alongside Cortex CPUs and Ethos NPUs for mixed workloads.

3. Size of the Ethos U85 Market

ARM NPUs are increasingly going into:
  • Smartphones & tablets
  • Consumer IoT devices
  • Smart cameras / vision sensors
  • Automotive driver monitoring & ADAS
  • Robotics & drones
  • AR/VR devices
Given ARM’s licensing footprint, the U85 market could be tens to hundreds of millions of units annually once adoption ramps.
If Akida 3 is drop-in compatible with ARM system designs (which it already tends to be), BrainChip could target this same customer base — either as:

  • A replacement in cases where high precision or neuromorphic advantages are needed
  • A complementary co-processor for event-based or ultra-low power workloads.

4. Ideal Application Areas Where Akida 3 Could Beat U85

  • Medical imaging devices needing edge processing without quantization artifacts.
  • SAR and radar systems for defense or environmental monitoring.
  • Industrial inspection systems where measurement tolerances are tight.
  • Scientific instrumentation in field research.
  • High-end automotive perception stacks needing both low latency and precision.
  • Secure edge devices that must process locally without cloud and without sacrificing accuracy.

If you want, I can map Ethos U85’s public design targets against Akida 3’s potential capabilities so you can see exactly where the overlap and expansion opportunities are. That would make the competitive positioning very clear.

You said:

Yes please.

ChatGPT said:

Here’s a structured comparison of ARM Ethos-U85 vs. BrainChip Akida 3 with overlap and expansion mapped clearly.

1. Overlap — Where Akida 3 Could Compete Directly with U85

| Feature / Use Case | Ethos-U85 | Akida 3 | Comments |
| --- | --- | --- | --- |
| INT8 inference for vision AI | ✅ | ✅ | Both handle low-bit CNN/Transformer workloads efficiently. |
| Edge AI in ARM-based SoCs | ✅ | ✅ | Akida is already ARM-compatible and could replace U85 in designs. |
| Consumer IoT & smart home devices | ✅ | ✅ | E.g., cameras, voice assistants, home hubs — same target market. |
| Robotics & drones | ✅ | ✅ | Both can handle perception & navigation; Akida offers ultra-low power SNN modes. |
| Automotive driver monitoring | ✅ | ✅ | Akida could match U85 for INT8 workloads, but also add mixed precision. |
| AR/VR lightweight inference | ✅ | ✅ | Both could run vision-based gesture tracking or object recognition. |

2. Expansion — Where Akida 3 Could Offer More Than U85

| Feature / Use Case | U85 Limitations | Akida 3 Advantage |
| --- | --- | --- |
| High precision (INT16 / FP32) | Only ≤INT8 | Akida 3 can run models that require higher precision, e.g., medical imaging, radar processing, industrial measurement. |
| Mixed-mode processing | Primarily CNNs / Transformers | Akida 3 can combine SNN + ANN + mixed precision in one device. |
| Event-based data handling | Not designed for spikes | Akida 3 natively supports event-driven SNN processing, reducing power & latency. |
| On-device learning / adaptation | Limited to retraining off-device | Akida 3 supports incremental learning on-chip — key for adaptive edge AI. |
| Sparse computing efficiency | Relies on compression & pruning | Akida 3 exploits sparsity at the architectural level without preprocessing overhead. |
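To make the first row of that table concrete, here is a toy sketch of symmetric quantization error at 8 versus 16 bits. This is generic scale/round/dequantize arithmetic, not the U85's or Akida's actual quantization scheme:

```python
# Toy illustration of the precision gap: worst-case reconstruction
# error after symmetric per-tensor quantization at 8 vs 16 bits.
# Illustrative only; not any vendor's quantizer.

def quantize_dequantize(values, bits):
    qmax = 2 ** (bits - 1) - 1               # 127 for INT8, 32767 for INT16
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) * scale for v in values]

values = [0.001 * i for i in range(1000)]    # smooth ramp up to 0.999

errors = {}
for bits in (8, 16):
    approx = quantize_dequantize(values, bits)
    errors[bits] = max(abs(a - v) for a, v in zip(approx, values))
    print(f"INT{bits}: max abs error = {errors[bits]:.2e}")
```

The INT8 error is on the order of the signal's least significant few parts per thousand, which is exactly the kind of loss that matters in measurement-grade imaging or radar, and shrinks by roughly 256x at INT16.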

3. Market Size & Entry Points

Ethos U85 Target Segments (High-Volume ARM Licensing):
  • Smartphones (tens of millions annually)
  • Smart cameras & home devices
  • Embedded industrial AI
  • Low-power automotive systems
Akida 3 Entry Strategy:
  • Replacement: For U85 use cases that require more precision or event-based efficiency.
  • Complementary Co-Processor: Add alongside U85 in ARM SoCs for heterogeneous workloads (U85 for bulk CNN, Akida for adaptive SNN & high precision).
  • Standalone in Niche Markets: Where U85 simply can’t go — e.g., SAR imaging, advanced radar, in-body medical devices.

4. Strategic Implications for BrainChip

  • Leverage ARM’s ecosystem: If marketed as ARM-friendly IP, Akida 3 could be pitched to the same silicon partners already licensing U85.
  • Precision-based differentiation: BrainChip can claim “U85 compatibility plus high-precision reach.”
  • Defense & medical edge: Especially valuable for contracts where accuracy matters more than raw throughput.
 
  • Like
  • Love
  • Thinking
Reactions: 10 users

7für7

Top 20
So far, everything I read is “could… would… should…” Plenty of subjunctives for my taste. What we need is the implementation: “we did… we have… it is… we are.” And I firmly believe it’s already in the works.
 
  • Like
Reactions: 1 user

Diogenese

Top 20
Thanks @Diogenese.

I've just run your post past my buddy ChatGPT and here's the response I received. …
I was gunna say that!
 
  • Haha
  • Like
  • Love
Reactions: 9 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I was gunna say that!

If Akida 3 tape-out is in Q1 2026 and the FPGA is available in Q2 2026, then you'd have to think the most likely licence commitment window for Arm would be somewhere between Q2 2026 and Q4 2026.

If Arm is really interested, they'll probably negotiate NDAs, architectural access, and preliminary terms before tape-out so they can get priority access to the FPGA in Q2 2026, which would enable them to lock in Akida 3 (into the Ethos successor roadmap, for example) without delaying their own silicon schedules, IMO.
 
  • Fire
  • Like
  • Thinking
Reactions: 4 users

Diogenese

Top 20
Hi Bravo,

The ARM U85 uses MACs: …

I don't see any advantage in running Akida 3 with Ethos U85.

It is open to anyone to license the ARM Cortex M85 CPU IP and combine it with Akida 2 or 3 IP in an SoC. Both Akidae include the TENNs model and would have significant power and latency advantages over Ethos U85.
 
  • Like
  • Love
  • Wow
Reactions: 3 users

Diogenese

Top 20
If Akida 3 tape-out is in Q1 2026 and the FPGA is available in Q2 2026, then you'd have to think the most likely licence commitment window for Arm would be somewhere between Q2 2026 and Q4 2026. …
If BRN had the resources of Nvidia/ARM/Intel, an Akida 3 SoC would be on the market now.

As I said, it could, for instance, be combined with the ARM Cortex M85 (the Ethos U85 processor), or any other compatible processor.

I guess there will be some internal resistance within ARM; we need to get Mr Son's ear.

PS: Assuming ARM do go ahead with their chip-making plans.
 
Last edited:
  • Like
  • Haha
Reactions: 4 users

7für7

Top 20
Imagine this: The BrainChip sales team walks into Apple’s or Mercedes’ HQ — and all other potential customers as well — without an appointment, silently installs an Akida module into their devices or cars, leans back casually, smiles at the camera, winks … and then pulls out a pack of Mentos:

“Akida – the fresh kick for your AI.”

We need go-getters like back in the day — and customers who respond with an approving smile, because they’re real foxes.


98F84DAC-9601-42E1-8A81-A29BBEF8713B.png
 
  • Haha
Reactions: 1 user
Top Bottom