Pom down under
Top 20
Brings back memories, as I used to love going to my Nan’s on a Saturday and watching the old movies, but I’m guessing you were a teenager when this came out.
This looks like a great hire, with a lot of industry-relevant experience.
Yep
James Shields - BrainChip | LinkedIn
• A versatile and skilled technical sales professional with leadership qualities and… · Experience: BrainChip · Education: University of California, Los Angeles · Location: Chicago · 500+ connections on LinkedIn. (www.linkedin.com)
Got the experience, but someone having 3 jobs in 3 years is a concern, so let’s hope this position is only temporary.
While both Sales Managers and Business Development Managers aim to drive revenue, they focus on different aspects of the sales process. Sales Managers are primarily concerned with managing and motivating a sales team to achieve short-term sales targets. Business Development Managers focus on long-term strategic growth, identifying new markets, and building partnerships.
Let’s hope he has a few easy wins in developing partnerships with those already interested in our product before identifying how big the rocket, I mean the potential market, is.
One would have to think BRN is involved here in some way.
Arm is adding a new division: "Arm Neural Technology".
Arm Neural Technology Delivers Smarter, Sharper, More Efficient Mobile Graphics for Developers
Explore Arm Neural Technology, the first to integrate neural accelerators into GPUs, enhancing mobile graphics with AI. (newsroom.arm.com)
"Arm neural technology is an industry first, adding dedicated neural accelerators to Arm GPUs, bringing PC-quality, AI powered graphics to mobile for the first time – and laying the foundation for future on-device AI innovation"
No mention of Akida but right place, right time
Happy as Larry
One would have to think BRN is involved here in some way.
Go brainchip
| Component | Description | Relation to BrainChip |
|---|---|---|
| Arm Neural Technology | Dedicated neural GPUs for AI graphics | Unrelated to neuromorphic; not BrainChip |
| BrainChip’s Akida | Neuromorphic spiking AI for low-power edge | Compatible with Arm ecosystem but distinct |
Must be a mistake as per normal, as they have a price-sensitive on the wheel.
SAY CHEEEEEEEEEEEESE
View attachment 89593
AI hallucination.
This relationship with ARM at some point must grow now that Gen 2 and TENNs are available, surely.
Not related to BrainChip according to ChatGPT, unfortunately.
ChatGPT 5:
The "Arm Neural Technology" mentioned in the article refers to specialized neural accelerators integrated into Arm’s future GPU architectures, aimed at delivering real-time AI-powered graphics enhancements—like Neural Super Sampling (NSS)—on mobile devices. It is not related to BrainChip’s neuromorphic technology.
What Arm’s Neural Technology Is
- Arm is introducing dedicated neural acceleration hardware within next-generation GPUs, enabling advanced graphics features such as AI-driven upscaling, frame generation, and denoising.
- An early application is Neural Super Sampling (NSS), which can upscale images—e.g., from 540p to 1080p—in just ~4ms per frame while enhancing performance, battery life, or visual quality.
- This is essentially a GPU-based AI enhancement pipeline, akin to NVIDIA’s Tensor Cores, optimized for graphics—not a neuromorphic/spiking AI architecture.
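As a quick sanity check on those NSS figures, here is a back-of-envelope calculation. It assumes "540p" means 960x540 and "1080p" means 1920x1080, which the article does not spell out:

```python
# Rough throughput implied by the quoted NSS numbers (resolution
# assumptions are mine, not from the Arm announcement).
in_px = 960 * 540        # assumed 540p input frame
out_px = 1920 * 1080     # assumed 1080p output frame
frame_ms = 4.0           # ~4 ms per frame, as quoted

scale = out_px / in_px                    # pixel-count upscaling factor
px_per_sec = out_px / (frame_ms / 1e3)    # output pixels produced per second

print(f"upscale factor: {scale:.0f}x")                    # 4x the pixels
print(f"output throughput: {px_per_sec / 1e6:.1f} Mpix/s")
```

So under these assumptions the accelerator is producing roughly half a billion upscaled pixels per second, which gives a feel for why a dedicated neural block in the GPU is needed.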
How It Differs from BrainChip’s Neuromorphic Approach
- Arm’s neural accelerators are classic GPUs with AI blocks, designed for frame-based deep learning tasks, tailored toward visual and graphics workloads.
- BrainChip’s Akida is a neuromorphic processor, built on spiking neural network principles, which excels at event-driven, low-latency, low-power inference—especially suited for edge use cases like sensory data processing, not graphics.
- While BrainChip is a partner in Arm’s ecosystem—meaning Akida can co-exist with Arm CPUs and NPUs—Arm's new GPU neural tech and BrainChip’s SNN IP are complementary, not the same.
Summary
| Component | Description | Relation to BrainChip |
|---|---|---|
| Arm Neural Technology | Dedicated neural GPUs for AI graphics | Unrelated to neuromorphic; not BrainChip |
| BrainChip’s Akida | Neuromorphic spiking AI for low-power edge | Compatible with Arm ecosystem but distinct |
If anything, Arm’s new offering and BrainChip’s neuromorphic IP represent different layers of edge AI evolution—graphics-centric in one case, brain-inspired general intelligence in the other.
This relationship with ARM at some point must grow now that Gen 2 and TENNs are available, surely.
What’s it going to take?
Hi Bravo,
The ARM U85 uses MACs:
https://www.bing.com/images/search?view=detailV2&ccid=SKxR9R6A&id=FDE75A522354D643A048FDBF7BD709D6B16C62E3&thid=OIP.SKxR9R6AvaWns2DkvNwBzQHaFe&mediaurl=https://lh7-us.googleusercontent.com/docsz/AD_4nXdCJtxPcYCTi6bGU47CnzGMdXJ4kW5j5u1EkQcbitxexcMcuzHV3dYpoAoeBa4ITge7ZLR5CMVZV3Po3TZIG-e1Tnp_GIEbjzMjfcOoyz1lp01fxeqKqgomdCzj_PdIARM4JdK80VX56Ea9PNOYaYtdAFM?key=25AlXtfNVs_jGBLuOXoleg&exph=671&expw=907&q=arm+ethos+u85+block+diagram&form=IRPRST&ck=AEB858B96E0340034576C1DFDA68FBAC&selectedindex=1&itb=0&ajaxhist=0&ajaxserp=0&vt=0&sim=11
View attachment 89599
Arm® Ethos™-U85 NPU Technical Overview
View attachment 89600
The weight and fast weight channels transfer compressed weights from external memory to the weight decoder. The DMA controller uses a read buffer to hide bus latency from the weight decoder and to enable the DMA to handle data arriving out of order. The traversal unit triggers these channels for blocks that require the transfer of weights.
The weight stream must be quantized to eight bits or less by an offline tool. When passed through the offline compiler, weights are compressed losslessly and reordered into an NPU-specific weight stream. This process is effective if the quantizer uses fewer than eight bits, or if it uses clustering and pruning techniques. The quantizer can also employ all three methods. Using lossless compression on high-sparsity weights containing greater than 75% zeros can lead to compression below 3 bits per weight in the final weight stream.
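Those compression figures line up with a simple back-of-envelope model: store a 1-bit zero/non-zero mask per weight, plus 8 bits for each non-zero value. This is my own sketch, not Arm's actual compression scheme, but it shows why more than 75% sparsity lands below 3 bits per weight:

```python
# Toy storage model (my sketch, not Arm's scheme): 1 mask bit per weight
# plus `value_bits` for each non-zero weight.
def bits_per_weight(sparsity: float, value_bits: int = 8) -> float:
    """Average storage cost per weight for a mask + packed-values scheme."""
    return 1.0 + (1.0 - sparsity) * value_bits

for s in (0.50, 0.75, 0.90):
    print(f"sparsity {s:.0%}: {bits_per_weight(s):.2f} bits/weight")
```

At exactly 75% zeros this model gives 3.00 bits/weight, and anything sparser drops below that, which matches the quoted claim.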
Given Akida 3's INT16/FP32 capabilities, it will be usable in more high-precision applications than the Ethos-U85.
| Feature / Use Case | Comments |
|---|---|
| INT8 inference for vision AI | Both handle low-bit CNN/Transformer workloads efficiently. |
| Edge AI in ARM-based SoCs | Akida is already ARM-compatible and could replace U85 in designs. |
| Consumer IoT & smart home devices | E.g., cameras, voice assistants, home hubs — same target market. |
| Robotics & drones | Both can handle perception & navigation; Akida offers ultra-low power SNN modes. |
| Automotive driver monitoring | Akida could match U85 for INT8 workloads, but also add mixed precision. |
| AR/VR lightweight inference | Both could run vision-based gesture tracking or object recognition. |
| Feature / Use Case | U85 Limitations | Akida 3 Advantage |
|---|---|---|
| High precision (INT16 / FP32) | Only ≤INT8 | Akida 3 can run models that require higher precision, e.g., medical imaging, radar processing, industrial measurement. |
| Mixed-mode processing | Primarily CNNs / Transformers | Akida 3 can combine SNN + ANN + mixed precision in one device. |
| Event-based data handling | Not designed for spikes | Akida 3 natively supports event-driven SNN processing, reducing power & latency. |
| On-device learning / adaptation | Limited to retraining off-device | Akida 3 supports incremental learning on-chip — key for adaptive edge AI. |
| Sparse computing efficiency | Relies on compression & pruning | Akida 3 exploits sparsity at the architectural level without preprocessing overhead. |
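To see why the ≤INT8 ceiling in the first row matters, here is a toy symmetric-quantization sketch (my own illustration, not either vendor's actual quantization scheme) comparing 8-bit and 16-bit rounding error on a single value:

```python
# Symmetric per-tensor quantization: map [-max_abs, max_abs] onto signed
# integers, then dequantize, and measure the round-trip error.
def quantize_roundtrip(x: float, bits: int, max_abs: float = 1.0) -> float:
    levels = 2 ** (bits - 1) - 1          # 127 for int8, 32767 for int16
    q = round(x / max_abs * levels)       # nearest representable level
    return q * max_abs / levels           # dequantized value

x = 0.123456
err8 = abs(quantize_roundtrip(x, 8) - x)
err16 = abs(quantize_roundtrip(x, 16) - x)
print(f"int8 error:  {err8:.6f}")
print(f"int16 error: {err16:.6f}")
```

The 16-bit step size is about 256x finer, so for workloads like medical imaging or radar where that residual error accumulates, an 8-bit-only NPU forces a real accuracy trade-off.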
I was gunna say that!
Thanks @Diogenese.
I've just run your post past my buddy ChatGPT and here's the response I received.
ChatGPT 5:
1. Comparing Ethos U85 and Akida 3
Ethos U85
- Limited to INT8 or lower precision.
- Designed primarily for CNNs (convolutional neural networks) and some newer architectures like Transformers, but still tuned for image, vision, and low-to-mid precision inference.
- Weight compression, clustering, and pruning help with efficiency, but it’s still fundamentally in the “traditional NPU” space.
- Very well suited for consumer devices, IoT, and embedded AI where tasks are quantized to 8-bit or less.
Akida 3
- Expected to support INT16 and FP32 in addition to low-bit SNN modes.
- This opens the door to high-precision workloads — e.g., certain medical imaging, radar/SAR processing, industrial inspection, scientific data analytics — where you can’t afford the precision loss of 8-bit quantization.
- Still ultra-low power thanks to neuromorphic event-based processing.
- Could run mixed workloads (spiking and non-spiking) at the sensor edge.
2. Implications if Akida 3 Exceeds Ethos U85 in Precision
If Akida 3 can cover all the U85’s use cases plus high-precision domains that the U85 can’t touch, then:
- BrainChip could potentially “superset” the market — competing in all U85 application spaces and in several new verticals where U85 simply isn’t viable.
- It would also position BrainChip as a partner rather than just a competitor — because ARM licensees could still integrate Akida alongside Cortex CPUs and Ethos NPUs for mixed workloads.
3. Size of the Ethos U85 Market
ARM NPUs are increasingly going into:
- Smartphones & tablets
- Consumer IoT devices
- Smart cameras / vision sensors
- Automotive driver monitoring & ADAS
- Robotics & drones
- AR/VR devices
Given ARM’s licensing footprint, the U85 market could be tens to hundreds of millions of units annually once adoption ramps.
If Akida 3 is drop-in compatible with ARM system designs (which it already tends to be), BrainChip could target this same customer base — either as:
- A replacement in cases where high precision or neuromorphic advantages are needed
- A complementary co-processor for event-based or ultra-low power workloads.
4. Ideal Application Areas Where Akida 3 Could Beat U85
- Medical imaging devices needing edge processing without quantization artifacts.
- SAR and radar systems for defense or environmental monitoring.
- Industrial inspection systems where measurement tolerances are tight.
- Scientific instrumentation in field research.
- High-end automotive perception stacks needing both low latency and precision.
- Secure edge devices that must process locally without cloud and without sacrificing accuracy.
If you want, I can map Ethos U85’s public design targets against Akida 3’s potential capabilities so you can see exactly where the overlap and expansion opportunities are. That would make the competitive positioning very clear.
You said:
Yes please.
ChatGPT said:
Here’s a structured comparison of ARM Ethos-U85 vs. BrainChip Akida 3 with overlap and expansion mapped clearly.
1. Overlap — Where Akida 3 Could Compete Directly with U85
| Feature / Use Case | Comments |
|---|---|
| INT8 inference for vision AI | Both handle low-bit CNN/Transformer workloads efficiently. |
| Edge AI in ARM-based SoCs | Akida is already ARM-compatible and could replace U85 in designs. |
| Consumer IoT & smart home devices | E.g., cameras, voice assistants, home hubs — same target market. |
| Robotics & drones | Both can handle perception & navigation; Akida offers ultra-low power SNN modes. |
| Automotive driver monitoring | Akida could match U85 for INT8 workloads, but also add mixed precision. |
| AR/VR lightweight inference | Both could run vision-based gesture tracking or object recognition. |
2. Expansion — Where Akida 3 Could Offer More Than U85
| Feature / Use Case | U85 Limitations | Akida 3 Advantage |
|---|---|---|
| High precision (INT16 / FP32) | Only ≤INT8 | Akida 3 can run models that require higher precision, e.g., medical imaging, radar processing, industrial measurement. |
| Mixed-mode processing | Primarily CNNs / Transformers | Akida 3 can combine SNN + ANN + mixed precision in one device. |
| Event-based data handling | Not designed for spikes | Akida 3 natively supports event-driven SNN processing, reducing power & latency. |
| On-device learning / adaptation | Limited to retraining off-device | Akida 3 supports incremental learning on-chip — key for adaptive edge AI. |
| Sparse computing efficiency | Relies on compression & pruning | Akida 3 exploits sparsity at the architectural level without preprocessing overhead. |
3. Market Size & Entry Points
Ethos U85 Target Segments (High-Volume ARM Licensing):
- Smartphones (tens of millions annually)
- Smart cameras & home devices
- Embedded industrial AI
- Low-power automotive systems
Akida 3 Entry Strategy:
- Replacement: For U85 use cases that require more precision or event-based efficiency.
- Complementary Co-Processor: Add alongside U85 in ARM SoCs for heterogeneous workloads (U85 for bulk CNN, Akida for adaptive SNN & high precision).
- Standalone in Niche Markets: Where U85 simply can’t go — e.g., SAR imaging, advanced radar, in-body medical devices.
4. Strategic Implications for BrainChip
- Leverage ARM’s ecosystem: If marketed as ARM-friendly IP, Akida 3 could be pitched to the same silicon partners already licensing U85.
- Precision-based differentiation: BrainChip can claim “U85 compatibility plus high-precision reach.”
- Defense & medical edge: Especially valuable for contracts where accuracy matters more than raw throughput.