Hi Bravo,
Not related to BrainChip according to ChatGPT, unfortunately.
ChatGPT 5:
The "Arm Neural Technology" mentioned in the article refers to specialized neural accelerators integrated into Arm’s future GPU architectures, aimed at delivering real-time AI-powered graphics enhancements—like Neural Super Sampling (NSS)—on mobile devices. It is not related to BrainChip’s neuromorphic technology.
What Arm’s Neural Technology Is
- Arm is introducing dedicated neural acceleration hardware within next-generation GPUs, enabling advanced graphics features such as AI-driven upscaling, frame generation, and denoising.
- An early application is Neural Super Sampling (NSS), which can upscale images (e.g., from 540p to 1080p) in just ~4 ms per frame while enhancing performance, battery life, or visual quality; a quick back-of-the-envelope check on those numbers follows this list.
- This is essentially a GPU-based AI enhancement pipeline, akin to NVIDIA’s Tensor Cores, optimized for graphics—not a neuromorphic/spiking AI architecture.
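To put the quoted NSS figures in perspective, here is a rough calculation. The resolutions and the ~4 ms cost come from the article; the 60 fps frame budget is my own assumption for illustration:

```python
# Back-of-the-envelope numbers for the NSS claim above (540p -> 1080p in ~4 ms).
# The 60 fps frame budget is an assumption for illustration, not from the article.

src_w, src_h = 960, 540      # 540p render resolution
dst_w, dst_h = 1920, 1080    # 1080p output resolution

pixel_ratio = (dst_w * dst_h) / (src_w * src_h)
print(f"Pixel count increase: {pixel_ratio:.1f}x")            # 4.0x more pixels

nss_cost_ms = 4.0             # quoted per-frame upscaling cost
frame_budget_ms = 1000 / 60   # ~16.7 ms per frame at 60 fps
print(f"NSS share of a 60 fps frame budget: {nss_cost_ms / frame_budget_ms:.0%}")
```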
How It Differs from BrainChip’s Neuromorphic Approach
- Arm’s neural accelerators are classic GPUs with AI blocks, designed for frame-based deep learning tasks, tailored toward visual and graphics workloads.
- BrainChip’s Akida is a neuromorphic processor, built on spiking neural network principles, which excels at event-driven, low-latency, low-power inference, especially suited for edge use cases like sensory data processing, not graphics (the frame-based vs event-driven difference is sketched in concrete terms after this list).
- While BrainChip is a partner in Arm’s ecosystem—meaning Akida can co-exist with Arm CPUs and NPUs—Arm's new GPU neural tech and BrainChip’s SNN IP are complementary, not the same.
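For anyone who wants the contrast in concrete terms, here is a rough NumPy sketch (my own illustration, not Arm or BrainChip code): a frame-based dense layer performs a MAC for every weight on every frame, while an event-driven layer only touches the weight columns whose inputs actually spiked.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 1024, 256
weights = rng.standard_normal((n_out, n_in)).astype(np.float32)

# Frame-based: a dense matrix-vector product touches every weight, every frame.
frame = rng.standard_normal(n_in).astype(np.float32)
dense_out = weights @ frame
dense_macs = n_out * n_in

# Event-driven: only the inputs that fired an event (spike) this timestep are
# processed, so only those weight columns are accumulated (binary spikes here).
spike_idx = rng.choice(n_in, size=int(0.02 * n_in), replace=False)
event_out = weights[:, spike_idx].sum(axis=1)
event_ops = n_out * len(spike_idx)

print(f"dense MACs per frame: {dense_macs}")
print(f"event-driven accumulations this timestep: {event_ops}")
```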
Summary
| Component | Description | Relation to BrainChip |
| --- | --- | --- |
| Arm Neural Technology | Dedicated neural GPU hardware for AI graphics | Unrelated to neuromorphic; not BrainChip |
| BrainChip’s Akida | Neuromorphic spiking AI for low-power edge | Compatible with Arm ecosystem but distinct |
If anything, Arm’s new offering and BrainChip’s neuromorphic IP represent different layers of edge AI evolution—graphics-centric in one case, brain-inspired general intelligence in the other.
The Arm Ethos-U85 uses MACs:
https://www.bing.com/images/search?...dex=1&itb=0&ajaxhist=0&ajaxserp=0&vt=0&sim=11
Arm® Ethos™-U85 NPU Technical Overview
The weight and fast weight channels transfer compressed weights from external memory to the weight decoder. The DMA controller uses a read buffer to hide bus latency from the weight decoder and to enable the DMA to handle data arriving out of order. The traversal unit triggers these channels for blocks that require the transfer of weights.
The weight stream must be quantized to eight bits or less by an offline tool. When passed through the offline compiler, weights are compressed losslessly and reordered into an NPU-specific weight stream. This process is effective if the quantizer uses less than eight bits, or if it uses clustering and pruning techniques. The quantizer can also employ all three methods. Using lossless compression on high-sparsity weights containing greater than 75% zeros can lead to compression below 3 bits per weight in the final weight stream.
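To see why heavy pruning plus 8-bit quantization compresses so well, here is a rough sketch of my own (not from the Arm document). zlib is only a stand-in for the NPU-specific lossless coder in Arm's offline compiler (Vela, as I understand it); the real weight-stream format differs, so treat the printed bits-per-weight as illustrative only.

```python
import numpy as np
import zlib

rng = np.random.default_rng(0)
w = rng.standard_normal(100_000).astype(np.float32)

# Prune the smallest ~80% of weights to zero, then quantize symmetrically to int8.
threshold = np.quantile(np.abs(w), 0.80)
w[np.abs(w) < threshold] = 0.0
scale = np.abs(w).max() / 127
q = np.round(w / scale).astype(np.int8)

# zlib stands in for the NPU-specific lossless coder; the point is how much the
# high sparsity shrinks the stream, not the exact figure.
compressed = zlib.compress(q.tobytes(), 9)
bits_per_weight = 8 * len(compressed) / q.size
print(f"sparsity: {(q == 0).mean():.0%}, ~{bits_per_weight:.2f} bits/weight after compression")
```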
Given Akida 3's INT16/FP32 capabilities, it will be usable in more high-precision applications than the Ethos-U85, whose weights are limited to eight bits or less.
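As a rough illustration of that precision headroom (my own sketch, not vendor code), quantizing the same tensor at 8 and 16 bits shows the worst-case rounding error shrinking roughly in proportion to the wider integer range:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000).astype(np.float32)

def max_quant_error(values, n_bits):
    qmax = 2 ** (n_bits - 1) - 1                    # symmetric signed integer range
    scale = np.abs(values).max() / qmax
    q = np.round(values / scale)                    # quantize
    return float(np.abs(q * scale - values).max())  # worst-case rounding error

print(f"int8  max rounding error: {max_quant_error(x, 8):.6f}")
print(f"int16 max rounding error: {max_quant_error(x, 16):.6f}")
```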