Hi Guzzi,

Strange. I asked the same question:
It is unlikely that BrainChip's neuromorphic AI processors, such as the Akida Pico, will end up in mainstream smartphones as their primary AI processing unit, though they could be integrated for specific, highly power-constrained tasks. While BrainChip's technology is designed for power-constrained devices and the "extreme edge," which includes mobile phones and wearables, the current market for smartphone AI processing is dominated by more general-purpose AI accelerators integrated into powerful System-on-Chips (SoCs) [1] [2] [3].
According to www.iAsk.Ai - Ask AI:
BrainChip's Akida platform, including the recently launched AKD1500 and the miniaturized Akida Pico, is specifically engineered for ultra-low power, real-time AI processing at the edge, leveraging neuromorphic principles and event-based hardware acceleration [4] [5] [6]. This design minimizes compute and communication, making it highly efficient for applications where power consumption and thermal limits are critical, such as battery-powered wearables, smart sensors, and heat-constrained environments [5] [7]. The Akida Pico, for instance, consumes as little as 1 milliwatt of power, targeting devices with severe limitations on power and wireless communication capacities [7].
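To make the efficiency claim above concrete: event-based hardware only does work when an input actually changes, so a mostly silent sensor stream triggers very few multiply-accumulates. A minimal illustrative sketch (toy Python, not BrainChip code):

```python
# Illustrative sketch (not BrainChip code): event-based processing only
# performs work for nonzero "events", so sparse inputs cost far fewer
# multiply-accumulates (MACs) than a dense pass over the same layer.

def dense_mac(inputs, weights):
    """Conventional layer: every input/weight pair costs one MAC."""
    total, macs = 0.0, 0
    for x, w in zip(inputs, weights):
        total += x * w
        macs += 1
    return total, macs

def event_based_mac(inputs, weights):
    """Event-based layer: zero inputs generate no events and no work."""
    total, macs = 0.0, 0
    for x, w in zip(inputs, weights):
        if x != 0:  # only events/spikes trigger computation
            total += x * w
            macs += 1
    return total, macs

# A mostly silent sensor stream: 2 events out of 8 samples.
xs = [0, 0, 1, 0, 0, 0, 1, 0]
ws = [0.2, 0.4, 0.6, 0.8, 0.1, 0.3, 0.5, 0.7]

dense_out, dense_macs = dense_mac(xs, ws)
event_out, event_macs = event_based_mac(xs, ws)
assert dense_out == event_out       # same answer...
print(dense_macs, event_macs)       # ...for 2 MACs instead of 8
```

The same result is produced either way; the energy saving comes from skipping the work that dense hardware would spend on zeros.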
However, the "extreme edge" market, while encompassing mobile phones, also includes a broader range of smaller user devices and wearables [7]. While BrainChip's technology offers significant power savings for specific tasks like keyword detection and audio de-noising, potentially reducing power consumption to one-fifth of traditional models on conventional microprocessors [7], the overall AI capabilities required by modern smartphones are far more extensive. These include complex tasks like advanced image and video processing, natural language understanding, augmented reality, and sophisticated user interface interactions, which typically demand high computational throughput and flexibility that current neuromorphic chips may not fully provide as a primary processor [8] [9].
Intel's Mike Davies notes that while neuromorphic computing excels in processing signal streams in real-time with energy efficiency, there's a "limited amount of magic you can bring to a problem" at the very tiny neural network level [7]. This suggests that while BrainChip's chips are ideal for specific, power-sensitive AI functions, they might not replace the more powerful and versatile AI accelerators found in flagship smartphone SoCs that handle a wider array of demanding AI workloads [8] [9]. Therefore, while a BrainChip processor could potentially be integrated into a smartphone for specialized, ultra-low-power background tasks, it is unlikely to serve as the main AI engine for the device's broad AI functionalities [7] [10].
World's Most Authoritative Sources
[1] Neuromorphic Computing. IEEE Spectrum
[2] BrainChip Unveils Breakthrough AKD1500 Edge AI Co-Processor at Embedded World North America. BrainChip Investor Relations
[3] Akida. Open Neuromorphic
[4] IP. BrainChip
[5] Product. BrainChip
[6] BrainChip Edge AI Chips. Quantum Zeitgeist
[7] Neuromorphic Computing. IEEE Spectrum
[8] Mobile AI Processors. Qualcomm
[9] Apple Neural Engine. Apple Developer
[10] Chips. BrainChip
There is a pervading misconception about Akida's capabilities.
This analysis misunderstands the capabilities of Akida, particularly Akida 2 and later.
For example, using Pico as evidence of Akida's limited processing capability betrays ignorance not only of Akida's capabilities, but also of TENNs. Pico is specifically designed for ultra-low-power applications. It is the minimal configuration of Akida, using a single NPU, whereas Akida can have hundreds of NPUs. And, as with other processors, many Akida chips can be ganged together for exceptionally heavy AI workloads.
Akida 1 is highly efficient at object classification and can be used as an AI accelerator/coprocessor. TENNs in Akida 2 add temporal definition, enabling analysis of time-dependent signals such as video and speech.
Among other things, Akida GenAI and Akida 3 will boost Akida's SLLM capabilities.
As to mobile phones, Qualcomm has a firm grip on that market, and they have their own in-house AI in Snapdragon.
https://www.qualcomm.com/news/relea...g-redefine-premium-performance-by-bringing-th
Qualcomm and Samsung Redefine Premium Performance by Bringing the Most Powerful Mobile Platform to the Galaxy S25 Series Globally
US2020073636A1 MULTIPLY-ACCUMULATE (MAC) OPERATIONS FOR CONVOLUTIONAL NEURAL NETWORKS
Priority: 20180831
[0032] The SOC 100 may also include additional processing blocks tailored to specific functions, such as a GPU 104, a DSP 106, a connectivity block 110, which may include fifth generation (5G) connectivity, fourth generation long term evolution (4G LTE) connectivity, Wi-Fi connectivity, USB connectivity, Bluetooth connectivity, and the like, and a multimedia processor 112 that may, for example, detect and recognize gestures. In one implementation, the NPU is implemented in the CPU, DSP, and/or GPU. The SOC 100 may also include a sensor processor 114, image signal processors (ISPs) 116, and/or navigation module 120, which may include a global positioning system.
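The multiply-accumulate operation named in that patent's title is the basic workload of a convolutional layer: each output value is an accumulation of input-times-weight products. A toy sketch of the MAC loop (illustrative values, not the patented implementation):

```python
# Minimal sketch of the multiply-accumulate (MAC) loop at the heart of a
# convolutional layer -- the operation the Qualcomm patent title refers to.

def conv1d_valid(signal, kernel):
    """1-D 'valid' convolution: each output element is a chain of MACs."""
    k = len(kernel)
    out = []
    for i in range(len(signal) - k + 1):
        acc = 0
        for j in range(k):
            acc += signal[i + j] * kernel[j]  # one MAC per step
        out.append(acc)
    return out

print(conv1d_valid([1, 2, 3, 4], [1, 0, -1]))  # -> [-2, -2]
```

Dedicated NPU hardware exists precisely because real networks run this inner loop billions of times per inference.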
Qualcomm has been doing a lot of work on analog:
US2024095492A1 MEMORY MANAGEMENT FOR MATHEMATICAL OPERATIONS IN COMPUTING SYSTEMS WITH HETEROGENEOUS MEMORY ARCHITECTURES
Priority: 20220901
[0019] To improve the performance of operations using machine learning models, various techniques may locate computation near or with memory (e.g., co-located with memory). For example, compute-in-memory techniques may allow for data to be stored in SRAM and for analog computation to be performed in memory using modified SRAM cells. In another example, function-in-memory or processing-in-memory techniques may locate digital computation capacity near memory devices (e.g., DRAM, SRAM, MRAM, etc.) in which the weight data and data to be processed are located. In each of these techniques, however, many data transfer operations may still need to be performed to move data into and out of memory for computation (e.g., when computation is co-located with some, but not all, memory in a computing system).
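The compute-in-memory idea in that excerpt can be sketched in a few lines: weights stay resident in the memory array, the input vector is driven onto shared input lines, and each column accumulates its dot product in place, so only inputs and results cross the memory boundary. A hedged toy model (not Qualcomm's design; the class name is my own):

```python
# Toy model of compute-in-memory (CIM): weights are stationary in a memory
# array and each column accumulates its own dot product with a shared input,
# standing in for the analog bitline accumulation described in the patent.

class CimArray:
    def __init__(self, columns):
        # Each column of cells holds one weight vector, resident in memory.
        self.columns = columns

    def mac(self, inputs):
        # All columns "compute" simultaneously on the shared input lines;
        # summing products per column models the accumulated bitline charge.
        return [sum(x * w for x, w in zip(inputs, col)) for col in self.columns]

array = CimArray([[1, 2], [3, -1]])
print(array.mac([2, 1]))  # -> [4, 5]
```

The point of the technique is that no weight ever moves: the costly data transfers the patent paragraph describes are limited to inputs in and results out.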
US2022414443A1 COMPUTE IN MEMORY-BASED MACHINE LEARNING ACCELERATOR ARCHITECTURE
Priority: 20210625
Abstract: techniques for processing machine learning model data with a machine learning task accelerator, including: configuring one or more signal processing units (SPUs) of the machine learning task accelerator to process a machine learning model; providing model input data to the one or more configured SPUs; processing the model input data with the machine learning model using the one or more configured SPUs; and receiving output data from the one or more configured SPUs.
Akida has the capability to handle all the functions listed in the first patent excerpt. I don't have comparative performance figures, but my money would be on Akida to outperform Snapdragon AI.