BRN Discussion Ongoing

I’d like to think so, but I doubt it. There’s no reason I can see that MegaChips would hide the fact they were using Akida in HaLow.
My thinking was that HaLow was hiding it, not MegaChips, yet that's more than likely not the case either.
 
Last edited:
Tick Tock Sean... End of 2025 is near. Nothing has been announced or happened, as per usual.
 
  • Like
Reactions: 2 users

JoMo68

Regular
Can we stop with the juvenile ‘tick tock’ comments? They’re getting very boring.
 
  • Like
  • Love
  • Haha
Reactions: 14 users
Happy Sunday to most.


The likelihood of BrainChip's Akida technology being integrated into mobile phones is high, with strong indications of imminent commercialization.

According to www.iAsk.Ai - Ask AI:

BrainChip's neuromorphic AI processors, particularly the Akida NSoC, are designed for ultra-low power consumption and on-device processing, making them ideal for resource-constrained edge devices like smartphones [1] [2] [3]. The company has explicitly stated its intention to expand into consumer electronics, including mobile and embedded devices.
 
  • Like
  • Fire
  • Love
Reactions: 6 users

Guzzi62

Regular
Happy Sunday to most.

The likelihood of BrainChip's Akida technology being integrated into mobile phones is high, with strong indications of imminent commercialization.
Strange.

I asked the same question:


It is unlikely that BrainChip's neuromorphic AI processors, such as the Akida Pico, will end up in mainstream smartphones as their primary AI processing unit, though they could be integrated for specific, highly power-constrained tasks. While BrainChip's technology is designed for power-constrained devices and the "extreme edge," which includes mobile phones and wearables, the current market for smartphone AI processing is dominated by more general-purpose AI accelerators integrated into powerful System-on-Chips (SoCs) [1] [2] [3].
According to www.iAsk.Ai - Ask AI:
BrainChip's Akida platform, including the recently launched AKD1500 and the miniaturized Akida Pico, is specifically engineered for ultra-low power, real-time AI processing at the edge, leveraging neuromorphic principles and event-based hardware acceleration [4] [5] [6]. This design minimizes compute and communication, making it highly efficient for applications where power consumption and thermal limits are critical, such as battery-powered wearables, smart sensors, and heat-constrained environments [5] [7]. The Akida Pico, for instance, consumes as little as 1 milliwatt of power, targeting devices with severe limitations on power and wireless communication capacities [7].
However, the "extreme edge" market, while encompassing mobile phones, also includes a broader range of smaller user devices and wearables [7]. While BrainChip's technology offers significant power savings for specific tasks like keyword detection and audio de-noising, potentially reducing power consumption to one-fifth of traditional models on conventional microprocessors [7], the overall AI capabilities required by modern smartphones are far more extensive. These include complex tasks like advanced image and video processing, natural language understanding, augmented reality, and sophisticated user interface interactions, which typically demand high computational throughput and flexibility that current neuromorphic chips may not fully provide as a primary processor [8] [9].
Intel's Mike Davies notes that while neuromorphic computing excels in processing signal streams in real-time with energy efficiency, there's a "limited amount of magic you can bring to a problem" at the very tiny neural network level [7]. This suggests that while BrainChip's chips are ideal for specific, power-sensitive AI functions, they might not replace the more powerful and versatile AI accelerators found in flagship smartphone SoCs that handle a wider array of demanding AI workloads [8] [9]. Therefore, while a BrainChip processor could potentially be integrated into a smartphone for specialized, ultra-low-power background tasks, it is unlikely to serve as the main AI engine for the device's broad AI functionalities [7] [10].

World's Most Authoritative Sources

Neuromorphic Computing. IEEE Spectrum
BrainChip Unveils Breakthrough AKD1500 Edge AI Co-Processor at Embedded World North America. BrainChip Investor Relations
Akida. Open Neuromorphic
IP. BrainChip
Product. BrainChip
BrainChip Edge AI Chips. Quantum Zeitgeist
Neuromorphic Computing. IEEE Spectrum
Mobile AI Processors. Qualcomm
Apple Neural Engine. Apple Developer
Chips. BrainChip
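The power figures quoted above are easy to sanity-check with quick arithmetic. A minimal sketch, assuming an illustrative 15 Wh phone battery and taking the 1 mW figure at face value, with 5 mW as a hypothetical conventional baseline implied by the "one-fifth" claim:

```python
# Back-of-envelope battery impact of an always-on audio task.
# All numbers are illustrative assumptions, not measured figures:
# a 15 Wh phone battery, 1 mW for the neuromorphic implementation,
# and 5 mW for a conventional microprocessor (the "one-fifth" ratio).
BATTERY_WH = 15.0

def days_of_battery(power_mw: float, battery_wh: float = BATTERY_WH) -> float:
    """Days the battery would last if it powered only this one task."""
    watts = power_mw / 1000.0
    return battery_wh / watts / 24.0

print(f"neuromorphic (1 mW): {days_of_battery(1.0):.0f} days")
print(f"conventional (5 mW): {days_of_battery(5.0):.0f} days")
```

The point is not the absolute numbers but the ratio: at milliwatt scale an always-on task is a negligible slice of the battery budget either way, which is why such chips target background tasks rather than a phone's main AI workload.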
 
  • Like
Reactions: 1 users

Guzzi62

Regular
Strange.

I asked the same question:


It is unlikely that BrainChip's neuromorphic AI processors, such as the Akida Pico, will end up in mainstream smartphones as their primary AI processing unit, though they could be integrated for specific, highly power-constrained tasks.
I asked my question negatively in order to get a negative response; you can't really trust them that much.

Popular LLM chatbots are often tuned to be agreeable and positive toward user queries, and are often misused to confirm existing beliefs.
 
  • Like
Reactions: 3 users
Well, there you have it: speak negatively and you will receive negativity; that's just human nature. Gets you nowhere with that kind of attitude.
 

Diogenese

Top 20
Strange.

I asked the same question:

It is unlikely that BrainChip's neuromorphic AI processors, such as the Akida Pico, will end up in mainstream smartphones as their primary AI processing unit, though they could be integrated for specific, highly power-constrained tasks.
Hi Guzzi,

There is a pervading misconception about Akida's capabilities.

This analysis misunderstands the capabilities of Akida, particularly Akida 2 and later.

For example, using Pico as evidence of Akida's limited processing capability betrays ignorance not only of Akida's capabilities but also of TENNs. Pico is specifically designed for ultra-low-power applications: it is the minimal configuration of Akida and uses a single NPU, whereas Akida can have hundreds of NPUs. Then, as with other processors, many Akidas can be ganged together for exceptionally heavy AI workloads.
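To make the single-NPU vs. many-NPU point concrete, here is a toy scaling model. Every number in it is a placeholder assumption (per-NPU throughput normalized to 1, 90% of the ideal speedup retained per doubling); BrainChip publishes no such figures here, so treat it as shape, not data:

```python
import math

# Toy model: throughput relative to a single-NPU (Pico-class) configuration.
# Scaling is assumed sub-linear: each doubling of NPU count retains
# `efficiency` of the ideal 2x speedup. Both the normalization and the
# 0.9 efficiency are hypothetical placeholders, not BrainChip figures.
def relative_throughput(npus: int, efficiency: float = 0.9) -> float:
    doublings = math.log2(npus)
    return npus * (efficiency ** doublings)

for n in (1, 8, 128):
    print(f"{n:>3} NPUs -> {relative_throughput(n):.1f}x a single NPU")
```

Even with pessimistic scaling losses, the gap between a one-NPU configuration and a hundreds-of-NPUs configuration is well over an order of magnitude, which is why judging the whole platform by Pico is misleading.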

Akida 1 is highly efficient for object classification and can be used as an AI accelerator/coprocessor. TENNs in Akida 2 add temporal definition, enabling analysis of time-dependent signals such as video and speech.

Among other things, Akida GenAI and Akida 3 will boost Akida's SLLM capabilities.

As to mobile phones, Qualcomm has a firm grip on that market, and they have their own in-house AI in Snapdragon.

https://www.qualcomm.com/news/relea...g-redefine-premium-performance-by-bringing-th

Qualcomm and Samsung Redefine Premium Performance by Bringing the Most Powerful Mobile Platform to the Galaxy S25 Series Globally


US2020073636A1 MULTIPLY-ACCUMULATE (MAC) OPERATIONS FOR CONVOLUTIONAL NEURAL NETWORKS

Priority: 20180831



[0032] The SOC 100 may also include additional processing blocks tailored to specific functions, such as a GPU 104 , a DSP 106 , a connectivity block 110 , which may include fifth generation (5G) connectivity, fourth generation long term evolution (4G LTE) connectivity, Wi-Fi connectivity, USB connectivity, Bluetooth connectivity, and the like, and a multimedia processor 112 that may, for example, detect and recognize gestures. In one implementation, the NPU is implemented in the CPU, DSP, and/or GPU. The SOC 100 may also include a sensor processor 114 , image signal processors (ISPs) 116 , and/or navigation module 120 , which may include a global positioning system.


Qualcomm has been doing a lot of work on analog:

US2024095492A1 MEMORY MANAGEMENT FOR MATHEMATICAL OPERATIONS IN COMPUTING SYSTEMS WITH HETEROGENEOUS MEMORY ARCHITECTURES 20220901

[0019] To improve the performance of operations using machine learning models, various techniques may locate computation near or with memory (e.g., co-located with memory). For example, compute-in-memory techniques may allow for data to be stored in SRAM and for analog computation to be performed in memory using modified SRAM cells. In another example, function-in-memory or processing-in-memory techniques may locate digital computation capacity near memory devices (e.g., DRAM, SRAM, MRAM, etc.) in which the weight data and data to be processed are located. In each of these techniques, however, many data transfer operations may still need to be performed to move data into and out of memory for computation (e.g., when computation is co-located with some, but not all, memory in a computing system).
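The paragraph above is really about energy, not compute: moving an operand out of DRAM costs orders of magnitude more than the arithmetic itself. A rough illustration using widely cited ~45 nm per-operation energy estimates (order-of-magnitude only; exact values vary by process and design):

```python
# Rough per-operation energy (picojoules) at ~45 nm; order-of-magnitude
# estimates widely cited in the computer-architecture literature.
# Treat them as illustrative, not exact.
ENERGY_PJ = {
    "32-bit float multiply": 3.7,
    "32-bit SRAM read (small array)": 5.0,
    "32-bit DRAM read": 640.0,
}

mul = ENERGY_PJ["32-bit float multiply"]
for op, pj in ENERGY_PJ.items():
    print(f"{op}: {pj:7.1f} pJ  (~{pj / mul:.0f}x a multiply)")
# A single 32-bit DRAM fetch costs well over 100x the multiply it feeds,
# which is the motivation for the compute-in-memory and
# processing-in-memory techniques described in the patent above.
```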

US2022414443A1 COMPUTE IN MEMORY-BASED MACHINE LEARNING ACCELERATOR ARCHITECTURE 20210625

techniques for processing machine learning model data with a machine learning task accelerator, including: configuring one or more signal processing units (SPUs) of the machine learning task accelerator to process a machine learning model; providing model input data to the one or more configured SPUs; processing the model input data with the machine learning model using the one or more configured SPUs; and receiving output data from the one or more configured SPUs

Akida has the capabilities to handle all the features listed in the first patent abstract. I don't have comparative performance figures, but my money would be on Akida to outperform Snapdragon AI.
 
Last edited:
  • Like
  • Fire
Reactions: 5 users
Can we stop with the juvenile ‘tick tock’ comments? They’re getting very boring.
Can BRN stop promising shit that never comes true? They are becoming very boring... Broken promise after broken promise: watch the financials, unprecedented interest, a big announcement before the end of the year.

And yet here we are...
 
Hi Guzzi,

There is a pervading misconception about Akida's capabilities.

I can imagine it will be hard for BrainChip to take business from Qualcomm, given Qualcomm's existing partnerships within the phone industry and the bargaining power they have to keep their seat. I think BrainChip will be at the table and in phones at some point, but maybe not at Snapdragon's party; it may be a new company outside of their stranglehold 🤔.
 
  • Like
Reactions: 1 users

Guzzi62

Regular
I can imagine it will be hard for BrainChip to take business from Qualcomm, given Qualcomm's existing partnerships within the phone industry and the bargaining power they have to keep their seat. I think BrainChip will be at the table and in phones at some point, but maybe not at Snapdragon's party; it may be a new company outside of their stranglehold 🤔.
Google AI:

A typical smartphone contains many individual chips, often a dozen or more, working together, dominated by the main System-on-Chip (SoC), which integrates the CPU, GPU, and other cores, plus separate chips for memory, power management, wireless (5G/Wi-Fi/BT), cameras, sensors, and display drivers. While the SoC is the powerhouse, numerous specialized integrated circuits (ICs) handle different functions, making for a complex system built from multiple silicon dies.

Think of it like this:
Instead of one giant chip, you have one very complex main chip (the SoC) and several smaller, dedicated chips, all connected on the main board, each performing a crucial task for the overall smartphone experience.

My own non-AI take, LOL: based on that, it's certainly not impossible that AKD can/will be used in smartphones one day, but as mentioned, getting a foot through the door is the hardest part.
 
  • Like
Reactions: 1 users

Fiendish

Regular
I can imagine it will be hard for BrainChip to take business from Qualcomm, given Qualcomm's existing partnerships within the phone industry and the bargaining power they have to keep their seat. I think BrainChip will be at the table and in phones at some point, but maybe not at Snapdragon's party; it may be a new company outside of their stranglehold 🤔.
While no direct partnership has been announced as of December 2025, indirect enablers like Qualcomm's acquisition of Edge Impulse (which officially supports Akida model training and deployment) boost feasibility. Samsung uses both its own Exynos and Qualcomm Snapdragon chips, making Akida integration viable via tools like the MetaTF framework.
 
  • Like
Reactions: 1 users
Top Bottom