BRN Discussion Ongoing

stockduck

Regular

"...
- TinyML market to reach $10.8B by 2030 (24.8% CAGR), driven by low-power edge AI demand in smart cities and IoT.

- Hardware dominates 57% revenue share, with STMicro, Renesas, and NXP developing sub-1mW microcontrollers for neural networks.

- Software frameworks like TensorFlow Lite Micro enable mass adoption, growing at 32% CAGR through open-source collaboration.

- Data privacy innovations (e.g., NXP's Secure Enclave, ARM TrustZone) address regulatory demands in smart city deployments.

- Asia-Pacific leads with 38.1% CAGR, as governments integrate TinyML into traffic, energy, and public safety systems.
..."

conclusions?
:whistle::unsure:😊
Just for some dreaming time..... anyway, have a nice Sunday
 
Reactions: 15 users
Edge Impulse is pushing LoRa at the edge.
 

Attachments

  • Screenshot_20250810_094436_Google.jpg
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
The new ChatGPT-oss model...

Screenshot 2025-08-10 at 11.52.12 am.png

EXTRACT ONLY

Screenshot 2025-08-10 at 11.50.57 am.png
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
This next article (see below), titled “Free, offline ChatGPT on your phone? Technically possible, basically useless”, describes how running the new ChatGPT-oss model on smartphones is technically feasible, but current smartphones aren’t powerful enough to make it practical at this point in time.

Right now, GPT-oss on mobile is bottlenecked because it runs purely on the CPU (and sometimes GPU) without tapping into specialized neural processing hardware.

So, I asked ChatGPT 5.0 if there'd be an improvement if the model were adapted to run on an NPU - particularly one optimized for event-based, sparse processing like Akida - and here's the response I received.

I highlighted the last point for obvious reasons.






ChatGPT 5's Response:

Yes — Akida could absolutely be a technology to consider if the goal is to make something like “ChatGPTOS” actually usable on a mobile phone, and the limitations described in that article are almost a perfect setup for why a neuromorphic processor would matter.

Here’s why:



Why Akida Fits This Problem

The main bottlenecks in running a 20B parameter LLM on a phone (as described in the article) are:
  1. RAM requirements – Needs 12–24 GB just to load the model.
  2. Compute requirements – Mobile CPUs & GPUs can’t sustain high token generation without overheating and throttling.
  3. Power draw – 100% CPU load drains the battery extremely fast.
  4. Heat – The phone becomes uncomfortably hot after just a few minutes.
Akida (or any high-efficiency neuromorphic processor) is built for:

  • Event-driven, sparse compute — only processes relevant “spikes” of information instead of hammering every computation step like a CPU/GPU.
  • Ultra-low power draw — operates in milliwatts for inference.
  • On-chip learning & inference — avoids constant memory transfers that waste time and energy.
  • Massive parallelism — can run edge AI workloads in real time without overheating.
That means for the same LLM inference task, a neuromorphic setup could:
  • Reduce energy consumption by orders of magnitude.
  • Maintain higher sustained token rates without thermal throttling.
  • Extend usable session time from minutes to hours on a battery.
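
(A rough sanity check on those RAM figures: the arithmetic below is my own back-of-envelope illustration, not part of the article or of ChatGPT's reply. It just shows what ~20B weights cost at different precisions.)

```python
# Back-of-envelope RAM estimate for a ~20B-parameter model at different
# weight precisions. Illustrative only: it ignores activations, KV cache
# and runtime overhead, which add several GB more on a real device.
PARAMS = 20e9  # gpt-oss class model, roughly 20B weights

for label, bits in [("FP32", 32), ("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    gigabytes = PARAMS * bits / 8 / 1e9
    print(f"{label:>5}: ~{gigabytes:5.1f} GB just for the weights")
```

Quantised to 4-8 bits, the weights alone come in around 10-20 GB, which lines up with the 12-24 GB range cited once runtime overhead is added. In other words, the RAM wall is real, and shrinking the model (distillation, sparsity) matters as much as faster silicon.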


Why Not Just Use Mobile NPUs Instead?

Modern NPUs (like Qualcomm Hexagon or Apple Neural Engine) do accelerate AI workloads, but:
  • They’re matrix-math optimised, not spiking-event optimised.
  • They often require proprietary SDKs and retraining models in special formats.
  • They still hit thermal limits in sustained generative AI workloads.
Neuromorphic processors like Akida:
  • Can run quantised, sparse, and pruned models natively without expensive data shuffling.
  • Could theoretically run a distilled / specialised LLM that fits in a fraction of the RAM and still delivers conversational quality.
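
(To make the "only processes relevant spikes" point concrete, here is a toy NumPy sketch. It's my own illustration of activation sparsity, not BrainChip code and not an Akida benchmark; the layer size and 90% sparsity figure are made up for the example.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer: 1024 inputs -> 1024 outputs, with ~90% of input activations
# at zero (the kind of sparsity event-driven hardware is built to exploit).
weights = rng.standard_normal((1024, 1024))
activations = rng.standard_normal(1024)
activations[rng.random(1024) < 0.9] = 0.0

# Dense approach: every weight participates, zeros included.
dense_macs = weights.size

# Event-driven approach: only touch the columns whose input actually fired.
active = np.flatnonzero(activations)
sparse_out = weights[:, active] @ activations[active]
sparse_macs = weights.shape[0] * active.size

assert np.allclose(sparse_out, weights @ activations)
print(f"dense MACs:  {dense_macs:,}")
print(f"sparse MACs: {sparse_macs:,} (~{dense_macs / sparse_macs:.0f}x fewer)")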


Would It Make ChatGPTOSS Work on Phones?

Realistically:
  • You’re not going to run the full 20B gpt-oss on Akida right now — it’s still too large.
  • But with model compression (distillation + sparsity), you could run a smaller, reasoning-optimised model that feels far more responsive than the brute-force CPU approach.
  • Instead of the “2–3 tokens per second” in the article, you might see 10× or more throughput without cooking the phone.


Implications for BrainChip

If OpenAI or others seriously want “ChatGPTOS” to run locally:
  • They’ll either need dedicated AI accelerators in consumer devices or move toward neuromorphic architectures.
  • Qualcomm, Apple and Google already have NPUs — but neuromorphic could leapfrog them in performance per watt.
  • If BrainChip could prove Akida can handle a slimmed-down LLM at useful speeds, it becomes a licensable IP block for smartphone SoCs.
  • This would be a huge market — billions of devices, not just niche edge AI.
 
Reactions: 22 users

7für7

Top 20
[Quoted: Bravo's post above, in full.]

Yeah… Bravo… you know… but probably not… right? I mean… come on… or? I don’t know…

bored pink panther GIF
 
Reactions: 1 users
NASA (Ames, with Wyle Labs & KBR) is playing with a new SNN project, SPIKE-Dx, with a conference paper published in late 2024. Whilst it doesn't appear to have gone to hardware as yet, they identify that as the next step in future work.

They highlight the recent advances in neuromorphic hardware and provide some examples, though I would've considered only Akida & Loihi to be pure neuromorphic plays among those examples?

May be worth checking in on the project from time to time to see whether the next steps to hardware are taken and if we get a look-in. Would hope so, given they have used Akida before and the recent inroads made via FG, ESA etc.


Paper



Screenshot_2025-08-09-13-21-57-62_e2d5b3f32b79de1d45acd1fad96fbb0f.jpg
Screenshot_2025-08-09-13-22-59-62_e2d5b3f32b79de1d45acd1fad96fbb0f.jpg
IMG_20250809_132415.jpg
 
Reactions: 13 users
[Quoted: Bravo's post above, in full.]
Someone please send to Sean ASAP 😆. It should be their priority; such an opportunity staring BRN in the face is not to be missed, according to all shareholders
😉
 
Reactions: 10 users
[Quoted: Bravo's post above, in full.]
I took the opportunity to send to IR at Brainchip.
Why not, I say.
 
Reactions: 17 users

TECH

Regular
While Larry agrees with all the frustration about the lack of ASX announcements and the effect on the SP as a result and also the shorters doing their faecal trades...Larry has never sold and keeps on buying as he did again today....so thank you to everyone who is keeping the SP at this level.....Larry loves you


Really Happy as Larry

Hey Larry........you are one of the smarter shareholders; I can't remember a post of yours that has ever been very negative (maybe I'm wrong). Having our share price held in such a tight band is, yes, very frustrating, but the opportunity is sitting right in everyone's face. For those who currently hold cash, what are you waiting for?

The support from 0.18 to 0.20 is the strongest I have witnessed for a long time; to me it's like we have finally found a true bottom. Yet look how much development has been achieved to date with the Akida models. Any sign of negativity? Absolutely NOT.

Dr. JT has already given Sean a solid commitment to have the team produce AKD 3.0 successfully up and running within approximately the first 8 weeks of 2026, and correct me if I'm slightly wrong here, but from memory the good Dr. stated that he felt this achievement would propel BrainChip for the next decade!! That in itself is a huge call, clearly showing the confidence the entire team has in what we have, which obviously includes our current portfolio of patents, with more to follow.

Very exciting times indeed.......I seem to recall myself saying that over the first decade; we must get it right eventually. :ROFLMAO::ROFLMAO:

Tech (home once again in Perth) ❤️ AKD
 
Reactions: 17 users

Diogenese

Top 20
[Quoted: Tech's post above, in full.]
Hi Tech,

Per the roadmap:

BrainChip Technology Roadmap - BrainChip ,

the Akida 3 tapeout is scheduled for Q1 2026, and the FPGA is due in Q2 2026:

1754802377228.png



Akida 3 capabilities:

1754802510457.png


The neuromorphic architecture is programmable, making it more flexible than Akida 2.

The optional quantization enables 3rd-party models to be used as is, without the need to convert to Akida format. INT16/FP32 support also opens up a broader range of high-precision 3rd-party models. In particular, that will open up the possibility of running legacy open-source CNN models on Akida 3 - that's decades of development.*

Akida 3 will be a universal NN, with spike-like function as an option.

Contextual awareness enables interplay between voice and video:

1754803137741.png




Exciting developments this quarter:

1. Akida 2 - TENNs - computer vision. [We already have TENNs models for speech denoising & speech recognition, and LLM + RAG: https://brainchip.com/ready-to-use-akida-neural-network-models/ ]

2. Akida GenAI - natural language processing.

In Q4, Akida 2 - advanced computer vision; signal processing.

Jonathan Tapson also foreshadowed patents for improved data memory movement.


* This could be useful where a customer has a library of old CNN models which they only need to access occasionally: they can run them in native CNN form instead of having to convert them to Akida SNN format. Hence this extends Akida's backward compatibility.
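
As a footnote to the quantization point: the practical difference between 8-bit and 16-bit weights is simply how much rounding error the integer grid introduces. Below is a minimal sketch using generic symmetric quantisation on made-up weights; it is illustrative only and has nothing to do with BrainChip's actual tooling or the Akida format.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_dequantize(w, bits):
    """Symmetric per-tensor quantisation: snap weights to a signed integer
    grid, then map back to floats so the rounding error can be measured."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

# Stand-in for a pretrained FP32 layer's weights (made-up values).
weights = (rng.standard_normal(100_000) * 0.05).astype(np.float32)

for bits in (4, 8, 16):
    err = np.abs(quantize_dequantize(weights, bits) - weights).mean()
    print(f"INT{bits:>2}: mean abs rounding error ~ {err:.2e}")
```

The rounding error at INT16 is roughly two orders of magnitude smaller than at INT8, which is why native INT16/FP32 support would let precision-sensitive legacy CNNs run essentially as trained, rather than needing retraining or careful calibration to fit a low-bit format.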
 
Reactions: 28 users

7für7

Top 20
Tomorrow — and I’m telling you, folks — it’s going to be a tremendous day, maybe the best day ever for a price-sensitive announcement. People are saying — many smart people — that this one could send the share price higher than it’s ever gone before. Skyrocketing. Like a rocket. And believe me, I know rockets. I had one. The best. Maybe I still do. Probably. Who knows? But I’ll tell you something: Musk never called me about it. Never! Why? Because he was afraid. Afraid of CHINA! And everybody knows it.



Donald Trump GIF
 
Reactions: 12 users

IloveLamp

Top 20
Reactions: 14 users