BRN Discussion Ongoing

Frangipani

Top 20

View attachment 89419

At this point in time, the AKIDA satellite with an AKD1000 PCIe Card onboard that OHB Hellas has been developing is not slated to handle “fire detection” - according to their concept website https://giasaas.eu/, this application is instead supposed to be performed by their two other satellites: KRIA (equipped with a Xilinx KRIA KV260 Vision AI Starter Kit) and CORAL (equipped with a Google Coral TPU Dev Board):

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-462394

7889DEFA-DAD3-4ECD-B660-BF20C28B7E46.jpeg


B3FC150A-C58C-4E08-963B-835AF2B5D79F.jpeg

FD4905F7-78AF-48F1-83C6-C28BF87EF869.jpeg
5050B15B-D1B2-4A04-BDEE-2C9F51F3A414.jpeg
 

Attachments

  • 70FA431A-9263-421C-9773-8E7388AA44A0.jpeg
  • Like
  • Love
Reactions: 9 users

View attachment 89420



Unlock Your AI Potential: A Deep Dive into BrainChip’s Akida™ Cloud​


In the rapidly evolving landscape of Artificial Intelligence, having immediate access to the most powerful and efficient tools is paramount. BrainChip is revolutionizing AI development with its innovative Akida Cloud, a platform designed to provide developers with seamless access to the latest neuromorphic processor technology. This cloud initiative focuses on significantly shortening development cycles and empowering creators to bring their AI visions to life faster than ever before.

The core mission of the Akida Cloud is to remove barriers and accelerate innovation. By providing a direct line to the latest Akida technology, developers can program and execute neural networks for immediate results, streamlining the path from concept to product.

The Strategic Advantage of Akida Cloud​


Leveraging the Akida Cloud offers a multitude of strategic benefits for developers, partners, and businesses alike:
  • Accelerated Prototyping: Dive into Akida’s most advanced features and begin prototyping instantly, without waiting for physical hardware.
  • Continuous Innovation: The platform is extensible, so as new versions and configurations of Akida become available, they are added to the Akida Cloud as ready-to-run options.
  • Empowering Partnerships: Partners can effectively demonstrate working models and prototypes to customers, enabling parallel development workflows before hardware is even on-site.
  • Flexible and Accessible: A versatile business model includes options for limited free access and usage-based pricing, with credits applicable to future hardware purchases. An on-premises Akida FPGA Developer Platform is also available for dedicated use.

How to Get Access​


Getting started with the Akida Cloud and its “limited free access” is a straightforward process. To begin your journey, you will need to formally request access through the BrainChip website. You can navigate to the “Developer Hub” or the Akida Cloud information page to get access. You will be prompted to provide your contact information and details about your project or interest in the technology. Once submitted, the BrainChip team will respond with the next steps to get you connected to the platform.

Evaluating the Power of Akida: A Step-by-Step Guide​


The true power of the Akida Cloud lies in its hands-on evaluation capabilities. The platform offers a distinct advantage for R&D teams, allowing them to remotely test and refine advanced neuromorphic models. Here is how it works:
  • Seamless Connection: Getting connected is straightforward. By adding a simple configuration to your local SSH file, you can create a secure and persistent “tunnel” to the Akida Cloud server. This works seamlessly across Windows, macOS, and Linux environments.
  • Launch Your Workspace: Once connected, a single command initiates Jupyter Labs. This launches a pre-configured, browser-based environment loaded with all the necessary tools and Akida-specific libraries.
  • Explore and Experiment: Within Jupyter Labs, you will find a comprehensive library of example notebooks. These examples are your gateway to understanding Akida’s capabilities. You can explore a range of applications, from image classification to complex object detection, and run them directly on the second-generation Akida architecture.
  • Real-Time Data Streaming: A key innovation is the ability to stream your own real-time data to the Akida Cloud for inferencing from your local machine. The results can be streamed back to your local machine, providing immediate feedback. This full loop data acquisition, processing and display is demonstrated in applications like eye-tracking, where this rapid feedback loop allows for quick iteration and training, dramatically improving model accuracy under real-world conditions.
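The connection steps above can be sketched in a few lines. This is a hedged illustration only: the host name, user name, and port below are placeholders I have invented, not BrainChip's actual values — substitute the details you receive when access is granted.

```python
# Minimal sketch of the "add a simple configuration to your local SSH file"
# step. Every value here is a placeholder, not a real BrainChip endpoint.
import os

entry = """
Host akida-cloud
    HostName akida.example.com
    User your-username
    LocalForward 8888 localhost:8888
    ServerAliveInterval 60
"""

ssh_dir = os.path.expanduser("~/.ssh")
os.makedirs(ssh_dir, exist_ok=True)
with open(os.path.join(ssh_dir, "config"), "a") as f:
    f.write(entry)

# Afterwards, the workflow described above would be:
#   ssh akida-cloud
#   jupyter lab --no-browser --port=8888
# then open http://localhost:8888 in your local browser.
print("akida-cloud entry appended to ~/.ssh/config")
```

The `LocalForward` line is what makes the remote, browser-based Jupyter Lab session appear on your own machine; `ServerAliveInterval` keeps the tunnel persistent, matching the "secure and persistent tunnel" described above.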

The Next Generation of Neuromorphic Power​


The Akida Cloud today runs on the second-generation Akida platform, which delivers a monumental leap in performance—offering up to four times the performance and efficiency of the first generation. The architecture is also finely tuned to support state-space-based Temporal Event-based Neural Networks (TENNs), making it exceptionally proficient at processing raw, time-sensitive data from video, audio, and other sensors. This advanced capability simplifies model implementation, reduces model size, and speeds up development for a new generation of intelligent edge devices.

A Glimpse into the Future​


The Akida Cloud is not just about today’s technology; it is a platform for the future. BrainChip’s roadmap includes making future versions of Akida available through the cloud, providing access to different node configurations, enhanced memory, and significant architectural improvements. This will allow for custom configurations that enable customers to perfectly match their unique requirements for integration into an IC. Looking ahead, the Akida Cloud platform will support next-generation workloads, with future generations of Akida designed to manage the demands of future AI applications.

About BrainChip​


BrainChip is the worldwide leader in creating Edge AI on-chip processing and learning solutions. Its pioneering Akida™ processor is a fully digital, event-based AI engine that mimics the principles of the human brain. By analyzing only the essential sensor inputs right at the point of acquisition, Akida delivers unmatched efficiency, precision, and energy economy. BrainChip is dedicated to advancing the future of intelligent computing, bringing AI closer to the sensor and closer to real-time.

[Quoted: the “Unlock Your AI Potential: A Deep Dive into BrainChip’s Akida™ Cloud” post above]
I wonder if this came about and was fast-tracked due to the Qualcomm takeover of Edge Impulse! Either way, it is a great addition 👍
 
Last edited:
  • Like
Reactions: 6 users

Labsy

Regular
While I agree with you about all the frustration caused by the lack of ASX announcements and the resulting effect on the SP… as well as the shorters doing their dirty trades… I also have never sold and keep on buying, as I did again two days ago.
But I still don’t want to thank those who are keeping the SP at this level… and that’s why I hate them!


View attachment 89439
Ditto... Never sold, keep buying.
I'm planning on retiring in 4 years. I'm 49 years old and can work longer if necessary, but hopeful Brainchip comes through with the goods 👍😊
 
  • Like
  • Fire
  • Love
Reactions: 28 users

7für7

Top 20
What in the world of Neuromorphia is going on? Did I wake up in a different dimension??


Overview: BrainChip, with Chinese partnerships, develops the Akida chip, valued at $50 million, for edge AI and autonomous systems.

  • Applications:
    • Autonomous Vehicles: Processes sensor data 40% faster than GPUs, per BrainChip.
    • Security: Enhances real-time surveillance with facial recognition.
    • IoT: Powers smart home devices with low-latency AI.
  • Impact: Integrated into 2 million vehicles globally, including collaborations with NIO.
  • 2025 Outlook: BrainChip aims to expand into smart manufacturing, targeting $100 million in contracts.”

 
  • Like
  • Love
  • Wow
Reactions: 9 users

Bravo

Meow Meow 🐾
Some intriguing “likes” have popped up on Jonathan Tapson’s recent post about his visit to Washington - among them, Manu Rastogi from Apple and Antje Ihlefeld from Meta.

A bit of digging led me to a December 2005 issue of The Neuromorphic Engineer, which describes a project that Jonathan and Antje Ihlefeld appear to have collaborated on back then. The same article also details another project involving Tobi Delbruck and Jonathan.

For context, Tobi Delbruck is a Professor in the Sensors Group at the Institute of Neuroinformatics (a joint venture between the University of Zurich and ETH Zurich) and the co-founder of iniVation, a direct competitor to Prophesee that recently merged with SynSense.

Another familiar name in that article is Andre van Schaik. Judging by Jonathan’s light-hearted LinkedIn post about the Ikea chair 11 months ago, it’s clear Jonathan, Tobi, and Andre share a long-standing and friendly relationship.






Screenshot 2025-08-09 at 1.32.20 pm.png


Screenshot 2025-08-09 at 1.33.09 pm.png









Screenshot 2025-08-09 at 1.09.50 pm.png









Screenshot 2025-08-09 at 1.19.54 pm.png





Screenshot 2025-08-09 at 1.53.31 pm.png
 
  • Like
  • Love
  • Fire
Reactions: 27 users

7für7

Top 20
[Quoted: 7für7’s “What in the world of Neuromorphia” post above]


Can you imagine people asking ChatGPT from now on about Brainchip ? lol
 
  • Haha
Reactions: 1 users

White Horse

Regular
[Quoted: 7für7’s “What in the world of Neuromorphia” post above]

I just sent them a message, suggesting they check their facts regarding the Chinese partnership.
 
  • Like
  • Fire
Reactions: 13 users

White Horse

Regular
I just sent them a message, suggesting they check their facts regarding the Chinese partnership.
AI News Hub's reply:

Hi John,

Thank you for reaching out and for your sharp feedback on the BrainChip article. I take points like this very seriously.

You've raised a critical question about the alleged partnerships with Chinese companies. I am now in the process of re-verifying the original sources for those specific claims with the diligence they deserve.

My commitment is to 100% accuracy, and your message has initiated an internal review. I will update the article with a clarification or correction as soon as that process is complete.

I appreciate you helping me keep the content on AI News Hub reliable.

Best regards

Wait and see?
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Rach2512

Regular

Thought this was interesting: Tesla is scrapping Dojo and turning its focus to real-time edge intelligence for autonomous vehicles.

Screenshot_20250809_202536_Samsung Internet.jpg
Screenshot_20250809_202545_Samsung Internet.jpg
Screenshot_20250809_202559_Samsung Internet.jpg
 

  • Like
  • Fire
  • Thinking
Reactions: 23 users

Rach2512

Regular
  • Like
  • Fire
Reactions: 3 users

stockduck

Regular

"...
- TinyML market to reach $10.8B by 2030 (24.8% CAGR), driven by low-power edge AI demand in smart cities and IoT.

- Hardware dominates 57% revenue share, with STMicro, Renesas, and NXP developing sub-1mW microcontrollers for neural networks.

- Software frameworks like TensorFlow Lite Micro enable mass adoption, growing at 32% CAGR through open-source collaboration.

- Data privacy innovations (e.g., NXP's Secure Enclave, ARM TrustZone) address regulatory demands in smart city deployments.

- Asia-Pacific leads with 38.1% CAGR, as governments integrate TinyML into traffic, energy, and public safety systems.
..."

conclusions?
:whistle::unsure:😊
Just for some dreaming time... anyway, have a nice Sunday.
 
  • Like
  • Fire
  • Thinking
Reactions: 16 users
Edge Impulse is pushing Lora at the edge.
 

Attachments

  • Screenshot_20250810_094436_Google.jpg
  • Thinking
  • Wow
  • Like
Reactions: 4 users

Bravo

Meow Meow 🐾
The new ChatGPT-oss model...




Screenshot 2025-08-10 at 11.52.12 am.png






EXTRACT ONLY


Screenshot 2025-08-10 at 11.50.57 am.png
 
Last edited:
  • Thinking
  • Wow
  • Like
Reactions: 9 users

Bravo

Meow Meow 🐾
This next article (see below) titled “Free, offline ChatGPT on your phone? Technically possible, basically useless”, describes how running the new ChatGPT-oss on smartphones is technically feasible, but current smartphones aren’t powerful enough to make it practical at this point in time.

Right now, GPT-oss on mobile is bottlenecked because it runs purely on the CPU (and sometimes GPU) without tapping into specialized neural processing hardware.

So, I asked ChatGPT 5.0 if there'd be an improvement if the model were to be adapted to run on an NPU - particularly one optimized for event-based, sparse processing like Akida - and here's the response I received.

I highlighted the last point for obvious reasons.






ChatGPT 5's Response:

Yes — Akida could absolutely be a technology to consider if the goal is to make something like “ChatGPTOS” actually usable on a mobile phone, and the limitations described in that article are almost a perfect setup for why a neuromorphic processor would matter.

Here’s why:



Why Akida Fits This Problem

The main bottlenecks in running a 20B parameter LLM on a phone (as described in the article) are:
  1. RAM requirements – Needs 12–24 GB just to load the model.
  2. Compute requirements – Mobile CPUs & GPUs can’t sustain high token generation without overheating and throttling.
  3. Power draw – 100% CPU load drains the battery extremely fast.
  4. Heat – The phone becomes uncomfortably hot after just a few minutes.
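The RAM figure in point 1 is easy to sanity-check with back-of-envelope arithmetic: storing 20 billion parameters at common quantization widths brackets the quoted range (this is my own illustrative calculation, weights only; activations and KV-cache add more on top).

```python
# Back-of-envelope memory footprint for a 20B-parameter model, weights only.
PARAMS = 20e9

for bits, label in [(16, "fp16"), (8, "int8"), (4, "int4")]:
    gib = PARAMS * bits / 8 / 2**30  # bits -> bytes -> GiB
    print(f"{label}: {gib:.1f} GiB")
# fp16 needs ~37 GiB and int4 ~9 GiB, so even an aggressively quantized
# build plus runtime overhead lands in the 12-24 GB band quoted above.
```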
Akida (or any high-efficiency neuromorphic processor) is built for:

  • Event-driven, sparse compute — only processes relevant “spikes” of information instead of hammering every computation step like a CPU/GPU.
  • Ultra-low power draw — operates in milliwatts for inference.
  • On-chip learning & inference — avoids constant memory transfers that waste time and energy.
  • Massive parallelism — can run edge AI workloads in real time without overheating.
That means for the same LLM inference task, a neuromorphic setup could:
  • Reduce energy consumption by orders of magnitude.
  • Maintain higher sustained token rates without thermal throttling.
  • Extend usable session time from minutes to hours on a battery.


Why Not Just Use Mobile NPUs Instead?

Modern NPUs (like Qualcomm Hexagon or Apple Neural Engine) do accelerate AI workloads, but:
  • They’re matrix-math optimised, not spiking-event optimised.
  • They often require proprietary SDKs and retraining models in special formats.
  • They still hit thermal limits in sustained generative AI workloads.
Neuromorphic processors like Akida:
  • Can run quantised, sparse, and pruned models natively without expensive data shuffling.
  • Could theoretically run a distilled / specialised LLM that fits in a fraction of the RAM and still delivers conversational quality.


Would It Make ChatGPTOSS Work on Phones?

Realistically:
  • You’re not going to run the full 20B gpt-oss on Akida right now — it’s still too large.
  • But with model compression (distillation + sparsity), you could run a smaller, reasoning-optimised model that feels far more responsive than the brute-force CPU approach.
  • Instead of the “2–3 tokens per second” in the article, you might see 10× or more throughput without cooking the phone.


Implications for BrainChip

If OpenAI or others seriously want “ChatGPTOS” to run locally:
  • They’ll either need dedicated AI accelerators in consumer devices or move toward neuromorphic architectures.
  • Qualcomm, Apple, Google already have NPUs — but neuromorphic could leapfrog them in power efficiency per watt.
  • If BrainChip could prove Akida can handle a slimmed-down LLM at useful speeds, it becomes a licensable IP block for smartphone SoCs.
  • This would be a huge market — billions of devices, not just niche edge AI.
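The "event-driven, sparse compute" point above can be illustrated with a toy calculation. This is not Akida code, just a NumPy sketch of why skipping zero activations cuts the arithmetic a dense engine would perform:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer input where ~90% of activations are zero, mimicking sparse spiking
# activity, feeding a hypothetical 1024 -> 256 fully connected layer.
activations = rng.random(1024)
activations[activations < 0.9] = 0.0
out_features = 256

dense_macs = activations.size * out_features               # dense engine: every input
event_macs = np.count_nonzero(activations) * out_features  # event-driven: spikes only

print(f"dense MACs: {dense_macs}")
print(f"event MACs: {event_macs}")
print(f"~{dense_macs / event_macs:.0f}x less arithmetic on the sparse path")
```

At ~90% sparsity the event-driven path does roughly a tenth of the multiply-accumulates, which is the mechanism behind the energy and thermal claims in the quoted response.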
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 34 users

7für7

Top 20
[Quoted: Bravo’s “ChatGPT 5's Response” post above]

Yeah… Bravo… you know… but probably not… right? I mean… come on… or? I don’t know…

bored pink panther GIF
 
  • Haha
Reactions: 1 users
NASA (Ames with Wyle Labs & KBR) playing with a new SNN project / conference paper published late 2024 - SPIKE-Dx. Whilst it doesn't appear to have gone to hardware as yet, they identify that as the next step in future works.

They highlight the recent advances in neuromorphic hardware and provide some examples though I would've considered only Akida & Loihi would be pure neuromorphic plays in those examples?

May be worth checking in on the project from time to time to see whether the next steps to hardware are taken and whether we get a look-in. Would hope so, given they have used Akida before and the recent inroads made via FG, ESA etc.


Paper



Screenshot_2025-08-09-13-21-57-62_e2d5b3f32b79de1d45acd1fad96fbb0f.jpg
Screenshot_2025-08-09-13-22-59-62_e2d5b3f32b79de1d45acd1fad96fbb0f.jpg
IMG_20250809_132415.jpg
 
  • Like
  • Fire
  • Love
Reactions: 20 users
[Quoted: Bravo’s “ChatGPT 5's Response” post above]
Someone please send this to Sean ASAP 😆. It should be their priority: such an opportunity staring BRN in the face is not to be missed, according to all shareholders
😉
 
  • Like
  • Fire
Reactions: 12 users
This next article (see below), titled “Free, offline ChatGPT on your phone? Technically possible, basically useless”, describes how running the new ChatGPT-oss on smartphones is technically feasible, but current smartphones aren’t powerful enough to make it practical.

Right now, GPT-oss on mobile is bottlenecked because it runs purely on the CPU (and sometimes GPU) without tapping into specialized neural processing hardware.

So, I asked ChatGPT 5.0 whether there'd be an improvement if the model were adapted to run on an NPU - particularly one optimized for event-based, sparse processing like Akida - and here's the response I received.

I highlighted the last point for obvious reasons.






ChatGPT 5's Response:

Yes — Akida could absolutely be a technology to consider if the goal is to make something like gpt-oss actually usable on a mobile phone, and the limitations described in that article are almost a perfect setup for why a neuromorphic processor would matter.

Here’s why:



Why Akida Fits This Problem

The main bottlenecks in running a 20B parameter LLM on a phone (as described in the article) are:
  1. RAM requirements – Needs 12–24 GB just to load the model.
  2. Compute requirements – Mobile CPUs & GPUs can’t sustain high token generation without overheating and throttling.
  3. Power draw – 100% CPU load drains the battery extremely fast.
  4. Heat – The phone becomes uncomfortably hot after just a few minutes.
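The RAM figure above can be sanity-checked with some back-of-envelope arithmetic. The sketch below is illustrative only; the 20B parameter count is the only number taken from the article, and the 12–24 GB range it quotes sits roughly between the 8-bit and 16-bit cases once activation buffers and runtime overhead are added:

```python
# Back-of-envelope memory footprint for a 20B-parameter model at
# different weight precisions (illustrative arithmetic only).

def model_size_gb(n_params: float, bits_per_weight: int) -> float:
    """Raw weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

n = 20e9  # 20 billion parameters
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: {model_size_gb(n, bits):.0f} GB")
# 16-bit weights: 40 GB, 8-bit: 20 GB, 4-bit: 10 GB
```

Even at aggressive 4-bit quantisation, the raw weights alone approach the total RAM of a typical flagship phone, which is why the article's "technically possible, basically useless" verdict holds for the full model.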
Akida (or any high-efficiency neuromorphic processor) is built for:

  • Event-driven, sparse compute — only processes relevant “spikes” of information instead of hammering every computation step like a CPU/GPU.
  • Ultra-low power draw — operates in milliwatts for inference.
  • On-chip learning & inference — avoids constant memory transfers that waste time and energy.
  • Massive parallelism — can run edge AI workloads in real time without overheating.
That means for the same LLM inference task, a neuromorphic setup could:
  • Reduce energy consumption by orders of magnitude.
  • Maintain higher sustained token rates without thermal throttling.
  • Extend usable session time from minutes to hours on a battery.
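The battery-life claim can be made concrete with a rough calculation. The wattages below are assumptions for illustration (a sustained CPU draw near its thermal limit versus a sub-watt accelerator budget), not measured figures for any real device:

```python
# Rough session-time comparison, assuming a ~15 Wh phone battery and
# hypothetical average inference power draws (both wattages are
# assumptions for illustration, not benchmarks).

BATTERY_WH = 15.0

def session_hours(avg_power_w: float, battery_wh: float = BATTERY_WH) -> float:
    """Hours of continuous inference before the battery is drained."""
    return battery_wh / avg_power_w

cpu_w = 8.0    # sustained CPU inference near thermal limit (assumed)
neuro_w = 0.5  # event-driven accelerator power budget (assumed)

print(f"CPU-only:     {session_hours(cpu_w):.1f} h")
print(f"Neuromorphic: {session_hours(neuro_w):.1f} h")
```

Under these assumed numbers the session stretches from under two hours to more than a day, which is the "minutes to hours" point in user terms.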


Why Not Just Use Mobile NPUs Instead?

Modern NPUs (like Qualcomm Hexagon or Apple Neural Engine) do accelerate AI workloads, but:
  • They’re matrix-math optimised, not spiking-event optimised.
  • They often require proprietary SDKs and retraining models in special formats.
  • They still hit thermal limits in sustained generative AI workloads.
Neuromorphic processors like Akida:
  • Can run quantised, sparse, and pruned models natively without expensive data shuffling.
  • Could theoretically run a distilled / specialised LLM that fits in a fraction of the RAM and still delivers conversational quality.
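A toy sketch of why event-driven, sparse compute saves work: a dense engine performs a multiply-accumulate for every weight regardless of the input, while an event-driven layer only processes non-zero inputs ("spikes"). This is a pure-Python illustration of the principle, not Akida's actual execution model:

```python
# Counting multiply-accumulates (MACs) for a dense engine versus an
# event-driven one on a sparse activation vector. Illustrative only.
import random

def dense_macs(n_in: int, n_out: int) -> int:
    # Every input contributes to every output, zero or not.
    return n_in * n_out

def event_driven_macs(inputs: list[float], n_out: int) -> int:
    # Only non-zero inputs ("spikes") trigger any work.
    active = sum(1 for x in inputs if x != 0.0)
    return active * n_out

random.seed(0)
# ~90% sparse activations, as might come out of a ReLU layer
inputs = [random.random() if random.random() < 0.1 else 0.0
          for _ in range(1024)]
n_out = 256
print("dense MACs:       ", dense_macs(len(inputs), n_out))
print("event-driven MACs:", event_driven_macs(inputs, n_out))
```

At ~90% sparsity the event-driven count is roughly a tenth of the dense one, and that gap widens as models are pruned harder.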


Would It Make gpt-oss Work on Phones?

Realistically:
  • You’re not going to run the full 20B gpt-oss on Akida right now — it’s still too large.
  • But with model compression (distillation + sparsity), you could run a smaller, reasoning-optimised model that feels far more responsive than the brute-force CPU approach.
  • Instead of the “2–3 tokens per second” in the article, you might see 10× or more throughput without cooking the phone.
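To put that throughput difference in user terms, here's the time to generate a short ~150-token reply at the article's rate versus a hypothetical 10x rate (illustrative arithmetic only; the 2-3 tok/s figure is the article's, the rest are assumptions):

```python
# Wall-clock time for a short conversational reply at different
# token-generation rates. Illustrative arithmetic only.

def reply_seconds(n_tokens: int, tokens_per_s: float) -> float:
    """Seconds to generate a reply of n_tokens at the given rate."""
    return n_tokens / tokens_per_s

reply = 150  # tokens in a short conversational answer (assumed)
for rate in (2.5, 25.0):
    print(f"{rate:>4} tok/s -> {reply_seconds(reply, rate):.0f} s per reply")
```

A minute per answer versus a few seconds is the difference between a demo and something people would actually use.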


Implications for BrainChip

If OpenAI or others seriously want gpt-oss to run locally:
  • They’ll either need dedicated AI accelerators in consumer devices or move toward neuromorphic architectures.
  • Qualcomm, Apple, Google already have NPUs — but neuromorphic could leapfrog them in performance per watt.
  • If BrainChip could prove Akida can handle a slimmed-down LLM at useful speeds, it becomes a licensable IP block for smartphone SoCs.
  • This would be a huge market — billions of devices, not just niche edge AI.
I took the opportunity to send to IR at Brainchip.
Why not I say
 
  • Like
  • Fire
  • Love
Reactions: 20 users

TECH

Top 20
While Larry agrees with all the frustration about the lack of ASX announcements, the effect on the SP as a result, and the shorters doing their faecal trades...Larry has never sold and keeps on buying, as he did again today....so thank you to everyone who is keeping the SP at this level.....Larry loves you

View attachment 89417

Really Happy as Larry

Hey Larry........you are one of the smarter shareholders; I can't remember a post of yours that has ever been very negative (maybe I'm wrong).
Having our share price held in such a tight band is very frustrating, yes, but the opportunity is sitting right in everyone's face. For those
who currently hold cash, what are you waiting for?

The support from 0.18 to 0.20 is the strongest I have witnessed for a long time; to me it's like we have finally found a true bottom. Yet look
how much development has been achieved to date with the Akida models. Any sign of negativity? Absolutely NOT.

Dr. JT has already given Sean a solid commitment to have the team produce AKD 3.0 successfully up and running within approximately the first 8 weeks of 2026, and correct me if I'm slightly wrong here, but from memory the good Dr. stated that he felt this achievement would
propel BrainChip for the next decade!! That in itself is a huge call, clearly showing the confidence the entire team has in what we have, which
obviously includes our current portfolio of patents, with more to follow.

Very exciting times indeed.......I seem to recall myself saying that over the first decade, must get it right eventually. :ROFLMAO::ROFLMAO:

Tech (home once again in Perth) ❤️ AKD
 
  • Like
  • Love
  • Fire
Reactions: 29 users