Would have thought this story would have got prime placement in the monthly BRN newsletter.
Morning Easytiger,
Not that I'm aware of; I've only seen the three still photos that were attached to the original LinkedIn post.
Regards,
Esq.
I think we're in the Epstein files ...
What I honestly don't understand is this: what are they afraid of?
They call the Cloud system a milestone, yet they can't manage to announce it properly through the ASX.
Where's the problem? Is there something behind this that we don't know about?
Is there a reason why they don't officially share this milestone?
Or are they deliberately trying to prevent another hype?
To me, it makes no sense. And if anyone from BrainChip is reading this: please give us a plausible answer without the usual platitudes.
Thank you.
MILESTONE!!! HELLOOOO!?!?!
Elon's new motto:
Tesla Refines Its AI Chip Strategy | ASSOCIATION OF ELECTRONIC INDUSTRIES IN SINGAPORE (AEIS)
Tesla Refines Its AI Chip Strategy: Elon Musk announced that Tesla will streamline #AI #chip #design, focusing on versatile inference chips for both real-time decision-making and limited training, ending the costly split between multiple chip architectures. The Dojo supercomputer project has…
www.linkedin.com
I want Brainchip to be sick of winning
My prediction was absolutely right. I'm always right, everybody knows it! This morning, the chip calls me… and they say, "Donald, please, please, give us a little more time. We don't want to put out tiny, pathetic little news. No, no small news! We're talking about something BIG."
And I said, and this is true, everybody knows it, "CHIP… Take it. Take it and make it!"
Look at this… This is going to be HUUUUUGE! Bigger than anything you've ever seen. People are going to say, "Donald, how did you do it again?" And I'll say, "Very simple. I DID IT!"
And the timing of this is coincidental with both announcements.
Interesting… BUT…
If BrainChip is involved in any way and there's already a deal in place that is expected to generate significant revenue, there must definitely be a price-sensitive announcement.
Since this announcement has already been made public by them, there is no reason to assume that an NDA exists.
Edit… COULD IT BE??
BrainChip's Akida Pico is already being explored for LLM acceleration at the edge, with real-time processing and ultra-low power consumption. That's from a 9-month-old video 'starring' Todd. No doubt we have made further advances over the past 9 months; we have the 'want to do it' incentive.
This next article (see below), titled "Free, offline ChatGPT on your phone? Technically possible, basically useless", describes how running the new gpt-oss on smartphones is technically feasible, but current smartphones aren't powerful enough to make it practical at this point in time.
Right now, GPT-oss on mobile is bottlenecked because it runs purely on the CPU (and sometimes GPU) without tapping into specialized neural processing hardware.
So, I asked ChatGPT 5.0 whether there'd be an improvement if the model were adapted to run on an NPU, particularly one optimised for event-based, sparse processing like Akida, and here's the response I received.
I highlighted the last point for obvious reasons.
Free, offline ChatGPT on your phone? Technically possible, basically useless
You can now run ChatGPT offline with OpenAI's gpt-oss model, even on a phone. But is it practical? Here's what works and what really doesn't.
www.androidauthority.com
ChatGPT 5's Response:
Yes, Akida could absolutely be a technology to consider if the goal is to make something like "ChatGPTOS" actually usable on a mobile phone, and the limitations described in that article are almost a perfect setup for why a neuromorphic processor would matter.
Hereās why:
Why Akida Fits This Problem
The main bottlenecks in running a 20B-parameter LLM on a phone (as described in the article) are:
- RAM requirements – needs 12–24 GB just to load the model.
- Compute requirements – mobile CPUs and GPUs can't sustain high token generation without overheating and throttling.
- Power draw – 100% CPU load drains the battery extremely fast.
- Heat – the phone becomes uncomfortably hot after just a few minutes.
Akida (or any high-efficiency neuromorphic processor) is built for:
- Event-driven, sparse compute – only processes relevant "spikes" of information instead of hammering every computation step like a CPU/GPU.
- Ultra-low power draw – operates in milliwatts for inference.
- On-chip learning and inference – avoids constant memory transfers that waste time and energy.
- Massive parallelism – can run edge AI workloads in real time without overheating.
That means for the same LLM inference task, a neuromorphic setup could:
- Reduce energy consumption by orders of magnitude.
- Maintain higher sustained token rates without thermal throttling.
- Extend usable session time from minutes to hours on a battery.
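As a sanity check on the "12–24 GB just to load the model" figure, here's a back-of-envelope weight-memory calculation. The numbers are illustrative only (real footprints add runtime and KV-cache overhead), and the quantisation levels shown are assumptions, not anything the article specifies:

```python
# Rough RAM needed just to hold the weights of a 20B-parameter model
# at different quantisation levels. Illustrative arithmetic only.

def model_ram_gb(params: float, bits_per_weight: int) -> float:
    """Weight storage in GB: params * bits / 8 bits-per-byte / 1e9."""
    return params * bits_per_weight / 8 / 1e9

params = 20e9  # gpt-oss-class model

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: ~{model_ram_gb(params, bits):.0f} GB")
# 16-bit: ~40 GB, 8-bit: ~20 GB, 4-bit: ~10 GB
```

The 8-bit figure (~20 GB) lands inside the article's 12–24 GB range, which is consistent with the model being run partially quantised on-device.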
Why Not Just Use Mobile NPUs Instead?
Modern NPUs (like the Qualcomm Hexagon or Apple Neural Engine) do accelerate AI workloads, but:
- They're matrix-math optimised, not spiking-event optimised.
- They often require proprietary SDKs and retraining models in special formats.
- They still hit thermal limits in sustained generative AI workloads.
Neuromorphic processors like Akida:
- Can run quantised, sparse, and pruned models natively without expensive data shuffling.
- Could theoretically run a distilled / specialised LLM that fits in a fraction of the RAM and still delivers conversational quality.
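To make the "event-driven, sparse compute" point concrete, here's a toy sketch of why skipping zero activations matters. The 90% sparsity figure is an assumption for illustration, not a measured property of any particular model:

```python
import numpy as np

# Toy illustration of sparse / event-driven compute: if ~90% of
# activations are zero, a processor that only fires on non-zero
# "events" does roughly 10x fewer multiply-accumulates than one
# that evaluates every element densely.

rng = np.random.default_rng(0)
acts = rng.random(10_000)
acts[acts < 0.9] = 0.0          # force ~90% of activations to zero

dense_macs = acts.size           # dense hardware touches everything
sparse_macs = int(np.count_nonzero(acts))  # event-driven skips zeros

print(f"dense: {dense_macs}, sparse: {sparse_macs}, "
      f"speedup ~{dense_macs / sparse_macs:.1f}x")
```

Real speedups depend on how sparse the activations actually are and on how cheaply the hardware can skip zeros; this only shows the arithmetic upside.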
Would It Make ChatGPTOSS Work on Phones?
Realistically:
- You're not going to run the full 20B gpt-oss on Akida right now – it's still too large.
- But with model compression (distillation + sparsity), you could run a smaller, reasoning-optimised model that feels far more responsive than the brute-force CPU approach.
- Instead of the "2–3 tokens per second" in the article, you might see 10× or more throughput without cooking the phone.
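On the "minutes to hours on a battery" claim, a quick battery-budget calculation shows why sustained power draw dominates session length. The battery capacity and both power figures are assumed for illustration; they are not measurements of any specific phone or chip:

```python
# How long a phone battery sustains LLM inference at a given draw.
# All numbers are illustrative assumptions.

def session_hours(battery_wh: float, draw_w: float) -> float:
    """Hours of continuous inference = energy (Wh) / power (W)."""
    return battery_wh / draw_w

battery_wh = 15.0   # ballpark flagship battery (~4000 mAh @ 3.85 V)
cpu_draw_w = 8.0    # assumed sustained all-core CPU inference
npu_draw_w = 0.5    # hypothetical low-power accelerator budget

print(f"CPU-only inference:      ~{session_hours(battery_wh, cpu_draw_w):.1f} h")
print(f"Low-power accelerator:   ~{session_hours(battery_wh, npu_draw_w):.1f} h")
```

Even before thermal throttling, an order-of-magnitude drop in draw turns a sub-two-hour session into an all-day one, which is the practical difference the response is pointing at.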
Implications for BrainChip
If OpenAI or others seriously want "ChatGPTOS" to run locally:
- They'll either need dedicated AI accelerators in consumer devices or move toward neuromorphic architectures.
- Qualcomm, Apple, and Google already have NPUs, but neuromorphic could leapfrog them in power efficiency per watt.
- If BrainChip could prove Akida can handle a slimmed-down LLM at useful speeds, it becomes a licensable IP block for smartphone SoCs.
- This would be a huge market – billions of devices, not just niche edge AI.
Purely coincidental. Yeah nah. lol
@7für7 Is there a reason?
Tony Lewis likes this LinkedIn post. Coincidence that the Akida cloud is released, or is it just good timing?
View attachment 89494
Be nice to think BrainChip can join this national network of AI cloud labs. Nice earner if they can.
https://www.linkedin.com/posts/nati...urce=social_share_send&utm_campaign=copy_link
Looks like a normal day of trading for BRN on the ASX. At least on the surface there appears to be so much dodgy shite going on, but hopefully the BRN SP punches out the other side with some solid upswing. Still laughing at a very recent MF post by Dickleboro (last week): no mention of the last 4C with the uptick in revenue etc. If the revenue was only as much as a coffee shop's, he would have been all over it like a…
Unfortunately no, as they have interns / trainees doing the official BRN newsletters.
Would have thought this story would have got prime placement in the monthly BRN newsletter.
Purely coincidental. Yeah nah. lol
So much BS going on here and on the crapper with some clicky group shite thrown in (happy clappers and negative Nancies). All amusing. Hope all had a good day.
Do you mean the Tesla files?
Elon's new motto:
"Move fast and fall flat on your face!"
We had a great conference with the EDGE AI FOUNDATION, validating our approach with Akida2 with TENNs, which we will start rolling out for evaluation in summer 2025. | Alf Kuchenbuch
www.linkedin.com
(…) Now here comes the question for the more tech-savvy: would it still hypothetically be possible that we are involved?
I did find two references online saying that the Portenta X8 now has support for the Akida PCIe module (one by YouTuber Chris Méndez from the Dominican Republic, who works for Arduino and has uploaded several videos featuring the Akida), but I couldn't find anything regarding the Portenta H7…
View attachment 72965
View attachment 72966
And since I'm in the mood of maxing out my file uploads again, here are Padma AgRobotics' SBIR Phase I and II applications, in case anyone would like to use these to fuel or end this speculation (note that they refer to a cilantro harvester, though, not to the weed-control AI-powered agricultural robot featured on their website and in the January 2023 video):
View attachment 72967
View attachment 72968
View attachment 72969
View attachment 72970
Looks like they are 'happy clapping' at the White House now. Not a bad convert to have on your side.
Purely coincidental. Yeah nah. lol