BRN Discussion Ongoing

Rach2512

Regular


Screenshot_20250811_162509_Samsung Internet.jpg
Screenshot_20250811_162524_Samsung Internet.jpg
 
  • Like
  • Thinking
  • Love
Reactions: 16 users

Diogenese

Top 20
What I honestly don’t understand is this: What are they afraid of?

They call the Cloud system a milestone, yet they can’t manage to announce it properly through the ASX.
Where’s the problem? Is there something behind this that we don’t know about?
Is there a reason why they don’t officially share this milestone?

Or are they deliberately trying to prevent another hype?

To me, it makes no sense – and if anyone from BrainChip is reading this: Please give us a plausible answer without the usual platitudes.
Thank you.

MILESTONE!!! HELLOOOO!?!?
Michael J Fox Hello GIF by Back to the Future Trilogy
I think we're in the Epstein files ...
 
  • Haha
  • Like
Reactions: 22 users

Diogenese

Top 20
Elon's new motto:

"Move fast and fall flat on your face!"
 
  • Haha
Reactions: 10 users

Terroni2105

Founding Member
My prediction was absolutely right – I’m always right, everybody knows it! This morning, the chip calls me… and they say, Donald, please, please, give us a little more time. We don’t want to put out tiny, pathetic little news. No, no small news! We’re talking about something BIG.

And I said – and this is true, everybody knows it – CHIP… I said… Take it. Take it and make it!

Look at this… This is going to be HUUUUUGE! Bigger than anything you’ve ever seen. People are going to say, “Donald, how did you do it again?” And I’ll say, “Very simple. I DID it.”


Donald Trump GIF
I want Brainchip to be sick of winning
 
  • Haha
  • Like
  • Love
Reactions: 12 users
Interesting… BUT…
If BrainChip is involved in any way and there’s already a deal in place that is expected to generate significant revenue, there must definitely be a price-sensitive announcement.

Since this announcement has already been made public by them, there is no reason to assume that an NDA exists.

Edit…. COULD IT BE??
And the timing of this coincides with both announcements. 😎🫣🥸

https://www.linkedin.com/posts/brai...s&rcm=ACoAABr1PUAB8xoHGlYCgh2Xjp9DaZYo5hN0voU
IMG_5464.png
 

Attachments

  • IMG_5464.png
    IMG_5464.png
    2.1 MB · Views: 59
  • Like
  • Fire
  • Love
Reactions: 16 users

manny100

Top 20
The article below, titled “Free, offline ChatGPT on your phone? Technically possible, basically useless”, describes how running OpenAI’s new gpt-oss on smartphones is technically feasible, but current smartphones aren’t yet powerful enough to make it practical.

Right now, gpt-oss on mobile is bottlenecked because it runs purely on the CPU (and sometimes the GPU) without tapping into specialized neural processing hardware.

So, I asked ChatGPT 5.0 whether there’d be an improvement if the model were adapted to run on an NPU, particularly one optimized for event-based, sparse processing like Akida. Here’s the response I received.

I highlighted the last point for obvious reasons.






ChatGPT 5's Response:

Yes — Akida could absolutely be a technology to consider if the goal is to make something like gpt-oss actually usable on a mobile phone, and the limitations described in that article are almost a perfect setup for why a neuromorphic processor would matter.

Here’s why:



Why Akida Fits This Problem

The main bottlenecks in running a 20B-parameter LLM on a phone (as described in the article) are listed below (a rough sizing sketch follows the list):
  1. RAM requirements – Needs 12–24 GB just to load the model.
  2. Compute requirements – Mobile CPUs & GPUs can’t sustain high token generation without overheating and throttling.
  3. Power draw – 100% CPU load drains the battery extremely fast.
  4. Heat – The phone becomes uncomfortably hot after just a few minutes.
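As a rough sanity check on point 1: weight memory scales with parameter count times bytes per parameter. A minimal sketch (the quantisation levels are illustrative assumptions, not figures from the article):

```python
# Back-of-the-envelope weight memory for a 20B-parameter model.
# Illustrative only; the real runtime footprint is larger
# (KV cache, activations, runtime overhead).
PARAMS = 20e9  # 20 billion parameters

for label, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    gigabytes = PARAMS * bits / 8 / 1e9
    print(f"{label}: ~{gigabytes:.0f} GB for the weights alone")

# FP16: ~40 GB, INT8: ~20 GB, INT4: ~10 GB -- roughly why the article
# quotes a 12-24 GB RAM requirement for quantised variants.
```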
Akida (or any high-efficiency neuromorphic processor) is built for:

  • Event-driven, sparse compute — only processes relevant “spikes” of information instead of hammering every computation step like a CPU/GPU.
  • Ultra-low power draw — operates in milliwatts for inference.
  • On-chip learning & inference — avoids constant memory transfers that waste time and energy.
  • Massive parallelism — can run edge AI workloads in real time without overheating.
That means for the same LLM inference task, a neuromorphic setup could (see the battery-life sketch after this list):
  • Reduce energy consumption by orders of magnitude.
  • Maintain higher sustained token rates without thermal throttling.
  • Extend usable session time from minutes to hours on a battery.
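To make “minutes to hours” concrete, here is a minimal battery-life sketch; the battery capacity and power figures are illustrative assumptions of my own (they ignore the screen and radios, and assume the workload actually fits each accelerator):

```python
# Rough sustained-inference battery life: hours = battery_Wh / draw_W.
# All power figures below are illustrative assumptions, not measurements.
BATTERY_WH = 15.0  # a typical flagship phone battery

scenarios = {
    "CPU pegged at 100%": 8.0,   # watts, before thermal throttling
    "Mobile NPU":         2.0,   # watts, sustained
    "Neuromorphic-class": 0.05,  # tens of milliwatts, per the claims above
}

for name, watts in scenarios.items():
    print(f"{name:>20}: ~{BATTERY_WH / watts:.1f} h of continuous inference")
```

Under these assumptions, that is roughly two hours versus several hundred, which is the “minutes to hours” difference in a nutshell.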


Why Not Just Use Mobile NPUs Instead?

Modern NPUs (like Qualcomm Hexagon or Apple Neural Engine) do accelerate AI workloads, but:
  • They’re matrix-math optimised, not spiking-event optimised.
  • They often require proprietary SDKs and retraining models in special formats.
  • They still hit thermal limits in sustained generative AI workloads.
Neuromorphic processors like Akida:
  • Can run quantised, sparse, and pruned models natively without expensive data shuffling.
  • Could theoretically run a distilled / specialised LLM that fits in a fraction of the RAM and still delivers conversational quality.


Would It Make gpt-oss Work on Phones?

Realistically:
  • You’re not going to run the full 20B gpt-oss on Akida right now — it’s still too large.
  • But with model compression (distillation + sparsity), you could run a smaller, reasoning-optimised model that feels far more responsive than the brute-force CPU approach.
  • Instead of the “2–3 tokens per second” in the article, you might see 10× or more throughput without cooking the phone.
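The “2–3 tokens per second” figure is consistent with a simple memory-bandwidth bound: in autoregressive decoding, every generated token has to stream essentially all of the weights from RAM. A minimal sketch with illustrative numbers of my own (not from the article):

```python
# Bandwidth-bound decode: tokens/s <= sustained_bandwidth / bytes_per_token.
MODEL_GB = 10.0       # ~20B params at 4-bit (see the earlier estimate)
SUSTAINED_BW = 25.0   # GB/s a phone CPU might actually sustain (assumed)

print(f"Upper bound: ~{SUSTAINED_BW / MODEL_GB:.1f} tokens/s")
# ~2.5 tokens/s. Sparsity helps exactly here: fewer bytes moved per token.
```

This is also why the response keeps coming back to sparsity and compression rather than raw FLOPS.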


Implications for BrainChip

If OpenAI or others seriously want gpt-oss to run locally:
  • They’ll either need dedicated AI accelerators in consumer devices or move toward neuromorphic architectures.
  • Qualcomm, Apple, Google already have NPUs — but neuromorphic could leapfrog them in power efficiency.
  • If BrainChip could prove Akida can handle a slimmed-down LLM at useful speeds, it becomes a licensable IP block for smartphone SoCs.
  • This would be a huge market — billions of devices, not just niche edge AI.
BrainChip’s Akida Pico is already being explored for LLM acceleration at the edge, with real-time processing and ultra-low power consumption. The video below, ‘starring’ Todd, is nine months old; no doubt we have made further advances since then, as we certainly have the ‘want to do it’ incentive.

Pico can handle ‘specialist’ domains and disregard the mountains of data that aren’t relevant to your specific needs; in BrainChip’s demo, for example, it was appliance repair manuals.
Could Pico be adapted for cybersecurity in mobiles, which has been identified as a weak spot vulnerable to hacking? Perhaps even diagnostic and repair suggestions for the phone itself?
Perhaps information suited to your trade or profession?
It has potential for sure.
Title:

AI at the Edge: How BrainChip’s Akida Pico Takes on the Cloud with LLMs!​

 
  • Like
  • Love
  • Fire
Reactions: 20 users

The Pope

Regular
@7für7 Is there a reason?

Tony Lewis likes this LinkedIn post. Coincidence that the Akida cloud has just been released, or is it just good timing?

View attachment 89494

It would be nice to think BrainChip can join this national network of AI cloud labs. A nice earner if they can.


https://www.linkedin.com/posts/nati...urce=social_share_send&utm_campaign=copy_link
Purely coincidental. Yeah nah. lol
So much BS going on here and on the crapper, with some cliquey group shite thrown in (happy clappers and negative Nancies). All amusing. Hope all had a good day.
 
  • Like
  • Haha
Reactions: 8 users

The Pope

Regular
Looks like a normal day of trading for BRN on the ASX. At least on the surface there appears to be plenty of dodgy shite going on, but hopefully the BRN SP punches out the other side with some solid upswing. Still laughing at a very recent MF post by Dickleboro (last week): no mention of the last 4C with its uptick in revenue etc. If the revenue were only as much as a coffee shop’s, he would have been all over it like a 🪰 to 💩
 
  • Like
  • Fire
  • Love
Reactions: 10 users

The Pope

Regular
Would have thought this story would have got prime placement in the monthly BRN newsletter.
Unfortunately not, as they have interns/trainees doing the official BRN newsletters. 🤪
 
Last edited:
  • Fire
  • Wow
Reactions: 4 users

IloveLamp

Top 20
1000009870.jpg
1000009867.jpg
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 23 users
Hey guys and gals,
Amazing communication from the chip: I opened my emails today and just received the July newsletter from the brain.
We are already in August (it’s the 11th as I write this), not July, and I doubt it has taken them that long to write the newsletter.
I am in the top 200 investors, and for me that really is a piss-poor effort on the company’s side.
For fuck’s sake Sean, wake the fuck up and do better than other companies.
 
  • Like
  • Love
  • Haha
Reactions: 12 users

7für7

Top 20
Purely coincidental. Yeah nah. lol
So much BS going on here and on the crapper, with some cliquey group shite thrown in (happy clappers and negative Nancies). All amusing. Hope all had a good day.
Donald Trump GIF
 

Frangipani

Top 20


D1236A7D-8DD7-4786-8126-3B63BC0F24A5.jpeg
9826B791-D9C4-460C-94D4-F63E2C8D2F5B.jpeg




80458E8E-078C-4344-8923-D697E6095BB3.jpeg
46C36E87-6EF4-4DA2-8D2D-2C96761A3084.jpeg






Not sure whether the info about the alleged autonomous Russian UAV is correct, though - those photos shared are verifiably not images from Russia, but show a project by Andrew Bernas recently featured on Hackster.io:



D093CA54-E071-4A55-81A5-7DBFDB888B36.jpeg




33EA0197-4CDB-40B9-8D22-518BB4B1672D.jpeg


(…)
5569444F-B4B2-43F4-BB31-46E0E02228FE.jpeg


(…)

62B41C51-72D9-4DFB-9110-420487F58C72.jpeg




The comment section under Jordan Linn’s initial post also raises the question of whether re-poster Roberto Lafforgue had actually been aware of “neuromorphic hardware as that developed by #BrainChip” prior to reading the comment of someone whom I believe to be a BrainChip shareholder:



8318C0F0-96A0-4FF8-B5FA-0C342D79F843.jpeg


Well, at least Roberto Lafforgue is aware now, and so are the people in his LinkedIn network…

Still, it is a good reminder that not every mention of our technology on LinkedIn should be given the same weight.

I generally prefer to hear from people with verifiable hands-on experience...
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 22 users

cosors

👀
Elon's new motto:

"Move fast and fall flat on your face!"
Do you mean the Tesla files? 😅
An extract:
"...
The documents that Krupski claims he was able to download from the internal project management system included thousands of customer complaints about Tesla's Autopilot, the company's most strategically important project. They did not match the claims that Musk had been making for years about the alleged quality of his software.
..."
 
  • Like
  • Love
Reactions: 7 users

Frangipani

Top 20

Short interview with Alf Kuchenbuch recorded five weeks ago at Edge AI Milan 2025:






483F4C0A-A3D2-4F54-BA08-C5F2E9078662.jpeg


The video running on a loop on the screens at the BrainChip booth shows two people wearing bulky VR headsets. Have we seen this footage before? 🤔

399BB3B5-2E8F-4FC0-8467-5E36846C7837.jpeg

8F73A829-6268-471B-9369-0DC8221C108B.jpeg


Not sure, though, whether this 👇🏻 has anything to do with us, as I don’t recognise any of our tech?

03ED44D1-A37B-4C9F-9442-D5B983032987.jpeg


And are we supposed to read more into the fact that the 5V Media team who made the video included footage of these three gentlemen from Arduino twice? Once, when Alf Kuchenbuch was asked “What does being part of the foundation mean to you?”, he replied “It’s great to work with everybody and finding out if there are maybe some opportunities where we can actually connect and see if we can do something together.” And again, when asked “What excites you most about the future of Edge AI?”, he replied “It’s probably this aspect of all the parts of the chain coming together and being super low-power. So in the end [cue picture above, which, however, doesn’t look like it has anything to do with Arduino either, as their boards are always blue as far as I’m aware] solutions that are possible that run on a coin cell battery for months, even for a whole year, that would be something, for more autonomy…”

CAC8B14F-6A18-4089-986A-DDE5CE26341E.jpeg



While we do have some touchpoints with Arduino…
(…) Now here comes the question for the more tech-savvy: Would it still hypothetically be possible that we are involved?

I did find two references online that Portenta X8 now has support for the Akida PCIe module (one by YouTuber Chris Méndez from the Dominican Republic, who works for Arduino and has uploaded several videos featuring the Akida), but couldn’t find anything regarding Portenta H7…


View attachment 72965


View attachment 72966


And since I’m in the mood to max out my file uploads again, here are Padma AgRobotics’ SBIR Phase I and II applications, in case anyone would like to use them to fuel or end this speculation (note that they refer to a cilantro harvester, though, not to the weed-control AI-powered agricultural robot featured on their website and in the January 2023 video):

View attachment 72967
View attachment 72968
View attachment 72969
View attachment 72970

… the two companies have so far never announced any formal kind of collaboration.

It could of course be some general footage from Edge AI Milan 2025, like the first or last few seconds of the video, but it did strike me as odd.


A9995426-59C6-4762-A0EB-A2E06BE0AEE8.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 21 users

manny100

Top 20
Purely coincidental. Yeah nah. lol
So much BS going on here and on the crapper, with some cliquey group shite thrown in (happy clappers and negative Nancies). All amusing. Hope all had a good day.
Looks like they are 'happy clapping' at the White House now. Not a bad convert to have on your side.
What is that old, maybe new, saying? 'Today the White House, tomorrow the world'?
This would have got some market interest had it been announced as non-price-sensitive. I guess that is why it was not announced: fear of too much attention and a price spike the ASX does not like??
I guess the power of AKIDA just has to be ours and the White House's little secret for now.
 
Last edited:
  • Like
  • Thinking
Reactions: 14 users

Frangipani

Top 20
Akida gets a mention alongside Loihi 2 - representing “brain-inspired hardware” - in a chapter of the book Machine Vision Analysis in Industry 5.0: Fundamentals, Applications, and Challenges, to be published by Chapman & Hall in September: “Chapter 9: Comprehensive Study of Machine Vision and Image Analysis in Industry 5.0” (p. 216):


A3994BB6-348E-4B88-8C4D-F0FF1DFE7BB3.jpeg
F708914E-E38E-4F5A-A327-6A09D289522E.jpeg
(…)

38E3B4CF-6658-42BB-937E-45A48EF154CF.jpeg

35401078-3C23-4477-82B9-1B8BD1B9560B.jpeg


(…)

97D4FE48-B684-4627-889F-82137A65FC65.jpeg

By describing herself as an “Independent Researcher in AI” and listing her private Gmail address, co-author Royana Anand (possibly related to first author Surabhi Anand?) makes it clear she has not contributed to this publication in her capacity as an AWS (Amazon Web Services) employee.
 
  • Like
  • Love
  • Fire
Reactions: 23 users

IloveLamp

Top 20
  • Like
  • Thinking
  • Fire
Reactions: 23 users

Frangipani

Top 20
Sorry if these two vids have already been posted.

Tony Lewis



Todd Vierra


Need an edge ai account to watch the full thing ^

Here are the presentation slides of our CTO’s tutorial 👆🏻 at the May 2025 Embedded Vision Summit, titled “State-Space Models vs. Transformers for Ultra-Low-Power Edge AI”. (Far too technical for most of us, though…)



33E1D6BA-8B16-4461-A352-3915C2A5A457.jpeg
3D1F7953-953A-4D5E-8ABA-88370B10B6DA.jpeg
C3A4B9BB-A18D-4C3D-8D07-B0B06390E4D2.jpeg
17107DC2-6E6B-436C-AA32-B423445E8E4F.jpeg
72074516-FF07-437A-9B6A-0A7221339EC0.jpeg
7EA9A0E0-78BD-4934-A2CF-29762BA0FEDB.jpeg
259F9A73-F80D-4307-9188-9EE3F716D798.jpeg
FCE3BC0E-E20A-4EE3-80E5-E42A926B8DD5.jpeg

CF701F94-551C-46E6-8F88-10ACCCF10688.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 30 users

Frangipani

Top 20
An easy and worthwhile read: this blog post by Innatera Strategic Business Development Manager Katarzyna ‘Kasia’ Kożdoń will surely resonate with our staff at BrainChip:




Lessons from the floor: What global events reveal about the future of AI sensing​

Katarzyna (Kasia) Kożdoń
Strategic Business Development Manager

August 5, 2025

This year, we spoke with industry leaders at some of the most important tech events around the globe, including CES and Sensors Converge in the USA, Mobile World Congress in both Spain and mainland China, Embedded World in Germany, and Computex in Taiwan. Across continents and markets, one thing stood out: everyone is talking about edge AI.

But while the term is everywhere, what people mean by it – and what they need from it – varies wildly. As edge AI gains popularity, its definition is expanding. We’re starting to see powerful “edge AI” solutions – even GPU-based “edge” computers – that deliver tremendous compute at an equally tremendous power draw. These systems are impressive, but they miss the original point: bringing intelligence closer to the source without ballooning energy costs or form factors.

At Innatera, we remain focused on redefining the edge – not by chasing bigger chips, but by taking intelligence to places it simply couldn’t go before. We're talking about AI that fits in the smallest corners of your world: always-on, ultra-efficient, intuitive, and ambient. Think AI that’s woven into the environment – not sitting on a desk with a fan – and adapts to application needs rapidly and with minimal user input requirements. It can anticipate which lights to turn on to suit your activity, enable headphones that instantly adapt to changing acoustic environments, and power smart bikes that provide real-time feedback on your posture and detect maintenance needs from subtle vibration patterns. It can even enable insect-scale autonomous drones that navigate and inspect infrastructure – no operator required.

The "move from cloud to edge" narrative has gained real traction. But what we heard on the floors of these events went deeper, revealing more specific needs and challenges at the device level.

The Edge AI realms​

Talking with engineers and product leads across events, a pattern emerged: across different use cases, a number of bottlenecks arise, but one aspect remains consistent – traditional hardware isn't enough.

The ARM Upgraders​

"We're already on the edge, but we're compute-starved"
These teams are already using microcontrollers or small SoCs. They’ve done the work to build on low-power ARM systems – but they’ve hit a wall. The problem isn’t moving to the edge; it’s what you can do once you’re there.
Their ask: more intelligence, faster reaction times, and richer features – all while staying within tight power and area budgets.

The Pi Downsizers​

"We have the compute, but can't scale down."
These are the Raspberry Pi and Jetson Nano users. They’re used to development boards with plenty of headroom. But now, they want to productize. And suddenly, the size, cost, and power footprint of their go-to platforms just don’t fit.

Their challenge: replicating the intelligence they’ve built on development kits – but in a much smaller, more efficient, and deployable form factor.

The Always-On Seekers​

"We need something that never sleeps."
These teams are often running on chips like Qualcomm Snapdragon or other heavy-duty SoCs. But they only need those systems to wake up when it’s necessary – for a voice command, a gesture, or an unusual event.

Right now, they’re duty cycling or using inefficient wake-up triggers. What they need is true always-on intelligence that doesn’t cost them battery life or generate heat.
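For readers who aren't embedded engineers, the gap between duty cycling and a true always-on path is easy to quantify. Here is a toy energy comparison over one simulated hour; every number below is an illustrative assumption, not an Innatera or BrainChip spec:

```python
# Duty cycling wakes the big SoC on a timer and usually finds nothing;
# an always-on front end idles at ~mW and interrupts only on real events.
EVENTS_PER_HOUR = 6     # e.g. six voice commands in an hour
POLLS_PER_HOUR = 7200   # duty cycle: polling every 0.5 s
POLL_COST_MJ = 5.0      # millijoules per wake-and-check on the host SoC
MONITOR_COST_MW = 1.0   # always-on front end drawing ~1 mW continuously

duty_cycled_mj = POLLS_PER_HOUR * POLL_COST_MJ
event_driven_mj = MONITOR_COST_MW * 3600 + EVENTS_PER_HOUR * POLL_COST_MJ

print(f"Duty cycled:  {duty_cycled_mj / 1000:.1f} J per hour")
print(f"Event driven: {event_driven_mj / 1000:.1f} J per hour")
```

Under these assumptions the event-driven path uses roughly a tenth of the energy, and it never misses an event that falls between polls.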

The Sensor Hub Pioneers​

"We want to connect everything, but there's no good solution."
This group is one of the most exciting – and underserved. They’re building products that rely on multiple sensor modalities: audio, motion, vision, touch, and others. But there’s no great, power-efficient chip that can act as a unified sensor hub and provide smart preprocessing or inference.

Every time we explain how neuromorphic technology can support multiple sensors, the conversation shifts. Eyes light up.

Why Neuromorphic Makes Sense – Finally​

For years, neuromorphic computing was viewed as researchy, exotic, niche. That perception is shifting. These four user types are showing us exactly why neuromorphic is becoming practically relevant.
Here’s how we map:

  • ARM Upgraders: We offer more compute, but within the same tight constraints.
  • Pi Downsizers: We deliver significant improvements in power and area, with an acceptable trade-off in raw throughput.
  • Always-On Seekers: We give you true always-on AI – not duty cycling hacks.
  • Sensor Hub Pioneers: We provide the missing piece they didn’t know existed – smart sensor fusion on a single, ultra-efficient chip.

This is AI that starts with sensing. That reacts only when it needs to. That lives and breathes in the real world, not in data center specs.

The Ecosystem Reality Check​

We also noticed a stark gap between the market hype and the actual challenges teams are facing:

  • Many embedded teams lack deep AI experience. They know sensors and embedded systems, but they need approachable, adaptable AI.
  • Data is hard to come by. Collecting and labeling real-world data for training takes time and resources most teams don’t have.
  • Benchmarking novel hardware is non-trivial. Neuromorphic, Spiking Neural Networks (SNNs), non-von Neumann architectures – people are interested, but they don’t know how to compare them fairly (or how to integrate them into their current pipeline).

It is clear that innovation at the edge doesn’t happen in isolation. It requires collaboration across the entire stack. From sensor vendors, through solution providers, to infrastructure players – dataset providers, PCB makers, module builders, and distributors – each plays a vital role in bringing smarter edge systems to life.

At Innatera, we sit at the intersection of this network, enabling our partners to build intelligence where it previously wasn’t possible. Whether that means integrating neuromorphic compute into sensor hubs, powering always-on inference, or guiding teams through new AI workflows, we help close the gap between what’s needed and what’s technically achievable today. Together, we’re shaping the next wave of embedded intelligence – one practical step at a time.

Looking Ahead​

The future of AI is ambient – systems that blend into their environment: efficient yet responsive, autonomous yet intuitive when engaged. There is so much potential when tapping into the data already around us. I’m excited to work with partners around the world to bring these kinds of solutions to market.

If you recognize your team in one of the categories above – if you’re compute-starved, power-burdened, or drowning in sensor complexity, or if you're building the missing ecosystem pieces that could help solve these challenges – we should talk and explore what we can create together.
 
  • Like
  • Fire
  • Love
Reactions: 16 users