Gee Sean was unlucky
U real? If Pico can run Llama 1B, it is not hard to stack up to run the full Llama 405B model. It will be significantly cheaper to buy and run this chip at much lower power than an NVIDIA 5090.
Details of Tony Lewis's presentation at the Embedded Vision Summit on Wednesday 21 May 2025:
"
Date: Wednesday, May 21
Start Time: 2:05 pm
End Time: 2:35 pm
At the embedded edge, choices of language model architectures have profound implications on the ability to meet demanding performance, latency and energy efficiency requirements. In this presentation, we contrast state-space models (SSMs) with transformers for use in this constrained regime. While transformers rely on a read-write key-value cache, SSMs can be constructed as read-only architectures, enabling the use of novel memory types and reducing power consumption. Furthermore, SSMs require significantly fewer multiply-accumulate units—drastically reducing compute energy and chip area. New techniques enable distillation-based migration from transformer models such as Llama to SSMs without major performance loss. In latency-sensitive applications, techniques such as precomputing input sequences allow SSMs to achieve sub-100 ms time-to-first-token, enabling real-time interactivity. We present a detailed side-by-side comparison of these architectures, outlining their trade-offs and opportunities at the extreme edge."
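To illustrate the abstract's point about read-only state versus a read-write KV cache, here is a toy sketch (plain NumPy, generic linear SSM dynamics, illustrative sizes — not BrainChip's actual TENNs architecture): a transformer appends one cache entry per generated token, while an SSM folds each token into a fixed-size state.

```python
# Toy contrast of per-token inference state (illustrative only, not TENNs):
# a transformer's KV cache grows with sequence length; a linear SSM
# (x' = A x + B u, y = C x) keeps a constant-size state.
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 4  # model width and SSM state size (made-up values)

# Fixed parameters, read-only at inference time.
A = 0.9 * np.eye(n)
B = rng.standard_normal((n, d)) * 0.1
C = rng.standard_normal((d, n)) * 0.1

def ssm_step(state, u):
    """Process one token: state stays shape (n,) no matter how long the sequence."""
    state = A @ state + B @ u
    return state, C @ state

kv_cache = []          # transformer analogue: one (key, value) entry per token
state = np.zeros(n)    # SSM analogue: constant-size, overwritten in place

for t in range(100):
    u = rng.standard_normal(d)
    kv_cache.append((u.copy(), u.copy()))  # stand-in for a cached (key, value) pair
    state, y = ssm_step(state, u)

print(len(kv_cache))   # grows with sequence length: 100 entries after 100 tokens
print(state.shape)     # stays (4,) regardless of sequence length
```

The growing list versus the fixed array is the memory trade-off the abstract refers to: the SSM's working set is constant, which is what makes read-only memory types and lower power budgets plausible at the edge.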
See the bold above. It seems that BRN are now able to migrate traditional transformer models to state-space models (SSMs) without major performance loss.
Note the italics above: SSMs reduce energy requirements and chip area. Think Pico or a Pico plus.
Pico runs off TENNs, which is a type of SSM.
Does that mean developers can now feast on the multitude of traditional models and distill them to SSMs? Appears so.
Is this potentially another game changer?
We might find out more tomorrow?
I have the 'kiddy' Copilot, which you do not get much out of.
Is someone with GPT-4 or Grok etc. able to quiz the AI to see what the possibilities are?
Of course, but you just keep ranting on and on; there is a difference!
As I said countless times, why can't you downrampers wait until after the AGM? Let them speak first, but no, you just bla-bla-bla and want them gone NOW!
It's you that comes out childish by having no patience like a little kid!
You don't understand how much it takes to shift AI technology to the edge but want it done yesterday!
This sub-forum on HC taught me a lot and made me much more positive: understanding the technology adoption cycle! One poster, Obseverr, is extremely knowledgeable and clearly knows the industry in depth.
Looking at it, I see positive signs: space and military have started using our technology, which is very positive because they are typically early adopters.
I think we will get there this year and the next one.
Yes, I know that has been said for the last couple of years, but the technology shift towards edge AI has slowly started now, IMO. But I will say: if they can't sell IP over the next year, Sean has failed, so I will listen to them carefully tomorrow and take it from there.
I do not want to get too excited, but it looks pretty good. Even a Pico plus size would be good. Drastically reduced energy use and chip area. Hopefully a Pico or Pico plus?
Fk. This is a game changer. We can run Facebook Llama models on Akida Pico.
You didn't fully understand the implication. If Pico can run LLaMA 1B, it can easily scale up to run DeepSeek, Qwen, and the full LLaMA model, something only the NVIDIA 5090 can currently handle (priced between $5,599 and $7,199 AUD), and even that is out of stock. BrainChip could manufacture and sell it directly to the consumer market, bypassing those outdated 10-year IP deal constraints. The demand for running local AI models is tremendous, especially since ChatGPT tokens are becoming increasingly expensive.
If it's small enough to fit into a mobile phone, that would be a game changer for sure - the gap for hackers via mobiles would be closed - that is a game changer on its own. Guessing though.
It's wait and see.
Although at tomorrow's tech meeting I do expect we will get some good tech news. Otherwise, why schedule it right before the AGM?
To maybe get a little WOW factor before attending the AGM? Nothing like a game changer to soften the mood - speculation of course.
Thanks. Tony's presentation description looks the goods, and I just hope it's accurate.
Hi KK,
And this will immediately position us as one of the biggest competitors of Nvidia.
Steve talks about Edge Impulse. I wonder when this was recorded.
Neuromorphic Computing Delivers Low-Power AI
BrainChip's Akida platform offers low-power, neuromorphic machine-learning capabilities. (www.electronicdesign.com)
Hi FJ,
Hi KK,
The problem with Pico (and Akida 2) is that they only exist on the drawing board. BRN have stated that there is no plan to tape out gen 2, although they did come up with a compromise, and we now have an FPGA version that potential customers can use via the cloud.
That was the story with AKD1000, the IP was released in May 2019 but no one would look at it until we stumped up the cash to make the actual, physical chip.
History repeating???
That's different. These chips are designed for edge devices and will be manufactured into SoCs, which is why an FPGA demo is sufficient at this stage. The key value lies in the AI model it runs and its ultra-low power consumption. The AI models provided in the Akida Zoo, or customers' own proprietary models, are what enable the different functionalities.
Pom, you could've had 2 investment properties lad.
Glad only a small fraction of my retirement is in BrainChip, especially having hardly any super. All the spare cash I've put into BRN, and I now hold a nice little parcel; if it's a success it will make me a very happy man in retirement. But if this fails, then my other investment, buying a property in 2019 that we now rent out, will get me out of the shit just in case.
They scheduled it before the AGM to soften the blow to the CEO and board.
I would like to see us do it sooner rather than later. It could be done on the cheap(ish) if they do it as a multi-project wafer like Gen 1.
Maybe this is what Alf was alluding to in that LinkedIn video, when he said something like "Our next step may or may not be to turn this into a chip?" Or something elusive like that.
So you're telling me this is kind of a big step in the way of next-generation computing?
Do you think a regular computer can't handle vision tasks on a CPU or GPU? Of course it can, easily. But a desktop CPU/GPU can't run on edge devices.
But if BrainChip can manufacture, or at least demonstrate, the capability to run the full LLaMA 405B model on its design, even Elon Musk would place an order. It currently takes five A100 GPUs (costing around $30,000 each) to run the 405B model.
Hopefully, BrainChip's CTO can quickly design some demos to validate this concept, perhaps by stacking multiple Akida 1000 chips together to prove its feasibility.
I can't predict the future, but it's certainly a much better alternative for running AI models compared to NVIDIA's power-hungry, expensive, and bulky GPUs.
And I just realised it's been over 2 years since Akida 2 came out. Is number 3 almost ready?