BRN Discussion Ongoing

No doubt there will be a lot of focus on BRN's position in the space sector and, imo, telecom with 6G. Hopefully there's some change in delivery compared to the last AGM. In theory the new Pico, Gen 2, etc. should open big doors, and that's what I want to hear about today.
Will we see or hear announcements with large partners this year? Yes or no: that's a question I want a straight answer to from Sean today.
What about the USB channels? Will they come to fruition this year using the new M.2 card?
Go brainchip
 
  • Like
Reactions: 6 users

manny100

Regular
Grok says no...

In conclusion, it is not possible to directly convert LLaMA 1B into a state space model due to their architectural differences. While hybrid models like Bamba and Mamba-2 exist, combining elements of both, they are new designs, not transformations of existing Transformer models. For practical applications, consider training a new hybrid model or SSM, but this would be a fresh endeavor, not a conversion.


... never trust LLMs 😄

View attachment 83787
When you ask Grok you should include a reference to the new method of converting transformers into SSMs, as described by Tony Lewis in his LinkedIn post for his presentation at the Embedded Vision Summit.
Grok may not be aware of Tony's presentation. The techniques he talks about are 'new'.
The answer you get may be different.
Pico runs off TENNs, which is a type of SSM.
Check his LinkedIn post.
 
  • Like
  • Love
Reactions: 6 users

TECH

Regular
Just as well, with expected industry growth our portfolio should be worth a heap, especially given Loihi 2 and NorthPole are still in R&D. It just shows how hard the AI-at-the-Edge nut is to crack.
That is our safety net. I never worry at all.
There is a reason we have been chosen by Frontgrade and the US AFRL, and it's because we are the best and, of course, there is just no viable alternative on this planet.
It's taking a lot of time, but it is really only just a matter of time.
The anomaly is that in life time goes by so quickly for everything... except for BRN deals. It has, however, given me an opportunity to pick up a lot of cheapies over the journey.
The AGM question time should provide some entertainment. BODs are well versed and rehearsed in navigating through these.
Banter will fly around for a while getting nowhere, then Tony V will just end it by saying 'noted'. But it will be entertaining.

Manny,

Each and every time I have raised the question, "Name another company who has really succeeded at the edge or far edge",
it's always been deafening silence, going back to the days when I used to engage with crybabies on HC.

Like I say... toughen up, princesses: you've entered a very corrupt, greedy, self-indulgent arena. There are no rules where money
is involved; just ask our two leading authorities, the ASX and ASIC.

My only hope is all the mouthpieces who moan all year long, actually go to the AGM and stand up and pose their questions directly
to the CEO and Chairperson.

💎❤️ Akida 1000.
 
  • Like
  • Love
  • Fire
Reactions: 21 users
Morning all, and looking forward to catching up with whoever is going to the AGM today. The wife banned me from wearing my West Ham top today, but most people should know what I look like, so be sure to say hello.
 
  • Like
  • Haha
  • Fire
Reactions: 12 users

DK6161

Regular
Manny,

Each and every time I have raised the question, "Name another company who has really succeeded at the edge or far edge",
it's always been deafening silence, going back to the days when I used to engage with crybabies on HC.

Like I say... toughen up, princesses: you've entered a very corrupt, greedy, self-indulgent arena. There are no rules where money
is involved; just ask our two leading authorities, the ASX and ASIC.

My only hope is all the mouthpieces who moan all year long, actually go to the AGM and stand up and pose their questions directly
to the CEO and Chairperson.

💎❤️ Akida 1000.
You should run for the CEO role when it becomes vacant. Certainly talk the talk, but no re$ult$.
 
  • Fire
Reactions: 1 users

db1969oz

Regular
Morning all, and looking forward to catching up with whoever is going to the AGM today. The wife banned me from wearing my West Ham top today, but most people should know what I look like, so be sure to say hello.
Enjoy Pom,
I can't get there today, but if we end up with enough money left over for a drink after this horrendous investment, then you are one man I'd enjoy a beer and a laugh with. Cheers.
 
  • Like
  • Fire
  • Love
Reactions: 4 users

DK6161

Regular
Morning all, and looking forward to catching up with whoever is going to the AGM today. The wife banned me from wearing my West Ham top today, but most people should know what I look like, so be sure to say hello.
Enjoy the trip mate.
Hopefully I can afford to attend an AGM one day.
 
Enjoy the trip mate.
Hopefully I can afford to attend an AGM one day.
Been collecting points, so it has its advantages when I spend my last $$ on a few more shares 😂
 
  • Like
Reactions: 3 users
Enjoy Pom,
I can't get there today, but if we end up with enough money left over for a drink after this horrendous investment, then you are one man I'd enjoy a beer and a laugh with. Cheers.
I hope they have lined up some bacon sarnies with HP sauce.
 
  • Haha
Reactions: 2 users

FJ-215

Regular
Buffett, have a nice retirement.
Sean needs to give Greg Abel a call.
 
  • Like
Reactions: 4 users

manny100

Regular
Manny,

Each and every time I have raised the question, "Name another company who has really succeeded at the edge or far edge",
it's always been deafening silence, going back to the days when I used to engage with crybabies on HC.

Like I say... toughen up, princesses: you've entered a very corrupt, greedy, self-indulgent arena. There are no rules where money
is involved; just ask our two leading authorities, the ASX and ASIC.

My only hope is all the mouthpieces who moan all year long, actually go to the AGM and stand up and pose their questions directly
to the CEO and Chairperson.

💎❤️ Akida 1000.
That is a great summary, thanks.
 
What time is this little leprechaun bringing us the bag of gold today, 10 am?
 
  • Haha
  • Like
Reactions: 3 users

manny100

Regular
You should run for the CEO role when it becomes vacant. Certainly talk the talk, but no re$ult$.
CEOs come and go; that is corporate life.
The BOD is responsible for assessing CEO performance.
In turn, holders assess BOD performance.
Holders keep re-electing directors, usually in landslide victories.
The above tells the story that the majority of holders are happy with the direction.
Around 74% of SOI is held by retail (source: SWS).
Some holders are not happy with remuneration, as per the strikes, but when it comes to performance they vote yes via elections.
 
  • Like
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Details of Tony Lewis's presentation at the Embedded Vision Summit on Wednesday, 21 May 2025.
"
Date: Wednesday, May 21
Start Time: 2:05 pm
End Time: 2:35 pm
At the embedded edge, choices of language model architectures have profound implications on the ability to meet demanding performance, latency and energy efficiency requirements. In this presentation, we contrast state-space models (SSMs) with transformers for use in this constrained regime. While transformers rely on a read-write key-value cache, SSMs can be constructed as read-only architectures, enabling the use of novel memory types and reducing power consumption. Furthermore, SSMs require significantly fewer multiply-accumulate units—drastically reducing compute energy and chip area. New techniques enable distillation-based migration from transformer models such as Llama to SSMs without major performance loss. In latency-sensitive applications, techniques such as precomputing input sequences allow SSMs to achieve sub-100 ms time-to-first-token, enabling real-time interactivity. We present a detailed side-by-side comparison of these architectures, outlining their trade-offs and opportunities at the extreme edge."
See bold above. It seems that BRN is now able to migrate traditional transformer models to state-space models (SSMs) without major performance loss.
Note the italics above: SSMs reduce energy requirements and chip area. Think Pico, or a Pico plus.
Pico runs off TENNs, which is a type of SSM.
Does that mean developers can now feast on the multitude of traditional models and distill them to SSMs? It appears so.
Is this potentially another game changer?
We might find out more tomorrow?
I have the 'kiddy' Copilot, which you do not get much out of.
Is someone with GPT-4 or Grok etc. able to quiz the AI to see what the possibilities are?


Hi Manny,

Here's the response I got from ChatGPT-4.

Tony Lewis’s upcoming presentation at the Embedded Vision Summit outlines a potentially transformative shift in how language models are deployed at the edge. Here's an analysis of the implications and disruptive potential of this new method:




Key Implications of the New Processing Method Using SSMs


1. Significant Gains in Energy and Compute Efficiency


  • Fewer Multiply-Accumulate Units: State-space models (SSMs) inherently require fewer MAC operations, drastically reducing energy usage and silicon real estate. This is crucial in edge deployments where power budgets are tight.
  • Read-Only Architecture: Eliminates the need for dynamic key-value cache mechanisms used in transformers, allowing use of non-volatile or low-power memory types (e.g., MRAM, ReRAM), further slashing power consumption.
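To make that contrast concrete, here's a minimal numerical sketch (my own toy illustration, not BrainChip's TENNs or any shipped model): a linear state-space layer carries a fixed-size hidden state forward, so its memory footprint stays constant instead of growing with sequence length the way a transformer's key-value cache does.

```python
import numpy as np

def ssm_scan(A, B, C, xs):
    """Toy linear SSM: h[t] = A @ h[t-1] + B @ x[t], y[t] = C @ h[t]."""
    h = np.zeros(A.shape[0])       # fixed-size state, reused every step
    ys = []
    for x in xs:
        h = A @ h + B @ x          # constant-cost state update
        ys.append(C @ h)           # read-out; no per-token cache to store
    return np.stack(ys)

rng = np.random.default_rng(0)
d_in, d_state, seq_len = 4, 8, 16
A = 0.9 * np.eye(d_state)                   # stable toy dynamics
B = 0.1 * rng.normal(size=(d_state, d_in))
C = rng.normal(size=(d_in, d_state))
ys = ssm_scan(A, B, C, rng.normal(size=(seq_len, d_in)))
print(ys.shape)  # (16, 4): state memory is d_state regardless of seq_len
```

Because the state is read and overwritten in place, nothing in the loop needs the kind of growing read-write cache a transformer maintains.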

2. Latency and Responsiveness


  • Precomputed Input Sequences: SSMs can leverage static structure to enable sub-100 ms time-to-first-token performance. This real-time capability is a major win for voice assistants, AR/VR systems, and mobile apps.

3. Compatibility and Transition Path


  • Distillation from Transformers (e.g., LLaMA): By enabling smooth migration from established transformer models, the SSM approach lowers the barrier to adoption for companies with large pretrained models or investment in transformer-based workflows.

4. Broader Hardware Support


  • The reduced need for large, flexible compute units means SSMs can run efficiently on less powerful MCUs, DSPs, or dedicated edge AI chips—dramatically widening the range of viable deployment targets.
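A rough back-of-envelope comparison (my own simplified cost model, not figures from the talk) shows why the compute profile is so much lighter: self-attention's per-token multiply-accumulates scale with context length, while an SSM layer's per-token cost is constant.

```python
def attention_macs_per_token(d_model: int, context_len: int) -> int:
    # Per new token: QK^T scores against the cache plus the weighted
    # sum over cached values; projections ignored (similar for both).
    return 2 * d_model * context_len

def ssm_macs_per_token(d_state: int, d_model: int) -> int:
    # Per token: state transition plus input/output maps; no
    # dependence on how much context has already been consumed.
    return d_state * d_state + 2 * d_state * d_model

d_model, d_state = 2048, 64
for ctx in (512, 4096, 32768):
    print(f"ctx={ctx:6d}  attn={attention_macs_per_token(d_model, ctx):>12,}  "
          f"ssm={ssm_macs_per_token(d_state, d_model):>12,}")
```

The exact constants are toy choices, but the shape of the result is the point: attention's cost grows linearly with context while the SSM column never moves.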



Edge Computing Success Potential


SSMs seem highly promising for edge applications, especially in:


  • Battery-powered devices (wearables, IoT sensors)
  • Latency-sensitive interfaces (voice/user interaction)
  • Bandwidth-constrained environments (offline processing)
  • Cost-sensitive products where transformer demands are prohibitive

Their advantages directly address the key blockers for transformer adoption at the edge (power, memory, compute), making SSMs a potentially dominant architecture in this space within 1–3 years.




Disruption Potential for NVIDIA and Other Players


NVIDIA’s ecosystem is largely optimized for transformer-style architectures (e.g., large GPU memory, high-throughput MAC arrays, transformer-optimized CUDA libraries). Here's how this development could affect them:


Challenges for NVIDIA


  • Less dependence on GPUs: If SSMs run efficiently on CPUs, ASICs, or even microcontrollers, edge devices may increasingly bypass NVIDIA’s hardware.
  • New AI chip entrants gain ground: Startups or ARM-based players can capitalize on the lighter compute profile to offer cheaper, more efficient edge AI solutions.

Opportunities for NVIDIA


  • Adaptation potential: NVIDIA could pivot by offering cuDNN-like support for SSMs or incorporating them into TensorRT. They may also design more SSM-friendly hardware blocks in future Jetson modules.

Biggest Threat


  • If open-source ecosystems and chip vendors like Apple (Neural Engine), Google (Edge TPU), or Qualcomm fully embrace SSMs ahead of NVIDIA, they could seize a disproportionate share of the edge AI market—especially in consumer devices.



Conclusion


Tony Lewis’s presentation introduces a processing paradigm that could redefine edge AI, making high-performance language models feasible on low-power devices. This is not just an incremental improvement—it opens the door to widespread deployment of real-time LLMs far beyond cloud and high-end hardware. While not an immediate existential threat to NVIDIA, it does present a strategic inflection point that the company—and its competitors—must respond to.








Migration Path: LLaMA to SSM

  • Stage 1, transformer model (e.g., LLaMA): a pretrained open-weight transformer (e.g., LLaMA 2 or 3). High accuracy, but a large model footprint that demands heavy compute and memory.
  • Stage 2, distillation process: use distillation techniques to transfer knowledge from LLaMA to an SSM. Reduced model size while retaining most of the original model's performance.
  • Stage 3, state-space model (SSM): an SSM version optimized for embedded and edge deployment. Sub-100 ms latency, lower power consumption, less memory and compute required.
  • Stage 4, edge device deployment: deploy the SSM on resource-constrained devices (e.g., wearables, microcontrollers, edge AI chips). Real-time local inference, no need for cloud compute, broader hardware compatibility.
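As a hypothetical sketch of what the distillation stage's objective could look like (the plain KL loss and all names here are my assumptions; the abstract does not disclose the actual method), the student SSM is trained so its output distribution matches the teacher transformer's:

```python
import numpy as np

def softmax(z, temp=1.0):
    z = z / temp
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(teacher_logits, student_logits, temp=2.0):
    """Mean KL(teacher || student) over the vocabulary, per token."""
    p = softmax(teacher_logits, temp)
    q = softmax(student_logits, temp)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

rng = np.random.default_rng(1)
teacher = rng.normal(size=(8, 100))   # toy sizes: 8 tokens, vocab of 100
print(distill_loss(teacher, teacher))                    # 0.0 at a perfect match
print(distill_loss(teacher, rng.normal(size=(8, 100))))  # positive for a mismatch
```

In a real pipeline the student logits would come from the SSM's forward pass and this loss would be minimized by gradient descent; the snippet only shows the shape of the objective.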


Summary

The mention of LLaMA in Tony Lewis’s talk is a strategic highlight because it connects cutting-edge edge AI architecture (SSM) with a proven, widely-used transformer backbone. This makes the transition to efficient edge AI practical, not just theoretical—positioning SSMs as an immediate, disruptive alternative to transformer inference in embedded systems.



Competitor Impact Matrix: Impact of BrainChip's SSM Innovation

  • Qualcomm. Edge AI focus: DSPs + NPUs (Snapdragon), optimized for transformers and CNNs. Vulnerability to SSM disruption: High (transformer-centric stack, limited neuromorphic capability). Opportunity to adapt: Medium (may update software tools, but hardware less suited to SSMs).
  • NVIDIA. Edge AI focus: GPUs (Jetson, TensorRT), dominant in transformer-based AI. Vulnerability: Moderate (not optimized for low-power edge, but strong ecosystem). Opportunity to adapt: High (could adapt TensorRT and Jetson for SSM-style inference).
  • Apple. Edge AI focus: Neural Engine with transformer models (e.g., Siri, on-device ML). Vulnerability: Moderate (strong local AI, but based on transformer-style acceleration). Opportunity to adapt: High (full-stack control allows swift hardware/software adaptation).
  • Google (TPU). Edge AI focus: Edge TPU with support for CNNs and transformers (Coral, Nest devices). Vulnerability: High (rigid accelerator design may not support dynamic SSM requirements). Opportunity to adapt: Low (ecosystem may struggle to pivot hardware/software stack).
  • Intel. Edge AI focus: Movidius VPU, general AI frameworks, some neuromorphic R&D (Loihi). Vulnerability: Moderate (some neuromorphic exposure but no strong edge AI market share). Opportunity to adapt: Medium (R&D rich, but limited real-world SSM integration so far).
  • BrainChip. Edge AI focus: neuromorphic Akida chip + SSM optimized for ultra-low-power edge AI. Vulnerability: Low (first-mover advantage). Opportunity to adapt: Very High (core IP is directly aligned with the SSM paradigm).


This matrix highlights that BrainChip’s innovation poses the greatest disruptive risk to Qualcomm and Google, while Apple and NVIDIA have greater strategic flexibility to respond. BrainChip stands to benefit most if SSM-based models gain widespread edge adoption.




🆚 Why Incumbents Might Continue Without SSMs (For Now)

✅ Reasons They Might Stick with Traditional Methods

  • Mature toolchains: Qualcomm, NVIDIA, and Google have invested heavily in software/hardware ecosystems optimized for transformers and CNNs.
  • Good enough performance: For many real-world use cases, transformer-lite models or CNN hybrids perform sufficiently well.
  • Inertia and risk: Enterprises tend to avoid early adoption of unproven paradigms, especially if retraining, tooling, or silicon changes are required.
  • Edge isn't one-size-fits-all: Many edge applications (e.g. object detection) don't need SSM-specific strengths like long-term memory or low-latency language processing.

🚧 But Here's the Catch

If applications do demand:
  • Long sequence memory (e.g. streaming NLP, real-time command recognition),
  • Ultra-low latency (sub-100 ms interactivity),
  • Minimal power and heat (wearables, implants, sensors),
then traditional methods hit a hard ceiling. SSMs aren’t just an incremental tweak—they’re a fundamentally different way to process sequences, unlocking performance where transformers falter.
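The sub-100 ms point can be illustrated with a toy recurrence (again my own sketch, not the presented system): a fixed prompt can be folded into one fixed-size state offline, so the first token at request time costs a single step regardless of prompt length.

```python
import numpy as np

def step(A, B, C, h, x):
    """One SSM recurrence step: update the state, emit a read-out."""
    h = A @ h + B @ x
    return h, C @ h

rng = np.random.default_rng(2)
d_in, d_state = 8, 32
A = 0.95 * np.eye(d_state)
B = 0.05 * rng.normal(size=(d_state, d_in))
C = rng.normal(size=(d_in, d_state))

# Offline: scan a long fixed prompt into a single d_state-sized vector.
h = np.zeros(d_state)
for x in rng.normal(size=(1024, d_in)):
    h, _ = step(A, B, C, h, x)

# Online: the first token needs only one step, independent of the
# 1024-token prompt already absorbed into h.
h, first_out = step(A, B, C, h, rng.normal(size=d_in))
print(first_out.shape)  # (8,)
```

A transformer cannot compress its prompt this way: its KV cache must retain an entry per prompt token, and the first generated token still attends over all of them.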

🔄 Conclusion: Yes, Competitors Could Stick with Transformers—But Only Up to a Point

  • Transformers. Stability/support: ✅ well-supported. Performance ceiling: ❌ poor for constrained edge use. Future-proofing: ❌ risk of obsolescence.
  • CNNs / RNNs. Stability/support: ✅ efficient in vision. Performance ceiling: ❌ weak for modern NLP. Future-proofing: ❌ limited scalability.
  • Lightweight transformers. Stability/support: ✅ reasonable for now. Performance ceiling: ⚠️ moderate latency/power. Future-proofing: ⚠️ partial solution.
  • SSMs. Stability/support: ⚠️ emerging. Performance ceiling: ✅ breakthrough on edge. Future-proofing: ✅ high potential.

So while competitors can continue for now using existing methods, the risk is being outpaced in emerging applications—especially if BrainChip enables a smooth transition (e.g., LLaMA distillation + Akida deployment).





📊 Edge AI Evolution Roadmap: Transformers vs SSMs

  • Today. Transformer-based: dominant in NLP; compressed models in use; efficient on GPU/DSP; 🔁 reasonable edge deployment via pruning/quantization. SSM-based (BrainChip-style): early-stage adoption; neuromorphic niche (e.g., Akida); 🔬 proof-of-concept underway.
  • 1–2 years. Transformer-based: hitting compute/power limits in edge apps; real-time latency still challenging; ⚠️ fragmentation by use case. SSM-based: gaining traction for real-time/low-power use; tools emerging for migration from transformers (e.g., LLaMA distillation); 🚀 early adoption in wearables/voice/IoT.
  • 3–5 years. Transformer-based: plateau in edge innovation unless architectures evolve; 🏗️ constrained by hardware-centric acceleration. SSM-based: becomes dominant in ultra-low-power edge AI; broad ecosystem and tooling support; 🏆 SSMs emerge as the standard for edge LLMs.


📌 Summary:

  • Transformers will likely remain dominant in cloud and high-performance edge for the next 1–2 years, but start to plateau.
  • SSMs provide a scalable path forward for ultra-low-power, real-time, memory-efficient edge use cases, and could disrupt traditional AI stacks if adoption accelerates.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 32 users

FJ-215

Regular
  • Like
  • Fire
Reactions: 3 users

perceptron

Regular
Hi Manny,

Here's the response I got from ChatGPT-4.

Tony Lewis's upcoming presentation at the Embedded Vision Summit outlines a potentially transformative shift in how language models are deployed at the edge. Here's an analysis of the implications and disruptive potential of this new method: ...
Thanks for posting. There are many use cases for LLMs at the edge. Dr Lewis and his team have achieved something worth discussing further. Is today when Dr Lewis releases his roadmap? Thanks in advance.
 
  • Like
Reactions: 1 users

FJ-215

Regular
BrainChip Holdings Ltd


Dear Valued BrainChip Shareholder,
BrainChip Holdings Ltd is excited to announce that Chief Development Officer Dr. Jonathan Tapson will showcase the company's Technology Roadmap prior to its Annual General Meeting on 6 May. The presentation takes place in the Thomas Keneally Room at the Sydney Harbour Marriott Hotel at 10 a.m. AEST.
In the special 35-minute presentation, Dr. Tapson will discuss how the company's Akida™ technology is providing efficient AI performance at the Edge to better enable intelligent devices and applications. He will highlight expected advancements in the platform's brain-inspired capabilities, its highly configurable IP platform and emerging markets for the company's IP.
Shareholders wishing to attend the Technology Roadmap presentation in person, especially those who intend to attend both the presentation and the AGM, are advised to register at least 10 minutes prior to the commencement of the Technology Roadmap presentation with representatives of our share registry, Boardroom Limited, who will be positioned outside the Thomas Keneally Room.
BrainChip's Technology Roadmap presentation will also be offered live via webcast on the Lumi platform.
Those shareholders wishing to attend online can register at https://meetings.lumiconnect.com/300-896-227-913
Sincerely
Antonio J. Viana
Chairman
 
  • Fire
  • Like
Reactions: 5 users