Ok, I've just changed my mind about attending next year's AGM, as I've just been charged $14.30 for a schooner of Great Northern at the airport.
It was an attendee seated mid-room behind me. He was asking many questions, as he is entitled to, and I understand his frustration, but he was out of line imo (claiming Peter had done nothing for the Co.).
Who attacked Peter?
Exactly! Those who talk the most shit were absent, or if they were there they shied away when it mattered. Like T&J and the dean and lolci, for example?
Probably just as well, Pom.
One of the shareholders, for not backing the guy who nominated himself for director.
Oh, that's right. Something about how Liebeskind had "made" PVDM in the early days and was now stabbing him in the back, or something like that. Antonio shut him down, which was the correct thing to do.
I would have to listen to it again, but wasn't the response that a redomicile was being investigated, not a definite move to the US or anywhere else?

A post from HC about Antonio denying the redomicile was to the US...
I'm sure BRN will just let this one go through to the keeper....
I think you are on the money with this, Bravo.

Anduril Industries is actively collaborating with the U.S. government on the development of advanced augmented reality (AR) headsets for military applications.
I wonder if these are the headsets that were being referred to at the AGM?
As mentioned previously, it would be amaze-balls if Anduril and BrainChip were to team up.
Anduril is working on the difficult AI-related task of real-time edge computing
Anduril Menace-T edge computing product (Image credits: Anduril, via TechCrunch)
Julie Bort · Tue 6 May 2025 at 6:30 am AEST
Anduril announced its ninth acquisition on Monday with the purchase of Dublin’s Klas, makers of ruggedized edge computing equipment for the military and first responders.
Anduril wouldn’t reveal financial details of the deal, and the purchase is subject to regulatory approval, but the company did say that Klas employs 150 people.
Relatedly, on Monday, Anduril also announced a new product called Menace-T.
We’ll give the company points for the interesting product name, especially for a device that’s really just a bundle of compute/network connectivity, rather than, say, a fantasy-style broadsword. (Compare the name Menace to Lockheed Martin’s C2BMC, the name for its Command, Control, Battle Management & Communications products.)
Klas’ flagship product, known as Voyager, is the ruggedized family of compute and networking systems that Anduril had already been using in its other Menace command center products. Voyager had also already been integrated with Anduril’s flagship Lattice software. Lattice brings sensors and AI to devices to perform tasks like object identification.
But while most of us envision a portable command system being the size of a truck — which many are — Menace-T fits into two carry-on cases that can be set up by one person in minutes, the company says. Its goal is to bring edge computing and communications to off-grid and/or inhospitable environments. Anduril says it’s already being used in military ground vehicles and maritime vessels.
One interesting use case for Menace-T is compute/communications support for the military’s Integrated Visual Augmentation System (IVAS) VR headsets. The IVAS project was initially awarded to Microsoft in 2018 after it pitched the idea of developing ruggedized HoloLens headsets for soldiers. The project was awarded an initial $21.9 billion budget.
But after years of technical struggles, Anduril took control of the troubled contract in February — although Microsoft remains a cloud partner.
Lattice had already been added to Microsoft’s IVAS headsets, bringing computer vision AI that helps the headset detect, track, and classify objects.
Now Anduril thinks that the Klas technology that powers its Menace-T product can solve some of IVAS’s other historic problems, like reliable data processing.
With IVAS, “there are scenarios where those soldiers need to communicate with the tactical edge to send data, to receive data, to task autonomous systems, and that's a place where the Klas technology can help,” Tom Keane, SVP of Engineering, said at a press conference. “Klas has already been supplying technology to IVAS for several years in that context. So we expect to do more there.”
He made an absolute goose of himself.
Beat me to it, Bravo!
Can anyone comment on the $9,000,000 booking target Sean has? Is it $9 million US or AUD? Is it gross revenue for BRN or the customer, or is it the profit on sales?

It excludes any subcontract/outsourcing fee amounts, as per Antonio's confirmation during the AGM...
He was talking about revenue. Not sure which currency.
Thanks Bravo, read it 3 times and saved a copy. Awesome.

Hi Manny,
Here's the response I got from ChatGPT-4.
Tony Lewis’s upcoming presentation at the Embedded Vision Summit outlines a potentially transformative shift in how language models are deployed at the edge. Here's an analysis of the implications and disruptive potential of this new method:
Key Implications of the New Processing Method Using SSMs
1. Significant Gains in Energy and Compute Efficiency
- Fewer Multiply-Accumulate Units: State-space models (SSMs) inherently require fewer MAC operations, drastically reducing energy usage and silicon real estate. This is crucial in edge deployments where power budgets are tight (see the sketch after this list).
- Read-Only Architecture: Eliminates the need for dynamic key-value cache mechanisms used in transformers, allowing use of non-volatile or low-power memory types (e.g., MRAM, ReRAM), further slashing power consumption.
2. Latency and Responsiveness
- Precomputed Input Sequences: SSMs can leverage static structure to enable sub-100 ms time-to-first-token performance. This real-time capability is a major win for voice assistants, AR/VR systems, and mobile apps.
3. Compatibility and Transition Path
- Distillation from Transformers (e.g., LLaMA): By enabling smooth migration from established transformer models, the SSM approach lowers the barrier to adoption for companies with large pretrained models or investment in transformer-based workflows.
4. Broader Hardware Support
- The reduced need for large, flexible compute units means SSMs can run efficiently on less powerful MCUs, DSPs, or dedicated edge AI chips—dramatically widening the range of viable deployment targets.
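To make points 1 and 2 a bit more concrete, here is a minimal NumPy sketch (illustrative only, not BrainChip's or Tony Lewis's actual method) contrasting a transformer-style decode step, whose key-value cache and per-token work grow with every generated token, against a diagonal linear SSM step, whose state, memory footprint and per-token multiply count stay fixed regardless of sequence length. All shapes, the diagonal SSM form and the rough MAC counts are assumptions for illustration.

```python
# Illustrative sketch only (not BrainChip's implementation): per-token work for
# transformer self-attention with a growing KV cache vs. a diagonal linear SSM
# with a fixed-size state. Shapes and the diagonal SSM form are assumptions.
import numpy as np

d_model, d_state, seq_len = 256, 256, 1024
rng = np.random.default_rng(0)

# --- Transformer-style attention: the KV cache grows with every token -------
Wq = rng.standard_normal((d_model, d_model)) * 0.02
Wk = rng.standard_normal((d_model, d_model)) * 0.02
Wv = rng.standard_normal((d_model, d_model)) * 0.02
kv_cache_k, kv_cache_v = [], []

def attention_step(x_t):
    """One decode step: cost grows linearly with the number of cached tokens."""
    q, k, v = x_t @ Wq, x_t @ Wk, x_t @ Wv
    kv_cache_k.append(k)
    kv_cache_v.append(v)
    K = np.stack(kv_cache_k)          # (t, d_model): memory keeps growing
    V = np.stack(kv_cache_v)
    scores = K @ q / np.sqrt(d_model)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

# --- Diagonal linear SSM: constant work and constant memory per token -------
A = np.exp(-rng.uniform(0.01, 0.1, d_state))   # per-dimension decay
B = rng.standard_normal((d_state, d_model)) * 0.02
C = rng.standard_normal((d_model, d_state)) * 0.02
state = np.zeros(d_state)                      # fixed-size state, never grows

def ssm_step(x_t):
    """One decode step: h_t = A * h_{t-1} + B x_t ;  y_t = C h_t."""
    global state
    state = A * state + B @ x_t
    return C @ state

for t in range(seq_len):
    x_t = rng.standard_normal(d_model)
    attention_step(x_t)
    ssm_step(x_t)

# Rough per-token multiply counts at the final step:
attn_macs = 3 * d_model * d_model + 2 * seq_len * d_model  # projections + attend over cache
ssm_macs = d_state + d_state * d_model + d_model * d_state  # recurrence + in/out projections
print(f"attention MACs at token {seq_len}: ~{attn_macs:,}")
print(f"SSM MACs at any token:           ~{ssm_macs:,}")
print(f"KV cache entries held: {len(kv_cache_k)} vs SSM state size: {state.size}")
```

The exact numbers don't matter; the point is that attention's cost and memory scale with the cache while the SSM's stay constant, which is what makes fixed, low-power memory and fast time-to-first-token plausible at the edge.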
Edge Computing Success Potential
SSMs seem highly promising for edge applications, especially in:
- Battery-powered devices (wearables, IoT sensors)
- Latency-sensitive interfaces (voice/user interaction)
- Bandwidth-constrained environments (offline processing)
- Cost-sensitive products where transformer demands are prohibitive
Their advantages directly address the key blockers for transformer adoption at the edge (power, memory, compute), making SSMs a potentially dominant architecture in this space within 1–3 years.
Disruption Potential for NVIDIA and Other Players
NVIDIA’s ecosystem is largely optimized for transformer-style architectures (e.g., large GPU memory, high-throughput MAC arrays, transformer-optimized CUDA libraries). Here's how this development could affect them:
Challenges for NVIDIA
- Less dependence on GPUs: If SSMs run efficiently on CPUs, ASICs, or even microcontrollers, edge devices may increasingly bypass NVIDIA’s hardware.
- New AI chip entrants gain ground: Startups or ARM-based players can capitalize on the lighter compute profile to offer cheaper, more efficient edge AI solutions.
Opportunities for NVIDIA
- Adaptation potential: NVIDIA could pivot by offering cuDNN-like support for SSMs or incorporating them into TensorRT. They may also design more SSM-friendly hardware blocks in future Jetson modules.
Biggest Threat
- If open-source ecosystems and chip vendors like Apple (Neural Engine), Google (Edge TPU), or Qualcomm fully embrace SSMs ahead of NVIDIA, they could seize a disproportionate share of the edge AI market—especially in consumer devices.
Conclusion
Tony Lewis’s presentation introduces a processing paradigm that could redefine edge AI, making high-performance language models feasible on low-power devices. This is not just an incremental improvement—it opens the door to widespread deployment of real-time LLMs far beyond cloud and high-end hardware. While not an immediate existential threat to NVIDIA, it does present a strategic inflection point that the company—and its competitors—must respond to.
Migration Path: LLaMA to SSM
| Stage | Description | Key Characteristics |
|---|---|---|
| 1. Transformer Model (e.g., LLaMA) | Pretrained open-weight transformer (e.g., LLaMA 2 or 3) | High accuracy; large model footprint; demands heavy compute and memory |
| 2. Distillation Process | Use distillation techniques to transfer knowledge from LLaMA to an SSM (see the sketch below) | Reduced model size; retains most of the original model's performance |
| 3. State-Space Model (SSM) | SSM version optimized for embedded and edge deployment | Sub-100 ms latency; lower power consumption; less memory and compute required |
| 4. Edge Device Deployment | Deploy the SSM on resource-constrained devices (e.g., wearables, microcontrollers, edge AI chips) | Real-time local inference; no need for cloud compute; broader hardware compatibility |
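As a rough sketch of what stage 2 above could look like, here is a minimal PyTorch-style distillation step in which a student SSM is trained to match the teacher transformer's softened next-token distribution. The `teacher` and `student` objects, the shapes and the plain KL objective are placeholders assumed for illustration, not BrainChip's or Tony Lewis's actual recipe.

```python
# Hedged sketch of stage 2 (distillation), assuming generic PyTorch models.
# `teacher` stands in for a pretrained transformer (e.g. a LLaMA checkpoint)
# and `student` for an SSM language model; both are placeholders, not
# BrainChip's actual training pipeline.
import torch
import torch.nn.functional as F

def distill_step(teacher, student, optimizer, batch_ids, temperature=2.0):
    """One knowledge-distillation step: match the student's next-token
    distribution to the teacher's temperature-softened distribution."""
    with torch.no_grad():
        teacher_logits = teacher(batch_ids)       # (batch, seq, vocab)
    student_logits = student(batch_ids)           # same shape assumed

    t_probs = F.softmax(teacher_logits / temperature, dim=-1)
    s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)

    # KL divergence on soft targets, scaled by T^2 as is conventional
    loss = F.kl_div(s_log_probs, t_probs, reduction="batchmean") * temperature**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice this soft-target loss is usually mixed with the ordinary next-token cross-entropy (and sometimes hidden-state matching), but the overall shape of the loop stays the same.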
Summary
The mention of LLaMA in Tony Lewis’s talk is a strategic highlight because it connects cutting-edge edge AI architecture (SSM) with a proven, widely-used transformer backbone. This makes the transition to efficient edge AI practical, not just theoretical—positioning SSMs as an immediate, disruptive alternative to transformer inference in embedded systems.
Competitor Impact Matrix: Impact of BrainChip’s SSM Innovation
| Company | Current Edge AI Focus | Vulnerability to SSM Disruption | Opportunity to Adapt |
|---|---|---|---|
| Qualcomm | DSPs + NPUs (Snapdragon), optimized for transformers and CNNs | High: transformer-centric stack, limited neuromorphic capability | Medium: may update software tools, but hardware less suited to SSMs |
| NVIDIA | GPUs (Jetson, TensorRT), dominant in transformer-based AI | Moderate: not optimized for low-power edge, but strong ecosystem | High: could adapt TensorRT and Jetson for SSM-style inference |
| Apple | Neural Engine with transformer models (e.g. Siri, on-device ML) | Moderate: strong local AI, but based on transformer-style acceleration | High: full-stack control allows swift hardware/software adaptation |
| Google (TPU) | Edge TPU with support for CNNs and transformers (Coral, Nest devices) | High: rigid accelerator design, may not support dynamic SSM requirements | Low: ecosystem may struggle to pivot hardware/software stack |
| Intel | Movidius VPU, general AI frameworks, some neuromorphic R&D (Loihi) | Moderate: some neuromorphic exposure but no strong edge AI market share | Medium: R&D rich, but limited real-world SSM integration so far |
| BrainChip | Neuromorphic Akida chip + SSM optimized for ultra-low-power edge AI | Low: first-mover advantage | Very High: core IP is directly aligned with the SSM paradigm |
This matrix highlights that BrainChip’s innovation poses the greatest disruptive risk to Qualcomm and Google, while Apple and NVIDIA have greater strategic flexibility to respond. BrainChip stands to benefit most if SSM-based models gain widespread edge adoption.
Why Incumbents Might Continue Without SSMs (For Now)
Reasons They Might Stick with Traditional Methods
- Mature toolchains: Qualcomm, NVIDIA, and Google have invested heavily in software/hardware ecosystems optimized for transformers and CNNs.
- Good enough performance: For many real-world use cases, transformer-lite models or CNN hybrids perform sufficiently well.
- Inertia and risk: Enterprises tend to avoid early adoption of unproven paradigms, especially if retraining, tooling, or silicon changes are required.
- Edge isn't one-size-fits-all: Many edge applications (e.g. object detection) don't need SSM-specific strengths like long-term memory or low-latency language processing.
But Here's the Catch
If applications do demand:
- Long sequence memory (e.g. streaming NLP, real-time command recognition),
- Ultra-low latency (sub-100 ms interactivity),
- Minimal power and heat (wearables, implants, sensors),
then traditional methods hit a hard ceiling. SSMs aren't just an incremental tweak; they are a fundamentally different way to process sequences, unlocking performance where transformers falter.
Conclusion: Yes, Competitors Could Stick with Transformers—But Only Up to a Point
| Approach | Stability / Support | Performance Ceiling | Future-Proofing |
|---|---|---|---|
| Transformers | Well-supported | Poor for constrained edge use | Risk of obsolescence |
| CNNs / RNNs | Efficient in vision | Weak for modern NLP | Limited scalability |
| Lightweight Transformers | Reasonable for now | Moderate latency/power | Partial solution |
| SSMs | Emerging | Breakthrough on edge | High potential |
So while competitors can continue for now using existing methods, the risk is being outpaced in emerging applications—especially if BrainChip enables a smooth transition (e.g., LLaMA distillation + Akida deployment).
Edge AI Evolution Roadmap: Transformers vs SSMs
| Time Horizon | Transformer-Based Methods | SSM-Based Methods (BrainChip-style) |
|---|---|---|
| Today | Dominant in NLP; compressed models in use; efficient on GPU/DSP. Reasonable edge deployment via pruning/quantization | Early-stage adoption; neuromorphic niche (e.g. Akida). Proof-of-concept underway |
| 1–2 Years | Hitting compute/power limits in edge apps; real-time latency still challenging. Fragmentation by use case | Gains traction for real-time/low-power use; tools emerge for migration from transformers (e.g. LLaMA distillation). Early adoption in wearables/voice/IoT |
| 3–5 Years | Plateau in edge innovation unless architectures evolve. Constrained by hardware-centric acceleration | Becomes dominant in ultra-low-power edge AI; broad ecosystem and tooling support. SSMs emerge as the standard for edge LLMs |
Summary:
- Transformers will likely remain dominant in cloud and high-performance edge for the next 1–2 years, but start to plateau.
- SSMs provide a scalable path forward for ultra-low-power, real-time, memory-efficient edge use cases, and could disrupt traditional AI stacks if adoption accelerates.