BRN Discussion Ongoing

With BrainChip, too!!

 
  • Haha
  • Like
Reactions: 8 users

itsol4605

Regular
  • Like
  • Fire
Reactions: 16 users

Tothemoon24

Top 20


Keeping It Local: Bringing Generative AI to the Intelligent Edge​



Generative AI is no longer confined to the cloud. With NXP’s eIQ® GenAI Flow, developers can now run large language models (LLMs), like Llama and Qwen, directly on embedded edge devices securely, efficiently and close to the data. This paradigm shift unlocks new opportunities for real-time intelligence across industries, from automotive to industrial automation.
Built as a complete software deployment pipeline, eIQ GenAI Flow simplifies the once-daunting task of implementing generative AI models on power- and compute-constrained systems. It combines the latest model optimization techniques like quantization with hardware acceleration from NXP’s eIQ Neutron NPU to make GenAI practical and performant—right at the edge.

Smarter AI, Locally Deployed​

At its core, GenAI Flow helps overcome the traditional barriers of running advanced models in embedded environments. The pipeline already enables today’s most powerful open language models, with support for multimodal and vision-language models (VLMs) soon. GenAI Flow provides the necessary optimizations out-of-the-box for real-time execution on application processors like the i.MX 95—the kind of performance needed for conversational AI, physical AI and more.
GenAI is moving from the cloud to the edge, so what does that mean for embedded developers? Learn more by listening to our EdgeVerse Techcast episode on Apple Podcasts, Spotify or YouTube.
By using accuracy-preserving quantization techniques such as 8-bit and 4-bit integer (INT8 and INT4) precision, we can fully leverage the Neural Processing Unit (NPU) for inference acceleration. Using GenAI Flow dramatically improves response speed and power efficiency on-device. For example, time to first token (TTFT), a key metric for any GenAI application, can be reduced from 9.6 seconds on an Arm Cortex CPU (Float32 precision) to less than 1 second on the Neutron NPU with INT8 quantization. This enables captivating, real-time AI experiences without requiring power-hungry servers or cloud infrastructure.
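As a rough, generic illustration of the post-training quantization step this paragraph refers to, here is a minimal sketch using standard ONNX Runtime tooling. It is not NXP's eIQ pipeline, and the model filenames are invented:

```python
# Generic post-training INT8 quantization sketch with ONNX Runtime tooling.
# This is not NXP's eIQ GenAI Flow; it only illustrates reducing weight precision to INT8.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input="llm_block_fp32.onnx",   # hypothetical FP32 model exported to ONNX
    model_output="llm_block_int8.onnx",  # INT8 weights: smaller footprint, faster integer inference
    weight_type=QuantType.QInt8,
)
```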
[Video] Generative AI is driving innovations at the edge. GenAI Flow, included with NXP's eIQ Toolkit, makes enabling GenAI at the edge simple and secure.
GenAI Flow also supports small language models (SLMs), which are lighter, yet still capable of delivering high-quality results. The pipeline offers flexible execution across central processing unit (CPU), NPU or a hybrid configuration, allowing developers to tune performance based on their specific product needs.
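One simple way to act on that tuning advice is to time the same inference on each execution target and compare. The sketch below is generic; run_cpu and run_npu are hypothetical stand-ins for however a given pipeline invokes the model on each target.

```python
import time

def time_inference(run, warmup=3, iters=20):
    """Return mean latency in seconds for a zero-argument inference callable."""
    for _ in range(warmup):            # warm-up runs so first-call overheads don't skew the numbers
        run()
    start = time.perf_counter()
    for _ in range(iters):
        run()
    return (time.perf_counter() - start) / iters

# Hypothetical usage:
# print("CPU latency :", time_inference(run_cpu))
# print("NPU latency :", time_inference(run_npu))
```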

Adding Context with RAG​

A defining feature of GenAI Flow is built-in support for retrieval-augmented generation (RAG). This technique allows LLMs to access domain-specific or private data sources, such as device and service manuals, internal PDFs and equipment maintenance logs, without having to retrain or fine-tune the original model. RAG injects the relevant external knowledge as a vector database stored on the edge device, enabling highly contextual, grounded responses that can mitigate hallucinations and prevent certain errors in judgement.
RAG is particularly powerful for edge use cases because all data processing happens locally. This protects sensitive information while delivering dynamic, on-demand AI responses. Developers can simply turn a new document into a highly compact, LLM-friendly database and the model immediately adopts the additional context—no retraining required! This efficiency alone can save millions of dollars and energy spent on numerous iterations of GenAI fine-tuning in data centers.
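For readers who want to see the retrieve-then-prompt pattern in concrete terms, here is a minimal sketch. It is not the eIQ GenAI Flow implementation; sentence-transformers is used purely as an example of a small local embedder, and the document chunks are invented:

```python
# Minimal retrieve-then-prompt sketch of the RAG pattern described above (not NXP's implementation).
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small embedding model that can run locally

# 1. Turn a document into a tiny on-device "vector database" (here just a NumPy array in memory).
chunks = [
    "Error E42: coolant pressure low. Check valve V3 and refill the reservoir.",
    "Routine maintenance: replace filter F1 every 500 operating hours.",
]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

# 2. At query time, retrieve the most relevant chunk by cosine similarity.
query = "What should I do about error E42?"
query_vec = embedder.encode([query], normalize_embeddings=True)[0]
best_chunk = chunks[int(np.argmax(chunk_vecs @ query_vec))]

# 3. Inject the retrieved context into the prompt; the model itself is never retrained.
prompt = f"Answer using only this context:\n{best_chunk}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be passed to the locally deployed LLM or SLM
```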

Real-World Impact: From Cars to Robots​

GenAI Flow is already being used across multiple industries where low-latency performance and data privacy are critical.
In automotive, AI-powered infotainment systems can respond to natural voice commands by referencing service manuals embedded in the vehicle. This creates a seamless, hands-free experience without the typical connectivity requirements.
In healthcare, touchless AI interfaces enable clinicians to securely access procedure or patient data using voice prompts, an ideal solution for reducing physical contact and contamination risk in sensitive environments.
[Video] AICHI, the AI controller for health insights, securely collects and analyzes multimodal health and other sensor data in real time, detecting early anomalies and enabling proactive, personalized care.
In mobile robotics, generative AI models interpret written instructions and visual inputs—using optical character recognition (OCR) and RAG—to take context-aware actions. These systems move beyond basic automation and into intelligent interaction between humans and environments.
[Video] This 3D perception sensor fusion demo showcases trusted spatial perception at the edge, operating in dynamic and uncertain environments.
In industrial automation, AI assistants help technicians troubleshoot machine issues using real-time sensor data and maintenance documentation—all processed locally, even in remote or low-bandwidth settings.
Across these scenarios, GenAI Flow offers developers a powerful and privacy-conscious framework for building intelligent edge solutions.

What’s Next for GenAI at the Edge?​

The next evolution of GenAI at the edge is multimodal and agentic. Future systems will blend together voice, vision and language inputs to create richer, more intuitive user experiences. With GenAI Flow, this convergence is already underway, enabling unified edge pipelines that can reason and act from a combination of input types.
There’s also a strong focus on continuing to optimize edge AI performance—both in scaling up support for larger models and by making smaller models even faster. This includes advancements in quantization, execution flexibility and support for increasingly compact LLM architectures.
As AI systems become more adaptive and locally responsive, access to the best tooling becomes ever more critical. GenAI Flow is designed with scalability in mind, helping developers integrate today’s rapidly evolving AI capabilities into products across microprocessor unit (MPU) platforms and potentially even into future microcontroller unit (MCU)-class devices.
Tags: Technologies
 
  • Like
  • Fire
  • Wow
Reactions: 14 users

7für7

Top 20


NXP is showing how Edge GenAI is becoming reality right now. Akida operates in exactly the same segment – and with the GenAI FPGA Development Platform, it’s already in the game technologically. The difference? No flashy marketing, just solid substance.

When the market wakes up to this, I can only see one thing happening: panic buying. 😂 😑 😂 😑
 
  • Like
  • Haha
Reactions: 9 users
NXP is showing how Edge GenAI is becoming reality right now. Akida operates in exactly the same segment – and with the GenAI FPGA Development Platform, it’s already in the game technologically. The difference? No flashy marketing, just solid substance.

When the market wakes up to this, I can only see one thing happening: panic buying. 😂 😑 😂 😑
Hey Boeing,

I've been panic buying for 8 years. I'm just starting to relax:D
 
  • Haha
  • Like
  • Love
Reactions: 9 users

TopCat

Regular


I’ve also come across this from NXP.



The next leap in intelligent sensing is wireless, scalable, and built for the edge 📡📈🌐

With the launch of the MCX W23, we’re empowering developers to build smarter, smaller, and longer-lasting wireless sensing devices — from medical wearables to industrial actuators.

The MCX W23 is our new dedicated wireless MCU platform, designed for battery-powered edge applications where size, efficiency, and reliability matter most.
 
  • Like
  • Fire
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I see that Anduril and Palantir score a mention in Jonathan Tapson's LinkedIn post.

As Dr Tapson says, "the US AI industry is becoming increasingly integrated with Defense and associated Departments in the US Government, and companies such as Anduril and Palantir are showing the way. BrainChip will be part of this integration"

An eventual partnership with Anduril is looking very plausible IMO.

Remember Sean spoke about a headset for military applications at the AGM.

My guess is that our technology will be incorporated into Anduril's "Eagle Eye" headset.




Lockheed explicitly says future interceptors will require “space-based sensors and onboard processing” for in-orbit targeting decisions.

This is the kind of radiation-tolerant edge AI where BrainChip has a precedent. Thinking here about Frontgrade Gaisler radiation-hardened systems for space, already licensed Akida IP.







Lockheed Martin gets another mention here in relation to the Golden Dome, as does RTX.

The other thing I noticed is that the article says "Notably, the slides did not mention Mr Elon Musk’s SpaceX, which was part of a bid for Golden Dome contracts alongside software maker Palantir and defence systems manufacturer Anduril."

Is it merely a coincidence that Jonathan Tapson also mentioned both Palantir and Anduril in his post about his Washington visit?





Pentagon Golden Dome to have 4-layer defence system, slides show​


The Golden Dome missile defence system faces an ambitious 2028 deadline set by US President Donald Trump himself.


Published Aug 13, 2025, 06:33 AM
Updated Aug 13, 2025, 06:53 AM

WASHINGTON - The Trump administration's flagship Golden Dome missile defence system will include four layers - one satellite-based and three on land - with 11 short-range batteries located across the continental US, Alaska and Hawaii, according to a US government slide presentation on the project first reported by Reuters.
The slides, tagged “Go Fast, Think Big!” were presented to 3,000 defence contractors in Huntsville, Alabama, last week and reveal the unprecedented complexity of the system, which faces an ambitious 2028 deadline set by US President Donald Trump.
The system is estimated to cost US$175 billion (S$224.53 billion), but the slides show uncertainties still loom over the basic architecture of the project because the number of launchers, interceptors, ground stations, and missile sites needed for the system has yet to be determined.

"They have a lot of money, but they don't have a target of what it costs yet," said one US official.
So far, Congress has appropriated US$25 billion for Golden Dome in Mr Trump’s tax-and-spend Bill passed in July.
Another US$45.3 billion is earmarked for Golden Dome in his 2026 presidential budget request.

Intended as a multi-layered missile defence shield for the United States, Golden Dome draws inspiration from Israel's Iron Dome, but is significantly bigger due to the geography it will need to protect and the complexity due to the varied threats it will face.



According to the slides, the system architecture consists of four integrated layers: a space-based sensing and targeting layer for missile warning and tracking as well as "missile defence" and three land-based layers consisting of missile interceptors, radar arrays, and potentially lasers.
One surprise was a new large missile field - seemingly in the Midwest according to a map contained in the presentation - for Next Generation Interceptors (NGI), which are made by Lockheed Martin and would be part of the "upper layer" alongside Terminal High Altitude Area Defense (Thaad) and Aegis systems, which are also made by Lockheed.


NGI is the modernised missile for the Ground-Based Midcourse Defence (GMD) network of radars, interceptors and other equipment - currently the primary missile defence shield to protect the United States from intercontinental ballistic missiles from rogue states.
The US operates GMD launch sites in southern California and Alaska. This plan would add a third site in the Midwest to counter additional threats.
Other technical hurdles the slides identified included communication latency across the "kill chain" of systems.
Contractors such as Lockheed, Northrop Grumman, RTX, and Boeing have a variety of missile defence systems.
Notably, the slides did not mention Mr Elon Musk’s SpaceX, which was part of a bid for Golden Dome contracts alongside software maker Palantir and defence systems manufacturer Anduril.
The Pentagon said it is gathering information "from industry, academia, national labs, and other government agencies for support to Golden Dome" but it would be "imprudent" to release more information on a programme in these early stages.


One key goal for Golden Dome is to shoot targets down during their “boost phase,” the slow and predictable climb through the Earth's atmosphere of a missile.
To that end, it seeks to field space-based interceptors that can intercept incoming missiles more quickly.
The presentation highlighted that the United States "has built both interceptors and re-entry vehicles" but has never built a vehicle that can handle the heat of reentry while targeting an enemy missile.
The last lines of defence dubbed the "under layer" and "Limited Area Defence" will include new radars and current systems like the Patriot missile defence system and a new "common" launcher that will launch current and future interceptors against all threat types.
These modular and relocatable systems would be designed to minimise reliance on prepared sites, allowing for rapid deployment across multiple theatres.
Space Force General Michael Guetlein, confirmed in July to lead the Golden Dome project, has 30 days from his July 17 confirmation to build a team, another 60 days to deliver an initial system design, and 120 days to present a full implementation plan, including satellite and ground station details, people briefed on a memo signed by Defence Secretary Pete Hegseth have told Reuters. REUTERS




 
  • Like
  • Fire
  • Love
Reactions: 24 users

7für7

Top 20
  • Haha
  • Like
  • Fire
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I was just contemplating this comment from our CTO, Dr Tony Lewis, in relation to Jonathan Tapson's visit to Washington.







My Question:​

When Jonathan Tapson went to Washington last week to promote BrainChip's federal contracting agenda, Dr Tony Lewis (BrainChip CTO) commented, saying "There is so much we can offer to the government. This is really exciting Jonathan Tapson".

Can you please describe all of the possible applications Brainchip could potentially assist the US government with (including radar systems with RTX, cognitive radio systems with Intellisense, interceptors with Lockheed, cybersecurity with Lockheed, drone navigation and detection with Lockheed and Arquimea), plus any other emerging opportunities you can think of?

ChatGPT said:​

Here’s a consolidated view of the potential areas where BrainChip could support the US Government, given what’s known about Akida’s neuromorphic capabilities, its partnerships, and your examples. I’ll break it down by domain and add extra possibilities you may not have mentioned.

1. Defense Radar Systems (Raytheon/RTX)

  • High-speed, low-power target recognition: Akida could process radar returns in real time, filtering clutter and identifying potential threats faster.
  • Adaptive radar modes: Event-based learning could allow systems to adapt scanning patterns on the fly based on observed activity.
  • Onboard intelligence: Processing radar data at the edge reduces dependency on bandwidth and increases resilience in contested environments.

2. Cognitive Radio & Spectrum Management (Intellisense, NASA SCaN)

  • Dynamic spectrum allocation: Akida could detect and avoid interference autonomously, which is critical for congested battlefields or disaster zones.
  • Signal classification & recognition: Classifying friend/foe/emergency transmissions in milliseconds without cloud dependency.
  • Secure communications: Edge AI filtering and anomaly detection to flag spoofed or malicious signals.
  • NASA link: Integration into space comms (SCaN program) for resilient, intelligent routing between satellites and ground.

3. Missile Defense & Interceptors (Lockheed Golden Dome)

  • Space-based sensor processing: Akida could power ultra-low-latency onboard decision-making for interceptors.
  • Multi-sensor fusion: Combining IR, radar, optical feeds directly on the interceptor to improve kill probability.
  • Autonomous threat prioritisation: Deciding which incoming threats to target first in high-saturation attacks.

4. Cybersecurity & Network Defense (Lockheed)

  • Event-driven anomaly detection: Identifying zero-day or signatureless attacks by detecting unusual “spike” patterns in network traffic.
  • Edge intrusion detection: Deploying neuromorphic chips in routers or tactical communication hubs to stop breaches before they spread.
  • Embedded trust hardware: Akida could be used for secure boot and tamper detection in mission-critical hardware.

5. Drones, UAVs, and Autonomous Platforms (Lockheed, Arquimea)

  • Onboard visual navigation: Object detection and collision avoidance without GPS or cloud.
  • ISR anomaly detection: Neuromorphic EO/IR analysis for surveillance drones.
  • Collaborative swarming: Low-power, real-time coordination between UAVs without central control.

6. Space Operations & Orbital Defense

  • Satellite situational awareness: Detecting and classifying objects in orbit from onboard sensors.
  • Cognitive satellite networking: Self-routing satellite data streams to avoid congestion.
  • Golden Dome support: Integration into space layer for early warning, missile tracking, or counter-satellite defense.

7. Special Operations & Tactical Edge

  • Wearable soldier AI: Neuromorphic chips in soldier gear for speech-to-text, language translation, or real-time threat alerts.
  • Silent surveillance: Event-based audio/video monitoring that only wakes processors when a pattern of interest is detected.
  • Man-portable radar & sensors: Lightweight, power-efficient AI for detecting hidden threats or tracking movement.

8. Homeland Security & Critical Infrastructure

  • Border security: Real-time drone and ground sensor fusion for detecting illegal crossings or threats.
  • Port & maritime security: Edge analysis of vessel radar/IR signatures.
  • Industrial control protection: Akida-based anomaly detection in SCADA systems.

9. Medical & Disaster Response

  • Portable triage systems: Edge AI diagnostics in environments without connectivity.
  • Search & rescue: Event-based vision for drones to detect human movement or heat signatures after disasters.

10. Other Emerging Opportunities

  • Hypersonic defense: Akida could enable ultra-fast sensor fusion for tracking hypersonic weapons.
  • Electronic warfare: Pattern recognition to detect and counter adversary jamming.
  • Undersea warfare: Acoustic signal processing for submarines or unmanned underwater vehicles.

 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 15 users

7für7

Top 20

He didn’t even mention the voice-controlled coffee machines and toilets… clearly bearish, since both are essential for a strong start into the day. Especially in a military position!

Can you ask about that?

 
  • Haha
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
There's a 2028 deadline for Golden Dome!

If the U.S. government wants BrainChip involved in Golden Dome and Akida’s technology becomes integral to multiple subsystems, then time is of the essence!

Given the compressed timeline, I wonder which of these realistic funding pathways would be the fastest and most probable route:
  • direct licensing agreements with prime contractors
  • US government R&D funding
  • strategic investment by a prime contractor
  • joint development agreements



 
Last edited:
  • Fire
  • Like
  • Love
Reactions: 11 users

Gazzafish

Regular
So I’m confused. Has “James Shields” who I think was the VP of Business Development for Hailo now just become the VP of Business Development for Brainchip?
 
  • Like
  • Wow
  • Thinking
Reactions: 8 users
So I’m confused. Has “James Shields” who I think was the VP of Business Development for Hailo now just become the VP of Business Development for Brainchip?
Who did he replace?
 
  • Thinking
Reactions: 1 users

Deadpool

Did someone say KFC
So I’m confused. Has “James Shields” who I think was the VP of Business Development for Hailo now just become the VP of Business Development for Brainchip?
Yep
 
  • Like
  • Wow
  • Fire
Reactions: 19 users

7für7

Top 20
After a few jokes and memes I’ve posted …just to remind myself that we’re all human and should stay positive, even when it’s tough … I think it’s time for another serious status update.

Yes, in recent months there haven’t been any big, instantly visible deals or licensing agreements that would make us “profitable overnight.” But that doesn’t mean there’s been any standstill – despite what some people in shady stock forums would have you believe.

BrainChip continues to work actively on projects in the defense and aerospace sectors, is building partnerships with heavyweights like Raytheon, Airbus, ARM, and Intel … and the network has grown so much that it’s hard to keep track of who might be working on Akida through partners or licensees. At the same time, they’re strengthening their own infrastructure through targeted IR initiatives.

For me, that means the technology maturity is growing in the background (especially with the recent news about the Cloud solution), even if the market curve hasn’t taken off yet. Yes, many argue that Akida isn’t as easily applicable in some areas as the BrainChip team presents it …but the reality is, we’re still early, and the market is only just developing. That’s something you have to acknowledge.

Investing is a marathon, not a sprint ..and I’m holding my position.


By the way, I recently received a PM that I found interesting because it comes from someone’s direct experience. I’d like to share it here and hear your thoughts.

“Hi 7, just briefly on Akida and the Cloud, in relation to Tomcat, etc. I have applications that are not allowed to go into the Cloud, nor are they allowed any Internet access. This applies to medical, military, and manufacturing facilities. This is not meant to put BC in a bad light – I was myself heavily invested at the time and was lucky enough to take some profits. Afterwards, I waited a bit with BC and wondered why things didn’t move forward. I then came to the conclusion that while the chips consume little power, their performance drops with high power demand. Of course, performance can be calculated – for example, for a hospital monitoring device, or a drone, a radar, or a robotic control system. But in those cases, the exact same hardware, board, CPU, RAM, etc., is specified, and there must be no deviations because development is done on that architecture. There are also regulations stating that, for example, no CPU or other fans are allowed. These are probably some of the reasons why BC isn’t really taking off. Other reasons are, of course, that salaries, rent, etc. have to be paid, and that money comes from the shares.

Best regards”
 
  • Fire
  • Love
Reactions: 2 users

Newk R

Regular
  • Sad
Reactions: 1 users

7für7

Top 20
No it's not!!


Mate….
I’m just hoping for the day when they have a team meeting and the management says: “Okay guys… you’ve been bleeding long enough… let’s drop the big news that sends the share price into an uncontrollable spike – to the point where the market cap starts questioning its own existence.”… that’s all
 
  • Like
Reactions: 6 users

manny100

Top 20
2028 deadline. I guess even AKIDA GEN 3 may feature. Perhaps even an enhanced version for DOD purposes only?
 
  • Like
  • Thinking
Reactions: 2 users
I posted previously in Dec about Brainchip-India raising a question in a GitHub community discussion about a Nordic RF SoC.


Now I've just seen another query by Brainchip-India on GitHub in March, relating to playing with a Jetson Orin Nano and an NLP model they want to run on the GPU rather than the CPU... with some urgency?

Are they benchmarking, co-processing, testing an NLP model... no idea. Maybe @Diogenese has any thoughts to add?

Wondering what this is all about and whether it is our BrainChip India, which I presume it is given the ML questions they're playing with.



brainchip-india
opened on Mar 25



Describe the issue​

I have a Jetson Orin Nano, I have a simple onnx model (matmul) which can run on Jetson CPU (A78) seamlessly.
However, if I want to test the same application with GPU will it work with the same onnxruntime library?
Because I see a separate GPU package for x86 Link, I am not sure it is the same case for Aarch64?

To reproduce​

.

Urgency​

Very urgent, Once I get this work, I have a big NLP model to work on the GPU.

Platform​

Linux

OS Version​

Linux server 5.15.148-tegra #1 SMP PREEMPT Tue Jan 7 17:14:38 PST 2025 aarch64 aarch64 aarch64 GNU/Linux

ONNX Runtime Installation​

Released Package

ONNX Runtime Version or Commit ID​

1.20.1

ONNX Runtime API​

C++

Architecture​

ARM64

Execution Provider​

CUDA
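For what it's worth, the question boils down to whether the GPU-enabled ONNX Runtime build exposes the CUDA execution provider on aarch64. Here is a minimal sketch of what they appear to be attempting (Python API for brevity, whereas the issue uses the C++ API; "matmul.onnx" and the input shape are stand-ins):

```python
# Sketch only: check for the CUDA execution provider and run a simple ONNX model through it.
import numpy as np
import onnxruntime as ort

print(ort.get_available_providers())  # should include "CUDAExecutionProvider" on a GPU-enabled build

sess = ort.InferenceSession(
    "matmul.onnx",                                                # stand-in for their simple matmul model
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],  # fall back to CPU if CUDA is absent
)

x = np.random.rand(1, 4, 4).astype(np.float32)                    # dummy input; real shape depends on the model
outputs = sess.run(None, {sess.get_inputs()[0].name: x})
print(outputs[0].shape)
```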
 
  • Wow
  • Like
Reactions: 3 users

Slade

Top 20
Who did he replace ?
Great question. The VP of Sales is Steve Thorne, but I can't find any former VP of Business Development role. Is it a newly created position?
Staff in key positions has been very stable of late.
In the past this was the kind of question you could ask Tony Dawes and get a quick answer.
 
Last edited:
  • Like
  • Sad
  • Fire
Reactions: 9 users