Thank God someone posted something. I posted something and silence for 2 hours. I thought I had broken the friggin thread. Where is everyone? Bravo has an excuse. Hopefully she's out running. God knows the share price needs a kick along.
I just posted something to check if the forum just crashed…
SC
BrainChip actually has to make some noise and give shareholders something to talk about..
SC
Hi Bravo,
The good news is: This article signals that neuromorphic computing is shifting from novelty to reality, with real-world products in development and adoption on the rise.
The bad news is: Sadly Tom’s Guide only highlights Innatera and doesn't mention BrainChip at all.
From a tech perspective, I believe BrainChip still has unique IP and an arguably more mature solution. But in a fast-emerging category like neuromorphic computing, public perception moves quickly, and if BrainChip isn’t actively owning the media conversation, Innatera (or another competitor) could grab the crown before the market fully matures.
Giddy up BrainChip!!!!
View attachment 89566
"We're building chips that think like the brain" — I got a front row seat to see how neuromorphic computing will transform your next smart device
“Think of it as AI that sleeps until it needs to wake up” (www.tomsguide.com)
With BrainChip, too!!
NVIDIA CEO: AI will create more millionaires than the internet did | Evolving AI posted on the topic | LinkedIn
NVIDIA CEO Jensen Huang says AI will create more millionaires in the next five years than the internet did in twenty. He calls AI “the greatest equalizer of our time” because it removes technical barriers and gives anyone with ideas the tools to build. Huang pointed out that in the internet... (www.linkedin.com)
Keeping It Local: Bringing Generative AI to the Intelligent Edge
Developers are moving generative AI from the cloud to the edge with NXP’s eIQ® GenAI Flow. This software pipeline allows developers to run large language models on embedded devices, enhancing real-time intelligence across industries. It combines model optimization techniques and hardware... (www.nxp.com)
Keeping It Local: Bringing Generative AI to the Intelligent Edge
- August 7, 2025
- by Davis Sawyer
Generative AI is no longer confined to the cloud. With NXP’s eIQ® GenAI Flow, developers can now run large language models (LLMs)—like Llama and Qwen—directly on embedded edge devices securely, efficiently and close to the data. This paradigm shift unlocks new opportunities for real-time intelligence across industries, from automotive to industrial automation.
Built as a complete software deployment pipeline, eIQ GenAI Flow simplifies the once-daunting task of implementing generative AI models on power- and compute-constrained systems. It combines the latest model optimization techniques like quantization with hardware acceleration from NXP’s eIQ Neutron NPU to make GenAI practical and performant—right at the edge.
Smarter AI, Locally Deployed
At its core, GenAI Flow helps overcome the traditional barriers of running advanced models in embedded environments. The pipeline already enables today’s most powerful open language models, with support for multimodal and vision-language models (VLMs) soon. GenAI Flow provides the necessary optimizations out-of-the-box for real-time execution on application processors like the i.MX 95—the kind of performance needed for conversational AI, physical AI and more.
GenAI is moving from the cloud to the edge, so what does that mean for embedded developers? Learn more by listening to our EdgeVerse Techcast episode on Apple Podcasts, Spotify or YouTube.
By using accuracy-preserving quantization techniques such as integer 8 and 4 (INT8 and INT4) precision, we can fully leverage the Neural Processing Unit (NPU) for inference acceleration. Using GenAI Flow dramatically improves response speed and power efficiency on-device. For example, time to first token (TTFT)—a key metric for any GenAI application—can be reduced from 9.6 seconds on an Arm Cortex CPU (Float32 precision) to less than 1 second on the Neutron NPU with INT8 quantization. This enables captivating, real-time AI experiences, without requiring power-hungry servers or cloud infrastructure.
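As a rough illustration of the ideas in the paragraph above, here is a generic PyTorch sketch of post-training INT8 quantization plus a crude time-to-first-token measurement. This is not NXP's eIQ toolchain, and the tiny model is a hypothetical stand-in; it only shows the general technique.

```python
# Illustrative only: generic PyTorch dynamic INT8 quantization plus a simple
# time-to-first-token (TTFT) measurement. This is NOT NXP's eIQ GenAI Flow
# toolchain; it just sketches the ideas described in the article.
import time
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Stand-in for a small language model (hypothetical, for illustration)."""
    def __init__(self, vocab=32000, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.ff = nn.Sequential(nn.Linear(dim, dim * 4), nn.ReLU(), nn.Linear(dim * 4, dim))
        self.head = nn.Linear(dim, vocab)

    def forward(self, ids):
        x = self.ff(self.embed(ids))
        return self.head(x)

model = TinyLM().eval()

# Post-training dynamic quantization: Linear-layer weights are stored as INT8,
# shrinking the model and favouring integer math on CPUs/NPUs.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def time_to_first_token(m, prompt_ids):
    """Rough TTFT: wall-clock time until the first next-token prediction is ready."""
    start = time.perf_counter()
    with torch.no_grad():
        logits = m(prompt_ids)
        first_token = int(logits[0, -1].argmax())
    return time.perf_counter() - start, first_token

prompt = torch.randint(0, 32000, (1, 64))
for name, m in [("fp32", model), ("int8", quantized)]:
    ttft, tok = time_to_first_token(m, prompt)
    print(f"{name}: TTFT ~{ttft * 1000:.1f} ms (first token id {tok})")
```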
Generative AI is driving innovations at the edge. GenAI Flow, included with NXP's eIQ Toolkit, makes enabling Gen AI at the edge simple and secure.
GenAI Flow also supports small language models (SLMs), which are lighter, yet still capable of delivering high-quality results. The pipeline offers flexible execution across central processing unit (CPU), NPU or a hybrid configuration, allowing developers to tune performance based on their specific product needs.
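To illustrate the flexible-execution idea in generic terms, here is a small sketch using ONNX Runtime execution providers. NXP's NPU is reached through its own eIQ tooling, so the provider choice and the tiny exported model below are assumptions for illustration only, not the GenAI Flow API.

```python
# Illustrative only (not NXP's eIQ APIs): export one model, then pick an
# execution backend at runtime. On NXP silicon the NPU is reached through the
# vendor toolchain; the pattern of selecting a backend per product is the same.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# A tiny stand-in model (hypothetical), exported once to a portable ONNX file.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).eval()
dummy = torch.randn(1, 64)
torch.onnx.export(model, dummy, "tiny_model.onnx",
                  input_names=["input"], output_names=["logits"])

# Prefer an accelerator provider where present, otherwise fall back to CPU.
preferred = ["CPUExecutionProvider"]  # swap in an accelerator provider if available
available = ort.get_available_providers()
providers = [p for p in preferred if p in available] or ["CPUExecutionProvider"]

session = ort.InferenceSession("tiny_model.onnx", providers=providers)
logits = session.run(["logits"], {"input": dummy.numpy().astype(np.float32)})[0]
print("ran on:", session.get_providers(), "output shape:", logits.shape)
```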
Adding Context with RAG
A defining feature of GenAI Flow is the built-in support for retrieval-augmented generation (RAG). This technique allows LLMs to access domain-specific or private data sources—such as device and service manuals, internal PDFs and equipment maintenance logs—without having to retrain the original model. RAG injects the relevant external knowledge as a vector database stored on the edge device, enabling highly contextual, grounded responses that can reduce hallucinations and prevent certain errors in judgement.
RAG is particularly powerful for edge use cases because all data processing happens locally. This protects sensitive information while delivering dynamic, on-demand AI responses. Developers can simply turn a new document into a highly compact, LLM-friendly database and the model immediately adopts the additional context—no retraining required! This efficiency alone can save millions of dollars and energy spent on numerous iterations of GenAI fine-tuning in data centers.
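A minimal sketch of the RAG pattern described above, using a TF-IDF index as a stand-in for the on-device vector database. The documents and helper functions are hypothetical, and this is not the GenAI Flow API; it only shows the retrieve-then-prompt flow.

```python
# Minimal, illustrative RAG sketch (not NXP's GenAI Flow API): index local
# documents as vectors, retrieve the most relevant passages for a query, and
# prepend them to the prompt so the language model answers from local context.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [  # e.g. chunks of a device manual stored on the edge device
    "To reset the controller, hold the power button for ten seconds.",
    "Error E42 indicates a blocked coolant line; flush the line and restart.",
    "Firmware updates are applied from the maintenance menu under Settings.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)  # the on-device "vector database"

def retrieve(query: str, k: int = 2):
    """Return the k documents most similar to the query (cosine similarity)."""
    q = vectorizer.transform([query])
    scores = (doc_matrix @ q.T).toarray().ravel()
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt from retrieved context plus the user question."""
    context = "\n".join(retrieve(query))
    return f"Use only the context below to answer.\nContext:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What does error E42 mean?"))
# The assembled prompt would then be passed to the locally running LLM or SLM.
```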
Real-World Impact: From Cars to Robots
GenAI Flow is already being used across multiple industries where low-latency performance and data privacy are critical.
In automotive, AI-powered infotainment systems can respond to natural voice commands by referencing service manuals embedded in the vehicle. This creates a seamless, hands-free experience without the typical connectivity requirements.
In healthcare, touchless AI interfaces let clinicians securely access procedure or patient data using voice prompts—an ideal solution for reducing physical contact and contamination risk in sensitive environments.
AICHI, the AI controller for health insights, securely collects and analyzes multimodal health and other sensor data in real time, detecting early anomalies and enabling proactive, personalized care.
In mobile robotics, generative AI models interpret written instructions and visual inputs—using optical character recognition (OCR) and RAG—to take context-aware actions. These systems move beyond basic automation and into intelligent interaction between humans and environments.
This 3D perception sensor fusion demo showcases trusted spatial perception at the edge, operating in dynamic and uncertain environments.
In industrial automation, AI assistants help technicians troubleshoot machine issues using real-time sensor data and maintenance documentation—all processed locally, even in remote or low-bandwidth settings.
Across these scenarios, GenAI Flow offers developers a powerful and privacy-conscious framework for building intelligent edge solutions.
What’s Next for GenAI at the Edge?
The next evolution of GenAI at the edge is multimodal and agentic. Future systems will blend together voice, vision and language inputs to create richer, more intuitive user experiences. With GenAI Flow, this convergence is already underway, enabling unified edge pipelines that can reason and act from a combination of input types.
There’s also a strong focus on continuing to optimize edge AI performance—both in scaling up support for larger models and by making smaller models even faster. This includes advancements in quantization, execution flexibility and support for increasingly compact LLM architectures.
As AI systems become more adaptive and locally responsive, access to the best tooling becomes ever more critical. GenAI Flow is designed with scalability in mind, helping developers integrate today’s rapidly evolving AI capabilities into products across microprocessor unit (MPU) platforms and potentially even into future microcontroller unit (MCU)-class devices.
Hey Boeing,
NXP is showing how Edge GenAI is becoming reality right now. Akida operates in exactly the same segment – and with the GenAI FPGA Development Platform, it’s already in the game technologically. The difference? No flashy marketing, just solid substance.
When the market wakes up to this, I can only see one thing happening: panic buying.
I’ve also come across this from NXP.
Keeping It Local: Bringing Generative AI to the Intelligent Edge
Developers are moving generative AI from the cloud to the edge with NXP’s eIQ® GenAI Flow. This software pipeline allows developers to run large language models on embedded devices, enhancing real-time intelligence across industries. It combines model optimization techniques and hardware... (www.nxp.com)
I see that Anduril and Palantir score a mention in Jonathan Tapson's LinkedIn post.
As Dr Tapson says, "the US AI industry is becoming increasingly integrated with Defense and associated Departments in the US Government, and companies such as Anduril and Palantir are showing the way. BrainChip will be part of this integration".
An eventual partnership with Anduril is looking very plausible IMO.
Remember Sean spoke about a headset for military applications at the AGM.
My guess is that our technology will be incorporated into Anduril's "Eagle Eye" headset.
View attachment 89335
View attachment 89333
View attachment 89332
View attachment 89543
Lockheed explicitly says future interceptors will require “space-based sensors and onboard processing” for in-orbit targeting decisions.
This is the kind of radiation-tolerant edge AI where BrainChip has a precedent. I’m thinking here of Frontgrade Gaisler, whose radiation-hardened systems for space have already licensed Akida IP.
View attachment 89544
EXTRACT 1
View attachment 89548
EXTRACT 2
View attachment 89547
Layered Defense, Orbital Advantage
Space is no longer a supporting player. It’s the heart of America’s multi-layered missile defense strategy and the cornerstone of America’s future defense. (www.lockheedmartin.com)