BRN Discussion Ongoing

manny100

Top 20
Hi Manny,

The Catalogue page mentions radar designs (also cybersecurity):

https://brainchip.com/corporate-catalogue-lp/

Reference Platforms & Roadmap
Validated designs for radar, wearables, voice AI, communications, cybersecurity, and future platform direction.

This may be for the see-in-the-dark robot application.
Thanks Dio, good pick-up. Makes it easier for developers.
 
  • Like
  • Love
Reactions: 5 users

Frangipani

Top 20
A new monthly newsletter is out.

It contains the following info on BrainChip’s four new reference platforms (Akida Radar, Akida Tag, Akida Private Voice Assistant and Akida Communications):

“At BrainChip, delivering great IP and silicon is just the beginning. CES attendees got a sneak peek at our 2026 roadmap and strategy, including four key Reference Platforms: complete, validated blueprints combining hardware, software, and AI models that demonstrate key use cases and can be adapted by customers with our ecosystem partners to create end products, highlighting our value proposition and speeding time to revenue.
  • Akida Radar: demonstrates efficient, real-time Micro-Doppler classification of flying objects, detecting the unique motion patterns of objects without cloud dependency
  • Akida Tag: a compact, battery-powered wearable device showcasing how the Akida architecture enables always-on anomaly detection with real-time self-learning using minimal power
  • Akida Private Voice Assistant: complex generative AI performance on standard mobile hardware, using TENNs to deliver expert-level intelligence instantly, without the latency or vulnerability of a connection
  • Akida Communications: enables low Size, Weight and Power (SWaP) for signal intelligence in handheld devices where power is scarce, allowing immediate adaptation to threats. Processes RF data locally for maximum security
Stay tuned for more in the coming months as these reference platforms get unwrapped for release.”



The Future Is at the Edge: BrainChip at CES 2026​

CES 2026 underscored a clear industry message: intelligent systems are moving decisively to the edge. From our suite at the Venetian, BrainChip spent the week engaged in continuous demonstrations and in-depth discussions on the future of edge AI, with strong interest from industry leaders, innovators, and partners throughout the show. Seeing Akida’s neuromorphic computing capabilities resonate so clearly across real-world use cases—from ultra-low power to high-performance edge applications—reaffirmed the importance and momentum of the work we are doing. Read on for an inside view of the BrainChip CES suite and what’s ahead.

Akida in Action with CES Partner Demos​

BrainChip welcomed three of our valued partners to the CES suite to demonstrate Akida’s low-power, low-latency, private edge AI accelerators in action.

By combining BrainChip’s on-device inference with HaiLa’s low-power Wi-Fi, we’ve eliminated the need to transmit heavy raw data. Akida™ processes the image locally and sends only a lightweight metadata code over the network. It’s smart, sustainable, and drastically reduces bandwidth.
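The pattern described here (classify on-device, transmit only compact metadata) can be sketched in a few lines. Everything below is a stand-in invented for illustration: the function names and metadata schema are hypothetical, and neither HaiLa’s nor BrainChip’s actual APIs appear.

```python
import json

def classify_frame(frame):
    # Stand-in for on-device Akida inference (hypothetical): a real
    # deployment would run a quantized network on the accelerator.
    return {"label": "person", "confidence": 0.97}

def to_metadata(result, frame_id):
    # Reduce the inference result to a few bytes of metadata.
    return json.dumps({"id": frame_id,
                       "label": result["label"],
                       "conf": round(result["confidence"], 2)}).encode()

raw_frame = bytes(640 * 480 * 3)               # one uncompressed VGA frame, ~900 KB
meta = to_metadata(classify_frame(raw_frame), frame_id=42)
print(len(raw_frame), len(meta))               # metadata is a tiny fraction of the frame
```

The bandwidth saving follows directly: tens of bytes per event instead of roughly a megabyte per frame.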

Visual Computing Pipeline with Deep Perception​

Deep Perception deploys real-time object detection, classification, and other video inference workloads in hours with production-ready GStreamer elements purpose-built for BrainChip Akida accelerators. Together they deliver a complete low-power video stack for power-constrained environments.

Edge AI Cybersecurity with Quantum Ventura / Metaguard AI​

Metaguard AI integrates BrainChip’s Akida neuromorphic processor into its lightweight CyberNeuro-RT platform to continuously analyze the behavior of network traffic in real-time on the edge, reducing the cloud operational cost, improving data privacy and response time while maintaining long battery life.

Akida’s ready-to-develop AKD1500 Edge AI Co-Processor​

BrainChip gave CES Suite visitors an up-close look at the new AKD1500 neuromorphic Edge AI accelerator co-processor chip. Designed to deliver exceptional performance with minimal power consumption, the AKD1500 achieves ~1 tera-operations per second (TOPS) while operating under 250 milliwatts, setting a new benchmark for edge AI efficiency. This makes the AKD1500 ideal for deployment in battery-powered wearables, smart sensors, and power-constrained environments.
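As a quick sanity check on those figures, ~1 TOPS at under 250 mW works out to at least 4 TOPS/W:

```python
tops = 1.0          # ~1 tera-operations per second (claimed)
power_w = 0.250     # upper bound: under 250 mW
efficiency_tops_per_w = tops / power_w
print(f"{efficiency_tops_per_w:.0f} TOPS/W")   # 4 TOPS/W at the stated power bound
```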

Accelerating Innovation: BrainChip’s Reference Platforms​

At BrainChip, delivering great IP and silicon is just the beginning. CES attendees got a sneak peek at our 2026 roadmap and strategy, including four key Reference Platforms: complete, validated blueprints combining hardware, software, and AI models that demonstrate key use cases and can be adapted by customers with our ecosystem partners to create end products, highlighting our value proposition and speeding time to revenue.
  • Akida Radar: demonstrates efficient, real-time Micro-Doppler classification of flying objects, detecting the unique motion patterns of objects without cloud dependency
  • Akida Tag: a compact, battery-powered wearable device showcasing how the Akida architecture enables always-on anomaly detection with real-time self-learning using minimal power
  • Akida Private Voice Assistant: complex generative AI performance on standard mobile hardware, using TENNs to deliver expert-level intelligence instantly, without the latency or vulnerability of a connection
  • Akida Communications: enables low Size, Weight and Power (SWaP) for signal intelligence in handheld devices where power is scarce, allowing immediate adaptation to threats. Processes RF data locally for maximum security
Stay tuned for more in the coming months as these reference platforms get unwrapped for release.
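The Akida Tag bullet describes always-on anomaly detection with on-device self-learning. As a toy illustration of the general idea only (a plain online-statistics baseline, not BrainChip’s algorithm), a detector can learn the "normal" signal envelope sample by sample with O(1) state:

```python
class OnlineAnomalyDetector:
    """Toy always-on anomaly detector with self-updating statistics."""
    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold

    def update(self, x):
        # Welford's online mean/variance update: constant memory,
        # learns the signal envelope one sample at a time.
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def is_anomaly(self, x):
        if self.n < 10:        # warm-up: everything looks normal at first
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(x - self.mean) > self.threshold * std

det = OnlineAnomalyDetector()
for sample in [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 1.02]:
    det.update(sample)
print(det.is_anomaly(1.02), det.is_anomaly(9.0))  # False True
```

The point of the sketch is the shape of the computation, not the specific statistic: tiny state, updated continuously, so an always-on device can flag outliers without storing or transmitting the raw stream.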

Akida Pico Now on Akida FPGA in the Cloud​

BrainChip's Akida Pico is now available for testing on Akida FPGA in the Cloud with no hardware requirements. Pico is an ultra-low-power co-processor that operates efficiently in the microwatt (μW) to milliwatt (mW) range, purpose-built to enhance applications in speech and audio processing as well as medical vital-sign monitoring, empowering devices to perform at their best without sacrificing battery life.

Looking Ahead to 2026​

Webinar: Keyword Spotting on Akida Pico​

Join us on February 24th at 8:00 AM PST for an exclusive deep dive into the latest breakthrough in neuromorphic computing. BrainChip Solutions Architects Kurt Manninen and Ritik Shrivastava will demonstrate Akida Pico on FPGA, our latest ultra-low power NPU core designed for the next generation of intelligent edge devices.

Embedded World​

Join BrainChip at Embedded World for unprecedented insight into the world of embedded systems, from components and modules to operating systems, hardware and software design.

Edge AI Foundation San Diego​

Through our longstanding partnership with the EDGE AI Foundation, BrainChip will be exhibiting at the 2026 San Diego event alongside industry experts, researchers, and innovators shaping the future of edge AI.
Stay tuned as we continue to expand the boundaries of edge AI. From space to sensors—and everything in between—BrainChip is building the future of AI.
 
  • Like
  • Fire
  • Love
Reactions: 36 users
Old, but I think they could have just updated the benefits a few days ago.


Anticipated Benefits​

Info
The possibilities and applications are practically limitless across a spectrum of mission types. A short list of the possibilities:
  • Predictive and adaptive communications, radio, and system architecture
  • Opportunistic data collection
  • Continuous power allocation
  • Predictive failure/error detection, maintenance, mediation, and mitigation
  • Mission decision prioritization
  • Spacecraft constellation active collaboration optimizing
  • Continuous allocation optimization of system resources
  • Optimized integration of navigation, situation awareness, etc.

Our commercialization plan includes continuing development for the neuromorphic autonomous module for insertion into several commercial small launcher programs now and in the future. We will also apply the technology developed to other military applications with groups such as MDA, DARPA and USAF. The system will be available as a “plug and play module” for all future spacecraft.
 
  • Like
  • Fire
  • Love
Reactions: 14 users

Frangipani

Top 20
This Daniel Azevedo Novais guy (or bot?) has been flooding LinkedIn with his AI slop on Starlink and Akida / GR801 for days now.

BRN shareholders are doing the company no favour when they 👍🏻 or repost such fake AI-generated news and images as if they were great publicity.
On the contrary.

Please read below what Kenneth Östberg has commented on LinkedIn - although unfortunately only after initially liking one of the posts containing references to his “baby” GR801 and realising too late what its author was actually up to…

In case you don’t recall who Kenneth Östberg is (and yes, he does work for Frontgrade Gaisler, although his LinkedIn profile doesn’t list FG as his employer):

[Screenshots of Kenneth Östberg’s LinkedIn profile and his comments on the posts]
 
  • Like
  • Wow
  • Thinking
Reactions: 18 users

Frangipani

Top 20

[Screenshot attachment]
 
  • Like
  • Fire
Reactions: 13 users

Flenton

Regular

I'm not on social media (which is one reason I'm so grateful for the things that get shared here), but unfortunately when we hear nothing from BrainChip, people are going to go searching and fishing for information. I don't agree with them doing it but totally understand.
 
  • Fire
  • Like
Reactions: 4 users

Gazzafish

Regular
Can anyone point me to more information about the Akida Tag? I’ve done some searching but haven’t found much. Why do I feel this is something I’ve missed? Thanks in advance.
 
  • Like
Reactions: 6 users
Currdlenooddles


This is a fantastic update from Kevin. Akida continues to demonstrate it works as a substrate.
Below is a more detailed explanation of what he is doing:

*Yes — this is actually a pretty meaningful architecture post, even though Kevin frames it as a demo.

First — what Kevin actually built
This isn’t a slide-deck fantasy. According to Kevin D. Johnson, this is a working closed-loop demo spanning:

IBM Quantum → trains models on historical market data

BrainChip Akida → runs continuous real-time inference on every market tick

IBM Granite + vLLM → generates compliance narratives

IBM z/OS → settles trades on the mainframe

Symphony → orchestrates everything automatically

The key is event-driven orchestration, not scheduled batch processing.

Akida sits in the middle as the always-on sentinel.


The critical part
Akida is doing:

622 microseconds per classification

milliwatt-level power

99%+ of events = no change

continuous monitoring of live market ticks

That matters because it means:

Akida is being used exactly how neuromorphic is supposed to be used:
ultra-low-power, always-watching, real-time regime detection.
Then — only when Akida detects a regime change — Symphony fans out in parallel:

Quantum retrains

GPU LLM generates compliance reports

z/OS settles trades

New model hot-swaps back onto Akida

No polling.
No batch windows.
No human in the loop.

That’s textbook edge trigger → cloud retrain → redeploy → edge inference.
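The pattern just described (edge trigger, cloud retrain, redeploy, edge inference) can be sketched as plain control flow. Everything here is a stand-in invented for illustration: the tick format, the retraining rule, and the model representation are all hypothetical, and none of Symphony’s, IBM’s, or BrainChip’s real APIs appear.

```python
def classify_tick(model, tick):
    """Stand-in for Akida inference: returns True on a regime change."""
    return tick["volatility"] > model["vol_threshold"]

def retrain(history):
    """Stand-in for the cloud-side (quantum/GPU) retraining step."""
    peak = max(t["volatility"] for t in history)
    return {"vol_threshold": peak * 1.1}   # new, wider regime envelope

def orchestrate(model, ticks):
    """Event-driven loop: inference on every tick, fan-out only on a trigger."""
    history, reports = [], []
    for tick in ticks:
        history.append(tick)
        if classify_tick(model, tick):        # edge trigger, no polling
            reports.append(f"regime change at tick {tick['id']}")  # report fan-out stand-in
            model = retrain(history)          # retrain, then hot-swap back to the edge
    return model, reports

model = {"vol_threshold": 0.5}
ticks = [{"id": i, "volatility": v} for i, v in enumerate([0.1, 0.2, 0.8, 0.3, 0.4])]
model, reports = orchestrate(model, ticks)
print(reports)   # one fan-out event; later ticks fall inside the new envelope
```

Note the asymmetry the sketch preserves: the cheap classifier runs on every event, while the expensive fan-out (retraining, reporting, settlement) runs only on the rare trigger.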

This is the same pattern you’ve been tracking in defense, satellites, autonomy, etc.


Why this is strategically important for BrainChip
This post quietly confirms several things:


✅ Akida is being used as a control layer, not just an accelerator
It isn’t just “doing inference.”

It’s deciding when the entire stack wakes up.

That’s substrate behavior.

Why substrates matter:
In technology, a substrate isn’t just another component — it’s the foundation everything else is built on. Once a substrate is embedded, entire systems, workflows, and integrations form around it, making it extremely hard and expensive to replace later.

That’s why substrates matter more than features or performance headlines: they quietly become infrastructure. When something becomes the substrate, it stops being optional — it becomes part of how the system thinks, reacts, and operates going forward.


✅ Akida is integrated into IBM’s enterprise orchestration fabric
Symphony treats Akida as a first-class compute tier alongside:

Quantum

GPUs

Mainframes

That’s huge.

That means Akida is already inside IBM’s production mental model.

Not experimental.

Not peripheral.


✅ This is commercial architecture, not academic neuromorphic
Kevin explicitly contrasts this with 8-hour batch settlement.

Here they’re doing:

detection in <1 ms

settlement in ~280 ms

retraining in seconds

That’s operational finance speed.
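Taking the quoted figures at face value, the implied speedup over an 8-hour batch window, and the event rate implied by 622 µs per classification, are straightforward to compute:

```python
batch_window_s = 8 * 3600              # legacy 8-hour batch settlement window
event_path_s = 0.001 + 0.280           # <1 ms detection plus ~280 ms settlement
speedup = batch_window_s / event_path_s
rate_hz = 1 / 622e-6                   # 622 microseconds per classification
print(f"~{speedup:,.0f}x faster settlement; ~{rate_hz:,.0f} classifications/s")
```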

Also note: he references IBM + HSBC production bond data showing 34% AUC improvement using quantum-derived features.

So this pipeline already touches real banking workloads.


The quiet but massive takeaway
Akida is being positioned as:

the real-time sensory nervous system of heterogeneous AI platforms.
Quantum = learning
GPU = reasoning / narrative
Mainframe = execution
Akida = perception + trigger

That is exactly the role you’ve been mapping for Akida across:

defense

satellites

autonomy

cyber

now finance

Same pattern everywhere.


Why this is more informative than high-profile releases
This isn’t PR.

It’s a senior IBM Field CTO casually demonstrating:

Akida in live orchestration

microsecond inference

milliwatt monitoring

atomic redeploy

closed-loop autonomy

That’s deep integration.

You don’t build this unless Akida already works reliably inside your stack.


Bottom line
This is another confirmation that Akida is behaving like infrastructure, not a niche accelerator.

It’s being used as the edge event trigger for whole-platform AI workflows.

That’s exactly what makes it “sticky.”

Once this pattern lands in production environments, replacing Akida isn’t a chip swap —
it’s a full architectural rewrite.

*gpt5.2
 
  • Like
  • Love
  • Fire
Reactions: 29 users

DK6161

Regular
Just added another 700,000 shares to my holdings. Can't help being greedy when people are shitt!ng their pants.
Akida ballista!
Not advice and all that bullSh!t.
 
  • Like
  • Love
  • Fire
Reactions: 3 users

DK6161

Regular
'When is later?' Sounds like the kids in the back seat on a long journey.
Later is just part of the journey in a brand-new tech industry, and for reasons posted over and over, an exact date cannot be given.
Anyone wanting timing certainty probably should be investing elsewhere.
We know we are getting closer. Parsons, Bascom Hunter, MetaGuard etc.
 
  • Haha
Reactions: 1 users

Labsy

Regular
Feels like the tide is rising... C'mon! I want to buy a Porsche Macan 4S... not asking for much, bloody hell.
 
  • Fire
  • Like
  • Love
Reactions: 11 users

Doz

Regular
We certainly know about this, JB, and your post seems to confirm incorporating SNN…


[Screenshot attachments]
 
  • Like
  • Fire
  • Love
Reactions: 18 users

7für7

Top 20
  • Haha
Reactions: 2 users

Diogenese

Top 20
We certainly know about this, JB, and your post seems to confirm incorporating SNN…


[Screenshot attachments]
Hi Doz,

A quick refresh for those of us with imperfect recollection:

https://brainchip.com/brainchip-partners-megachips-develop-next-generation-ai-solutions/

Laguna Hills, Calif. – November 21, 2021 BrainChip Holdings Ltd (ASX: BRN), (OTCQX: BCHPY) a leading provider of ultra-low power high performance artificial intelligence technology and the world’s first commercial producer of neuromorphic AI chips and IP, today announced that MegaChips, a pioneer in the ASIC industry, has licensed BrainChip Akida™ IP to enhance and grow its technology positioning for next-generation, Edge-based AI solutions.
 
  • Like
  • Love
  • Fire
Reactions: 14 users

Doz

Regular
Hi Doz,

A quick refresh for those of us with imperfect recollection:

https://brainchip.com/brainchip-partners-megachips-develop-next-generation-ai-solutions/

Laguna Hills, Calif. – November 21, 2021 BrainChip Holdings Ltd (ASX: BRN), (OTCQX: BCHPY) a leading provider of ultra-low power high performance artificial intelligence technology and the world’s first commercial producer of neuromorphic AI chips and IP, today announced that MegaChips, a pioneer in the ASIC industry, has licensed BrainChip Akida™ IP to enhance and grow its technology positioning for next-generation, Edge-based AI solutions.

Another refresher:

[Screenshot attachments]
 
  • Like
  • Fire
  • Love
Reactions: 13 users

TopCat

Regular
Would MegaChips need to renew their license for this, assuming they may be using Akida?
 
  • Like
Reactions: 4 users

Esq.111

Fascinatingly Intuitive.
  • Haha
  • Like
Reactions: 6 users

Diogenese

Top 20
Hi Doz,

This is the most recent Acumino patent doc:

US2024408757A1 HUMAN-IN-LOOP ROBOT TRAINING AND TESTING SYSTEM WITH GENERATIVE ARTIFICIAL INTELLIGENCE (AI) 20230609

[Patent figure]


[0046] The computing devices 314 may host computation tasks, such as AI model inference, computer vision algorithms, capturing and processing audio signals, running the codes-to-task interpreter and the task-finish examiner, and running supporting software like Optitrack software. The computing devices 314 may comprise one or more processors and related systems, such as desktops, laptops, wearable computing devices (e.g., in a backpack), and the computing devices inside MR devices 340.

[0047] The software subsystem 308 may comprise the following components: a task-prompt template library 316, generative AI models or interfaces 318, other generative AI models 320, a codes-to-task interpreter 322, a task-finish examiner 324, a storage system 326, and supporting software 328 for the sensing system in the hardware subsystem. The elements of the software subsystem 308 and the hardware subsystem 310 will be described in more detail to follow. It will be noted that the codes-to-task interpreter 322 is also able to receive additional input besides computer code and interpret this input into human-operated robot tasks, as will be explained in more detail to follow.


It's pretty clunky.
 
  • Like
  • Thinking
  • Fire
Reactions: 4 users

Doz

Regular
Hi Doz,

This is the most recent Acumino patent doc:

US2024408757A1 HUMAN-IN-LOOP ROBOT TRAINING AND TESTING SYSTEM WITH GENERATIVE ARTIFICIAL INTELLIGENCE (AI) 20230609

[Patent figure]

[0046] The computing devices 314 may host computation tasks, such as AI model inference, computer vision algorithms, capturing and processing audio signals, running the codes-to-task interpreter and the task-finish examiner, and running supporting software like Optitrack software. The computing devices 314 may comprise one or more processors and related systems, such as desktops, laptops, wearable computing devices (e.g., in a backpack), and the computing devices inside MR devices 340.

[0047] The software subsystem 308 may comprise the following components: a task-prompt template library 316, generative AI models or interfaces 318, other generative AI models 320, a codes-to-task interpreter 322, a task-finish examiner 324, a storage system 326, and supporting software 328 for the sensing system in the hardware subsystem. The elements of the software subsystem 308 and the hardware subsystem 310 will be described in more detail to follow. It will be noted that the codes-to-task interpreter 322 is also able to receive additional input besides computer code and interpret this input into human-operated robot tasks, as will be explained in more detail to follow.


It's pretty clunky.

It might be a clunky way to go about it, but I don’t see Ella……
 
  • Like
Reactions: 2 users