BRN Discussion Ongoing

Rach2512

Regular
 
Reactions: 15 users

Rach2512

Regular
23k+ followers



Screenshot_20260313_162451_Samsung Internet.jpg
Screenshot_20260313_162459_Samsung Internet.jpg
Screenshot_20260313_162505_Samsung Internet.jpg
Screenshot_20260313_162512_Samsung Internet.jpg
Screenshot_20260313_162527_Samsung Internet.jpg
 

Attachments

  • 1773289545813.pdf
    12.6 MB · Views: 26
Reactions: 25 users

manny100

Top 20
OpenAI investor Vinod Khosla believes AI will be able to do 80% of all jobs by 2030. Here’s how life could be affordable after mass unemployment
Interesting article.
It does not matter whether it's 80% as predicted or whatever the % from 2030 - the fact is we are heading towards AI taking jobs and we own a share (BRN) that will be providing the robotics etc. that will play a part in the transition.

"
Khosla’s deflationary vision is built on a series of sectors collapsing in cost. Khosla believes AI and robotics will be able to produce most currently pricey goods very cheaply, creating a deflationary economy in which almost all labor and expertise will become free."
" He predicts that by 2040, $10,000 could buy you more than a $100,000 income could today, including your house, education, food, and health care. "
" According to Khosla, nations that resist AI adoption—he cited Germany in particular, where robots are currently prohibited from working in retail on Sundays under labor protection laws—risk falling catastrophically behind."
 
Reactions: 10 users

Bravo

Meow Meow 🐾
OpenAI investor Vinod Khosla believes AI will be able to do 80% of all jobs by 2030. Here’s how life could be affordable after mass unemployment
Interesting article.
It does not matter whether it's 80% as predicted or whatever the % from 2030 - the fact is we are heading towards AI taking jobs and we own a share (BRN) that will be providing the robotics etc. that will play a part in the transition.

"
Khosla’s deflationary vision is built on a series of sectors collapsing in cost. Khosla believes AI and robotics will be able to produce most currently pricey goods very cheaply, creating a deflationary economy in which almost all labor and expertise will become free."
" He predicts that by 2040, $10,000 could buy you more than a $100,000 income could today, including your house, education, food, and health care. "
" According to Khosla, nations that resist AI adoption—he cited Germany in particular, where robots are currently prohibited from working in retail on Sundays under labor protection laws—risk falling catastrophically behind."


Hi Manny,

I think this post is positioning things in a way that’s a bit misleading.

"It does not matter whether its 80% as predicted or whatever the % from 2030 - the fact is we are heading towards AI taking jobs and we own a share (BRN) that will be providing the robotics that will play a part in the transition."

Predictions along the lines of “we will own part of the future of robotics that will play a part in the transition because we hold BRN shares” feel like a stretch.

As shareholders, we own stock in a company that currently has very limited revenue and no large-scale commercial deployment yet!

Whether BrainChip ultimately becomes a meaningful player in edge AI is still something the company needs to prove.

I hope it does (that’s why many of us are invested), but presenting speculative outcomes as if they’re inevitable doesn’t really help the discussion in my humble opinion.
 
Last edited:
Reactions: 7 users

ChrisBRN

Member

IMG_0718.jpeg

IMG_0719.jpeg
 
Last edited:
Reactions: 17 users

GazDix

Regular
Seemingly good recent partnerships and news about Kevin, but the same single issue continues with this company - transparency.

There is a collective of AI projects called 'subnets' run by Bittensor, which uses Substrate and open-source code on its blockchain and is solving some of the craziest problems in the world, such as protein-folding efficiency, shared GPU usage, etc. Don't worry, they don't do neuromorphic, so BC is safe.
Bittensor is open source, and Brainchip is closed, of course, for a reason: patents protect our IP. Because of this, our company should be accountable to us shareholders with some kind of progress, and should increase the value of the company.
The SP is being held down because our company wants it down, in my opinion. All the research here has proven that to be the case, especially since, on adoption-cycle models, neuromorphic is still a few years away from its 'ChatGPT moment' of Jan 2024 (many users in a ridiculously short period of time).

The financial world in which Brainchip lives is a speculative one - one that was highlighted by the MB 'partnership' news in early 2022, when we had a massive jump in share price in a short time. The speculative financial world relies on news of potential. Potential which I believe is squandered of any value if these 'NDAs, partnerships etc.' stay hidden. In a closed-source, publicly listed company like Brainchip that has massive potential... throw a dog a bone, please. The market is valuing BC correctly, because with only secrets, you are not to be trusted.
 
Reactions: 8 users

Diogenese

Top 20

View attachment 96159
View attachment 96161
They've cloned Anil's brain!
 
Reactions: 2 users

manny100

Top 20
Hi Manny,

I think this post is positioning things in a way that’s a bit misleading.

"It does not matter whether its 80% as predicted or whatever the % from 2030 - the fact is we are heading towards AI taking jobs and we own a share (BRN) that will be providing the robotics that will play a part in the transition."

Predictions along the lines of “we will own part of the future of robotics that will play a part in the transition because we hold BRN shares” feel like a stretch.

As shareholders, we own stock in a company that currently has very limited revenue and no large-scale commercial deployment yet!

Whether BrainChip ultimately becomes a meaningful player in edge AI is still something the company needs to prove.

I hope it does (that’s why many of us are invested), but presenting speculative outcomes as if they’re inevitable doesn’t really help the discussion in my humble opinion.
Hi Bravo, not long ago we established that MegaChips has been demonstrating robots containing AKIDA in its showroom since September 2025.
MegaChips is producing robots in response to Japan's projected labor shortages.
If you do not believe robotics and AI are the future, then there is not much point holding BRN.
Sean has said many times that the AI revolution will make previous revolutions, e.g. the Industrial Revolution, look like nothing.
Investing in the future is by its very nature speculation, and that is what Brainchip has been from the very start.
Your post seems like it's come from a 'defeated' holder, IMO.
Who would have believed years ago that airborne or sea drones would be effective in warfare?
 
Reactions: 10 users
Hi Manny,

I think this post is positioning things in a way that’s a bit misleading.

"It does not matter whether its 80% as predicted or whatever the % from 2030 - the fact is we are heading towards AI taking jobs and we own a share (BRN) that will be providing the robotics that will play a part in the transition."

Predictions along the lines of “we will own part of the future of robotics that will play a part in the transition because we hold BRN shares” feel like a stretch.

As shareholders, we own stock in a company that currently has very limited revenue and no large-scale commercial deployment yet!

Whether BrainChip ultimately becomes a meaningful player in edge AI is still something the company needs to prove.

I hope it does (that’s why many of us are invested), but presenting speculative outcomes as if they’re inevitable doesn’t really help the discussion in my humble opinion.
Bravo. The spirit of Bravo is normally one of positive and joyful flair, yet every now and then what seems to be a Debbie Downer pops her head up 🤪. Speculative outcomes and inevitable outcomes don't help. Ummmm.
We're getting real, real close, old girl 👧 (young girl), keep your chin up where it normally is. Robots and brainchips go together like bangers and mash, or toast and Vegemite, or 🍓 and cream.
I hang my hat on 2026 being HUGE.
My old car number plate was BR8_026.
So expect Augustine to bear large fruit for the decent people in Brainchip.
Good day to all.
 
Last edited:
Reactions: 14 users

Frangipani

Top 20
[…] Most of you will be familiar with the term Brain-Computer-Interface (BCI) - sometimes also called Brain-Machine-Interface (BMI) or Mind-Machine-Interface (MMI) - but may not be fully aware what it actually means.

In a December 2015 publication, Christoph Guger (that’s, by the way, where the G in g.tec comes from - it stands for Guger Technologies) and two of his co-authors described a BCI as follows:

“A BCI is a device that reads voluntary changes in brain activity, then translates these signals into a message or command in real-time (…) Most BCIs rely on the electroencephalogram (EEG). These signals (also called “brainwaves”) can be detected with electrodes on the surface of the head. Thus, these “noninvasive” sensors can detect brain activity with very little preparation. Some BCIs are “invasive”, meaning that they require neurosurgery to implant sensors. These BCIs can provide a much more detailed picture of brain activity, which can facilitate prosthetic applications or surgery for epilepsy and tumor removal.”


The implants used in clinical trials by Neuralink (founded in 2016 by Elon Musk and a team of eight scientists and engineers) are the most well-known examples of invasive BCIs. And while we BRN shareholders tend to roll our eyes when our company’s silicon gets confused with Musk’s “brain chips”, there is no doubt that BrainChip’s technology is also being evaluated in this field of BCIs.


In 2020, g.tec medical engineering introduced the BCI & Neurotechnology Spring School, a free ten-day virtual event - now held annually - which has become the world’s largest neurotech event, orchestrated from a small town in Austria called Schiedlberg. Participants can access 140 hours of cutting-edge education and even earn 14 ECTS* credits and an official exam certificate at no cost.
*ECTS = European Credit Transfer and Accumulation System

I noticed that one of last year’s 82,000 (!) participants was Temi Mohandespour, who used to work as a research scientist at BrainChip’s now closed Perth office from March 2021 until January 2025. She has since moved to Berlin and now works for Data4life, a non-profit organisation, whose mission is to digitalise health data for research (www.data4life.care/en/).

https://www.linkedin.com/posts/temi-mohandespour_here-is-a-big-thank-you-to-gtec-medical-activity-7193097495894208513-9euk?


View attachment 92732

Several of her colleagues at BrainChip gave her above “thank you” post a 👍🏻, including our CTO.

While I wasn’t able to find out anything concrete about what Temi Mohandespour may have been working on relating to BCIs during her last nine months at BrainChip post-Spring School, I happened to discover the LinkedIn profile of someone else who worked not only on one, but on two BCI projects utilising Akida - although not as an employee of BrainChip:

https://www.linkedin.com/in/hammouamri-ilyass/


View attachment 92729


Ilyass Hammouamri, who recently defended his PhD thesis at the Université de Toulouse (https://doctorat.univ-toulouse.fr/as/ed/cv.pl?mat=140961&site=EDT)
and whose PhD supervisor was Timothée Masquelier (one of the four co-inventors of the JAST patent that BrainChip first licensed and later acquired), was a part-time research engineer at Neurobus between September 2024 and April 2025.

It was during that time - still under Gregor Lenz as CTO - that he “developed a Proof of Concept solution for motor imagery classification from a Dry EEG Headset using a BrainChip Akida neuromorphic chip for robotic arm control”.


“Motor imagery (MI) is a mental process in which a subject vividly imagines performing a movement without any actual physical execution. MI is widely used in BCI systems to enable control of external devices, such as a cursor on a screen or a robotic arm, through brain activity.”

https://docs.medusabci.com/kernel/1.4/tutorials.php (by the Biomedical Engineering Group at the University of Valladolid, Spain)
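
For a feel of what motor imagery classification involves in practice, here is a minimal, purely illustrative sketch of a classic MI baseline - band-power features over EEG channels feeding a linear classifier. It assumes NumPy, SciPy and scikit-learn, uses random placeholder data, and is not code from the Neurobus PoC (which ran its model on an Akida chip rather than on a CPU):

```python
# Illustrative only - NOT the Neurobus PoC. A classic motor-imagery
# baseline: band-power features from multi-channel EEG + a linear model.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 250  # sampling rate in Hz (a typical value for dry EEG headsets)

def band_power(epoch, lo, hi):
    """Mean power in [lo, hi] Hz for each EEG channel of one epoch."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[:, mask].mean(axis=1)

def features(epoch):
    # Mu (8-12 Hz) and beta (13-30 Hz) rhythms desynchronise over the
    # motor cortex when movement is imagined - the standard MI cue.
    return np.concatenate([band_power(epoch, 8, 12),
                           band_power(epoch, 13, 30)])

# epochs: (n_trials, n_channels, n_samples); labels: e.g. 0=left, 1=right hand
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 8, 2 * FS))  # placeholder recordings
labels = rng.integers(0, 2, size=40)           # placeholder labels

X = np.stack([features(e) for e in epochs])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
command = clf.predict(X[:1])[0]  # e.g. mapped to a robotic-arm movement
```

In a deployment like the one described above, the classifier (typically a small neural network rather than logistic regression) would be quantized and run on the Akida chip itself, so the headset-to-arm loop stays on low-power hardware.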


I wonder whether this project may have been the continuation of the BMI* project that Neurobus’s first employee, Ljubica Cimeša, had developed in collaboration with Airbus, which also used EEG signals for robotic control:

*The terms Brain-Computer-Interface (BCI) and Brain-Machine-Interface (BMI) are often used interchangeably.


https://www.linkedin.com/in/cimesa-ljubica/

View attachment 92736 View attachment 92737

But his part-time contract job with Neurobus was not the first time Ilyass Hammouamri had been involved in BCI research using Akida: During his time at CNRS (Centre national de la recherche scientifique) CerCo (Centre de Recherche Cerveau et Cognition) in Toulouse, where he was a PhD candidate in Timothée Masquelier‘s NeuroAI lab from September 2021 to February 2025, he “worked on a joint project between different labs and BrainChip: Decoding speech from ECoG brain signals”.
Which means there must have been at least one more lab involved in that project, possibly more.

ECoG stands for electrocorticography. In contrast to EEG, it involves recording electrical activity directly from the surface of the brain and thus requires a craniotomy.


View attachment 92731

Here is a good illustration I found online, which happens to be from a video by g.tec medical engineering:

View attachment 92730

I have no idea whether or not any of g.tec medical engineering’s products (such as wearable EEG headsets, biosignal amplifiers) were actually used for either of the two BCI projects that Ilyass Hammouamri was involved in.

What I can tell you, though, is that they list Airbus under “Happy Customers” alongside quite a few other interesting names (https://www.gtec.at/).

Stumbled across more info today about the “joint project between different labs and BrainChip” (“Decoding speech from ECoG brain signals”) that Ilyass Hammouamri 👆🏻 was involved in during his time at CNRS (Centre national de la recherche scientifique) CerCo (Centre de Recherche Cerveau et Cognition) in Toulouse, where he was a PhD candidate in Timothée Masquelier’s NeuroAI lab from September 2021 to February 2025:

The ANR (Agence Nationale de la Recherche / French National Research Agency) BRAIN-Net project started in December 2020 and ran for four years, which means it ended about a year ago.

It was coordinated by Blaise Yvert from the Grenoble Institute of Neuroscience, whose goal it is to “restore speech to people who are paralyzed and who have lost their vocal abilities. Along with his team at the Grenoble Institute of Neuroscience, he is developing a system capable of decoding the brain signals associated with speech, so that it can be produced by an external device. This is referred to as a brain-computer interface.” (quoted from the article on Blaise Yvert below)




View attachment 93718 View attachment 93719


FYI: The linked article co-authored by researchers from France and Japan and published in Nature Communications - https://www.nature.com/articles/s41467-025-64231-2 - does not mention BrainChip or Akida.




View attachment 93720

Here is an interesting six-month-old article about the research conducted by BRAIN-Net project coordinator Blaise Yvert, Inserm* Research Director and head of the Neurotechnologies and network dynamics team at Grenoble Institute of Neuroscience:

*INSERM (= Institut National de la Santé et de la Recherche Médicale) is the French National Institute of Health and Medical Research.



  • Blaise Yvert: Getting the Brain to Talk
  • PUBLISHED ON: 10/06/2025
  • READING TIME: 5 MIN
  • NEWS
Blaise Yvert has one goal – restore speech to people who are paralyzed and who have lost their vocal abilities. Along with his team at the Grenoble Institute of Neuroscience, he is developing a system capable of decoding the brain signals associated with speech, so that it can be produced by an external device. This is referred to as a brain-computer interface.

Blaise Yvert

Blaise Yvert is an Inserm Research Director and head of the Neurotechnologies and network dynamics team at Grenoble Institute of Neuroscience (unit 1216 Inserm/Grenoble-Alpes University) in Grenoble.

Will Yvert’s research restore speech to those who have lost it? This is what the Inserm Research Director leading the Neurotechnologies and Network Dynamics team at the Grenoble Institute of Neuroscience is hoping. For the past ten years, he has been working on the development of a brain-computer interface to decode the brain signals of speech and reproduce the words of people who are unable to utter them. His project was recently selected as part of the Impact Santé program, funded by the France 2030 investment plan and coordinated by Inserm. Brain Implant, the scientific consortium he has formed for this project, has received three million euros to develop a new brain implant that will improve the accuracy of speech reconstruction based on brain activity.

From engineer to researcher

This desire dates back to his engineering studies at École centrale de Lyon and Cornell University in the US. “I was drawn to research and wanted to develop health technologies, especially for people with disabilities. I knew several people with disabilities when I was younger and it’s a cause I hold dear,” explains Yvert.

Once he graduated in 1993, the young engineer was hired by an Inserm human electrophysiology research unit in Lyon. “The team sought to mathematically locate the brain regions responsible for the signals recorded on the surface of the head. This was something I was particularly interested in,” he recalls. During two postdocs, one in Finland and the other in Germany, the researcher used this approach to identify the auditory areas. But he realized that, even for very simple sounds, the cortex activation pattern is too complex to be finely understood with non-invasive recordings. “So I thought: let’s develop more sophisticated systems, for a more precise look at what happens in the neural networks.”

Towards a new technology

With this goal in mind, and after obtaining a research fellowship at Inserm, Yvert joined in 2003 a research unit in Bordeaux that focuses on neural networks in the developing spinal cord. There, he initiated a partnership with the French Alternative Energies and Atomic Energy Commission (CEA) in Grenoble and the ESIEE engineering school in Paris, which has academic laboratories, to develop microelectrode networks to enable detailed exploration of neural tissue activity in vitro. An initial prototype was finalized three years later. Through multiple collaborations, he continued to improve this technology, particularly with new materials to increase the performance of electrodes (platinum, diamond and, more recently, graphene).

Then, Yvert wanted to put his research to work for patients. With this project in mind, he spent a year at Brown University in the US, in a research unit that led the way in implantable brain-computer interfaces in humans. Back in France, he joined the Grenoble Institute of Neuroscience and began his project on decoding brain signals of speech. In particular, he collaborated with the Clinatec institute created by the CEA, “a unique environment for creating new rehabilitation strategies for people with paralysis,” he believes.

The interface to which Yvert devoted his work is aimed particularly at people with “locked-in syndrome” (LIS). Although they cannot move or speak due to complete paralysis, their cognitive faculties are intact. “The cortical activities produced when they want to say something are always present, so if we can decode them with our implants, we can reproduce what they want to say,” hopes the researcher. An initial clinical trial is expected to start in 2025, “if the regulatory procedures go well,” he warns. This trial will include people with LIS who will be equipped with an implant developed by Clinatec, positioned on the surface of the brain. “This device provides signals that are highly stable over the long term, with wireless transmission through the skin,” he explains.

Pursue and accelerate development

At the same time, the scientist does not forget the fundamental aspect, which has always been a source of motivation in his work. “For example, we’re exploring the brain activity of a new animal model that is very vocal – the pig. This model allows us to test new, more efficient types of implants for potential future use in humans. It will also be possible to see whether there are similarities between the data collected in animals and humans.”

In order to finely decode brain activity, he believes that the devices will still need to be improved, by increasing the number of electrodes, and by innovating in materials and integrated electronics. This is the goal of the Brain Implant project. “We want to create a technological building block that would serve both basic research and the development of brain-computer interfaces for clinical use in different indications: to restore speech or other motor functions,” he explains.

These developments and their challenges for people and society are inevitably accompanied by ethical questions around which Yvert has set up processes of reflection, conducted in collaboration with philosophers and patient organizations.

And as if all of this were not enough, the researcher has also led, since early 2025, the Grenoble Initiative in Medical Devices (LabEx GIMeD), a research partnership on medical devices. “The aim is to bring together multidisciplinary units that develop health technologies, including teams specialized in the humanities and social sciences, to reflect on the implications of these technologies. New projects are expected to emerge from this ecosystem,” he outlines for the future.

Looking back, Yvert notes that risk-taking during his career has been successful. “Going from non-invasive brain recording in humans to the technological development of in vitro systems took me out of my comfort zone. But in the end, this leap was essential in preparing for the development of an interface that, I hope, will one day be able to provide real services to patients,” he concluded.



Back in November, I shared my discovery about BCI research in France that had utilised Akida. The LinkedIn profile of a researcher named Ilyass Hammouamri, which I had stumbled across, revealed two completely different projects he had been involved in.

During his time at CNRS (Centre national de la recherche scientifique) CerCo (Centre de Recherche Cerveau et Cognition) in Toulouse, where he was a PhD candidate in Timothée Masquelier‘s NeuroAI lab from September 2021 to February 2025, Ilyass Hammouamri “worked on a joint project between different labs and BrainChip: Decoding speech from ECoG brain signals”.

A few weeks later, I found out more about this project, namely that BrainChip was one of the consortium partners of “BRAIN-Net: Spiking Neural Networks for Real-Time Processing of Brain Signals” (scheduled project duration: December 2020 - November 2024), which was coordinated by Blaise Yvert from the Brain Tech Lab at the Grenoble Institute of Neuroscience. Besides the above-mentioned CerCo, other consortium partners included INSERM and the IMS Laboratory.

The second Akida BCI research project Ilyass Hammouamri had been involved in took place while he was working for BrainChip partner Neurobus as a part-time research engineer between September 2024 and April 2025. According to his LinkedIn profile, he “developed a Proof of Concept solution for motor imagery classification from a Dry EEG Headset using a BrainChip Akida neuromorphic chip for robotic arm control”.


It doesn’t come as a surprise, then, that BrainChip has now teamed up with a company developing medical solutions based on advanced BCI technology.
We found out about this earlier this month, when @ChrisBRN spotted two new logos that had appeared on the BrainChip Partners webpage overnight - one of them is the logo of Korea-based BCI technology company Gbrain (https://www.gbrainlife.com/).


Earlier today, James Shields liked one of Gbrain’s latest LinkedIn posts:



FD39557F-4503-4AF4-AD53-56D520747041.jpeg



Although we haven’t yet heard anything official from either company, the fact that BrainChip lists Gbrain under OEM Integration Partners (“OEM integration partners produce board and box level product solutions based on Akida silicon implementations that are suitable for end markets.”) suggests to me that our company is interested in the “contracted clinical-grade electrode manufacturing” services Gbrain offers alongside developing its own products, such as Phin Array™, an ECoG cortical electrode, and Phin Stim™, a next-generation wireless cortical implant for neurostimulation, initially targeting patients with Parkinson’s disease. These two products are still in clinical trials, though, and Gbrain’s promotional material carries a disclaimer that Phin Array and Phin Stim are currently “intended for investigational use only and have not been cleared by the FDA for the treatment of neurological disorders”.


Here is Gbrain’s company profile on the MEDICA website, the International Trade Fair for Medical Technology and Healthcare:


F11F2A74-003B-4697-8EF7-6E7880435DDF.jpeg

12E5C73D-E492-46FC-B309-B2778C2BC4CC.jpeg


3D167AD8-A621-4F4E-A8D3-FFE9F9F09230.jpeg
2090CDF8-87FD-4E77-B201-2B639308B182.jpeg



And here is a photo Gbrain shared on LinkedIn that shows a poster at their CES 2026 booth:



C0D29F48-7DE2-44B6-9CAF-670ACE621559.jpeg



CES INNOVATION AWARDS®

Phin Stim™ for Parkinson’s Disease​



Gbrain
Phin Stim™ is a next-generation, fully implantable wireless neurostimulation system for treating Parkinson’s disease. It offers a safer, less invasive alternative when medications fail or patients are reluctant to undergo deep brain procedures. Unlike traditional deep brain stimulation (DBS), Phin Stim™ uses ultra-thin, flexible electrodes to stimulate the motor cortex through a minimally invasive surgical approach. The system is easy to install, remove, or replace, and provides precise surface stimulation. Phin Stim™ continuously monitors brain signals and delivers AI-powered adaptive stimulation to reduce tremors and slow movements in real time. It uses wireless power and data transmission for safe, daily use. Beyond symptom relief, it supports long-term neuroplasticity, helping the brain rewire itself to restore motor function. By combining bioelectronics, intelligent software, and digital therapeutics, Phin Stim™ delivers smarter, more responsive care—offering hope to patients seeking effective alternatives to traditional brain surgery.



Short CES 2026 interview with Gbrain Chief Device Officer Sung Q Lee:






On their website, Gbrain also provide some information on future projects:

- Next Generation Electrode Array: “Syringe-injectable surface multi-modal sensor array”

- Next Generation Wireless System: “Miniaturized wireless body-coupled communication”

- New Materials for Neural Implants:
“Graphene-coated electrode channels”



27959646-7A72-4521-B0A4-135D2780348E.jpeg
83D32FFE-930E-490D-AFC2-443E6C9D6772.jpeg






9FC56E25-8ED1-4B90-8990-0E448B2AEC71.jpeg



And if you’ve come this far, you might as well also read the following article:


At CES 2026, Gbrain's Phin Stim Signals a New Era for Implantable Brain Therapy​

I spoke with neuroscientists from an Incheon, Korea-based startup that's looking to minimize the symptoms of epilepsy or Parkinson's disease.

Headshot of Macy Meyer

Macy Meyer
Jan. 8, 2026 1:43 p.m. PT
3 min read

The Gbrain Phin Stim implant at their CES booth

Phin Stim is designed to help treat neurological conditions by gently stimulating the brain with precise electrical signals. Macy Meyer/CNET

CES has a unique rhythm. Fast footsteps on carpeted aisles. Neon slogans. Screens flashing promises about the future being smarter, faster, louder. Covering startups on the floor means learning to filter aggressively, to keep moving even when something looks interesting, because there's always another booth waiting.

And then, sometimes, something interrupts that rhythm.

In the middle of the noise, I found myself in the corner of the Las Vegas Convention Center at a booth for Gbrain, a Korean neurotechnology startup specializing in advanced brain-computer interface medical solutions and implantable brain-stimulation devices. No spectacle. No buzzwords shouted from a screen. Just precise hardware, clinical diagrams and conversations that felt unusually grounded for a show known for hype and an oversaturation of AI-nonsense.

It wasn't trying to be the future of everything. It was trying to fix something specific, and that's what made it stand out.

How the Phin Stim works on the human brain​

Phin Stim is designed to help treat neurological conditions by gently stimulating the brain with precise electrical signals.

The brain communicates through tiny electrical impulses. When those signals become irregular -- as they can in conditions like epilepsy or Parkinson's disease -- the results can be severe. Phin Stim works by monitoring brain activity and delivering targeted stimulation to help guide those signals back into healthier patterns.

Think of it less like controlling the brain and more like correcting interference on a signal line.

One of the key innovations is Gbrain's ultrathin, flexible electrodes, which sit on the surface of the brain rather than pressing into it like other brain implants. Because they're soft and adaptable, they conform to the brain's natural shape, improving signal quality while reducing irritation. It's the difference between wearing a rigid helmet and something that actually moves with you.

The long-term goal is a fully implantable system: something that can work continuously inside the body, monitoring brain activity and responding when intervention is needed, without bulky external hardware.
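
Purely to illustrate the closed-loop principle the article describes (monitor, detect, stimulate), here is a toy sketch. Beta-band power is a real biomarker studied in adaptive deep-brain-stimulation research for Parkinson's, but the sampling rate, window size, threshold and decision rule below are hypothetical and have nothing to do with Gbrain's actual algorithms:

```python
# Toy closed-loop sketch: monitor a brain signal, detect an irregularity,
# request corrective stimulation. Entirely hypothetical logic.
import numpy as np

FS = 1000             # sampling rate in Hz (hypothetical)
WINDOW = 256          # samples per analysis window (hypothetical)
BETA_BAND = (13, 30)  # beta-band power, a marker studied in Parkinson's

def band_power(x, lo, hi):
    """Mean power of signal x in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), 1 / FS)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def closed_loop_step(window, threshold=5.0):
    """If beta power exceeds the (hypothetical) threshold, stimulate."""
    power = band_power(window, *BETA_BAND)
    return {"stimulate": bool(power > threshold), "beta_power": float(power)}

print(closed_loop_step(np.random.randn(WINDOW)))
```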

The future of Gbrain's work and innovation​

I spoke with Euiyoung Kim, a manager at Gbrain, who holds degrees in neuroscience, about the future of Phin Stim and Gbrain's innovations.

Gbrain is showcasing two versions of its flagship system, Phin Stim, at CES. The first, which is undergoing clinical trials, and the second, a prototype, are currently under review by a regulatory body in Korea, according to Kim. The earlier model was a CES 2025 Innovation Awards Honoree, while the updated version earned the same recognition for CES 2026. The newer Phin Stim is smaller, cleaner and more integrated -- less like a prototype and more like a medical device inching toward real-world use.

The Gbrain Phin Stim versions 1 and 2 at Gbrain's booth at CES.


Gbrain is showcasing two versions of its flagship system Phin Stim at CES 2026.
Macy Meyer/CNET


"[The goal of the devices] is more towards minimizing the symptoms," Kim said. "It would be great if we could further get it to research where we discover the actual core causes of these diseases, but they currently focus more on making people's lives less hard, bringing everyday life back to patients."

What struck me most was how little Gbrain leaned into sci-fi narratives or overpromises. There were no grand claims about mind reading, enhancement or futuristic spectacle. This was neuroscience presented as medicine, not mythology. The focus was squarely on patients whose conditions don't respond well to medication alone and on giving clinicians more precise tools to help them. That restraint felt rare and refreshing on a show floor where ambition can outpace responsibility.

Rather than chasing attention, Gbrain seems focused on the unglamorous fundamentals: manufacturing standards, clinical validation, regulatory pathways and the intense work required to turn technology into treatment.

After hours of walking the CES floor, Gbrain was one of the booths I kept thinking about. In an industry obsessed with speed, Gbrain is moving at the pace medicine demands. And in a space crowded with promises about what technology might do someday, this was a reminder that some of the most meaningful innovation is focused on what technology can do now -- for people who actually need it.
 

Attachments

  • 83705740-702B-4A4C-9369-360301735E07.jpeg
    83705740-702B-4A4C-9369-360301735E07.jpeg
    465.7 KB · Views: 7
Reactions: 22 users

Frangipani

Top 20

EA193413-4809-44AF-8C15-B33C4C8BACF7.jpeg


D71B7C6F-6AC2-475E-8A4B-1CBE90431055.jpeg
CD8BF8DF-D024-4361-87E5-795C16A75141.jpeg
3CF799A8-B1BF-46C6-BEC4-9A54F43DA708.jpeg
EC65F4DB-908C-46CF-A9AD-96104A3A4D25.jpeg



“BrainChip demonstrated a radio frequency waveform classifier running on our AKD1500 device, when paired with a software defined radio, enabling deployment in drones, backpacks and remote battery powered locations.

We also demonstrated a micro-doppler radar using the AKD1500 that can be deployed in drones that can recognize the difference between birds and enemy drones.

The future of EW is intelligent, autonomous, and adaptive and we're building it.”
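
As an aside, for anyone wondering what an RF waveform classifier fed by a software-defined radio looks like in the most generic terms, here is a purely illustrative sketch: turn a window of IQ samples into a log-magnitude spectrogram and hand it to a classifier. The sample size, labels and placeholder decision rule are all hypothetical; the actual demo ran its model on the AKD1500, and no BrainChip or SDR API is reproduced here:

```python
# Illustrative only: the generic shape of an SDR-fed waveform classifier.
import numpy as np

def spectrogram(iq: np.ndarray, nfft: int = 256) -> np.ndarray:
    """Log-magnitude spectrogram of a window of complex IQ samples."""
    n = len(iq) // nfft
    frames = iq[: n * nfft].reshape(n, nfft) * np.hanning(nfft)
    spec = np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)
    return 20 * np.log10(np.abs(spec) + 1e-12)

def classify(spec: np.ndarray) -> str:
    # Stand-in for the on-chip model: a real system would feed the
    # spectrogram to a quantized network on the neuromorphic device.
    labels = ["noise", "fm", "chirp", "frequency_hopper"]
    return labels[int(spec.mean()) % len(labels)]  # placeholder rule

iq = (np.random.randn(65536) + 1j * np.random.randn(65536)).astype(np.complex64)
print(classify(spectrogram(iq)))
```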
 
Last edited:
Reactions: 22 users
Definitely worth a read and some interesting thoughts:


"BrainChip’s Akida processor runs vision AI on less than 1 watt. And we’re at the very beginning of this potential shift in architecture."

"Training today’s massive models still requires exactly the kind of brute-force linear algebraic computation that Nvidia dominates. The current generation of AI needs Nvidia chips the way the year 1900 needed horses."

"The scale of capital being committed to the existing GPU paradigm is staggering. Hyperscalers and governments are projected to spend over $2 trillion on AI infrastructure through 2030, with Microsoft alone committing $80 billion in fiscal 2025, Meta pledging up to $65 billion and Google announcing $75 billion in capital expenditure for the year. These data centers have useful lives of 15 to 20 years, but the AI hardware inside depreciates on a three- to five-year cycle. If neuromorphic or sparse architectures mature within the next decade, the economics will collapse. Companies will have spent trillions building infrastructure optimized for a paradigm that gets displaced before a return on the investment is realized."

About the author:

 
Reactions: 18 users

Frangipani

Top 20
Gregor Lenz, Florian Corgnou and Karl Vetter from BrainChip’s partner Neurobus were part of a team that came in first 🥇 at the European Defense Tech Hackathon, which took place in Paris over the weekend.
Their winning solution titled Automatic event-based detection and tracking of UAVs and Shahed drones in challenging lighting conditions “showcased the ground-breaking potential of neuromorphic event-based cameras (…) paving the way for smarter, faster and more efficient defense-systems”.

As you may have guessed from the mention of the Iranian-designed Shahed drones (which are also known by their Russian designation Geran-2), the 34 projects in total were far from destined for storage in an ivory tower of academia: European defense company Helsing AI was a key partner of that hackathon, which was also supported by the Ministry of Defence of Ukraine.

“The challenges were based on real-world problems gathered from our partners, who have delivered solutions to the frontline, from building underwater reconnaissance systems to the interception of Shahed drones and helicopters and swarm coordination in GPS-denied environments.”





View attachment 73720


View attachment 73721



View attachment 73723

One and a half years after Karl Vetter, previously with Uni Tübingen’s Cognitive Systems Lab (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-437994), joined BrainChip partner Neurobus, another neuromorphic researcher with first-hand experience of Akida (as well as other neuromorphic processors) has done likewise: Jules Lecomte, previously with fortiss, also a BrainChip partner.




View attachment 94385

[Screenshot taken about 10 hours ago]

Although for some weird reason, to this day fortiss still does not show up on the BrainChip Partners website.


View attachment 94390



View attachment 94386 View attachment 94387


In November, Jules Lecomte took part in the European Defense Tech Hackathon in Paris, where their team - which came in 3rd place - was provided with hardware and guidance by Neurobus, who were also sponsors of the whole event:


View attachment 94388

One teammate, Shannah Santucci, had already won 1st place in the December 2024 European Defense Tech Hackathon with - among others - Florian Corgnou, Gregor Lenz and Karl Vetter, all from Neurobus (Gregor Lenz has since left the Paris and Toulouse-based startup and joined Paddington Robotics in London).

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-442810





The inaugural European Defense Tech Hackathon took place in June 2024 in Munich, and since then many more have followed, supported by the Ukrainian Government and their Brave1 defence accelerator platform, which was launched in April 2023 in reaction to Russia’s invasion of Ukraine the previous year.

It is therefore not surprising that some of the winning entries’ prototypes have been, or are being, tested by Ukraine. Among them is the drone solution developed by the above-mentioned team around Neurobus in December 2024:


Neurobus – At the Paris hackathon, they integrated their neuromorphic chip with a camera to detect Shahed drones in low-light conditions—and won first place. They’ve since started working with BRAVE1 to test their prototype under real-world conditions.”

Neurobus CEO Florian Corgnou (at least I strongly assume it is him) even registered a second LinkedIn account in Kyiv sometime before September last year:




The above paragraph on Eurodefense.tech is not entirely accurate, though, as it sounds as if Neurobus had developed their own neuromorphic chip, and the author also failed to mention that the camera was an event-based one, a neuromorphic sensor, rather than a regular frame-based camera.


What Neurobus does is explained very well in this April 2025 article I shared last year:



NEUROBUS, A SOLUTION THAT COMBINES ENERGY EFFICIENCY AND INTELLIGENCE​

By observing technological advances at Tesla and SpaceX, and then participating in the Airbus Defense and Space program, Florian established the groundwork for Neurobus. Immersion with engineers and space experts allowed him to pinpoint market trends and unmet needs, needs Neurobus was determined to address.

So, what does Neurobus offer? It's an embedded, frugal Artificial Intelligence – specifically, an AI engineered for minimal energy consumption and direct integration into host systems like drones and satellites. Data processing occurs locally, eliminating the costly energy expenditure of transferring data to data centers.



Neurobus HEC paris


Neurobus's initial focus was the space sector, a field inherently linked to defense, with partners like Airbus Defense and Space, the European Space Agency, and the French Space Agency. However, the company adroitly adapted its promising technology to the drone sector, a rapidly expanding market with more immediate demands. Winning a European defense innovation competition further validated the potential of their solution for drone detection.

The core of Neurobus's innovation lies in its biologically-inspired approach: the neuromorphic system. This disruptive technology draws inspiration from the human brain and retina to create processors and sensors that are remarkably energy-efficient.
For Florian, the human brain serves as an unparalleled source of inspiration:

"The brain is one of the best computers that exists today because it delivers immense computing power with extremely low energy consumption."

DRONES: A TESTED AND VALIDATED FIELD OF APPLICATION​

Neurobus sidesteps the capital-intensive manufacturing of components like processors and sensors. Instead, its value proposition lies in assembling these components and developing tailored software layers to meet specific manufacturer needs. This positions the startup as both an integrator and a software publisher, streamlining the adoption of this cutting-edge technology.

As Florian Corgnou explains, "Neurobus operates precisely between the manufacturer and the industrialist. We don't create the hardware, but we assemble it into a product that specifically addresses our customers' requirements and develop software layers that cater to the unique applications of that industrialist."


Neurobus HEC paris


Though Space remains a core sector for Neurobus, its technology's practical application in the drone sector unlocks compelling possibilities for autonomy. Drones equipped with Neurobus's frugal AI can execute missions more independently, making real-time decisions with minimal human oversight. While human validation remains crucial for strategic actions, tasks like area surveillance can be managed autonomously.

For instance, a drone could autonomously evade an oncoming object at high speed. However, directing itself toward a target would require prior human authorization.

Although the present application is primarily focused on defense, driven by the current geopolitical climate and pressing demands, Neurobus also foresees a future in the civilian domain, particularly in applications like autonomous drone delivery services.





As far as I’m aware, we do not know for sure whether this particular drone solution that won 1st place at the December 2024 European Defense Tech Hackathon in Paris used Akida as its neuromorphic processor, although it seems reasonable to assume so, given that Neurobus was partnered with Prophesee, BrainChip and Intel at the time (with Loihi not yet commercially available) and was already working with BrainChip and other partners on the NEURAVIS project.

Maybe we’ll find out more in March, when “Brave1, Ukraine’s defense innovation cluster, will conduct a US Roadshow — a two-week investment tour across several American cities aimed at presenting Ukrainian drone technologies and defense tech solutions to U.S. venture capital funds, corporations, family offices, and policymakers (…) According to Brave1, Ukraine’s defense tech ecosystem now includes thousands of companies and solutions, with up to 95% of battlefield engagements relying on domestically developed technologies.” (https://digitalstate.gov.ua/news/te...inian-drones-and-defense-tech-to-us-investors)
Well, we’ll see - maybe the Neurobus solution will also be presented, as a promising technology developed outside Ukraine?

Meanwhile Neurobus appear to have widened their collaboration with companies offering neuromorphic processors to also include IBM and SynSense - cf. this photo of a Neurobus presentation slide taken at the Future of Computing Conference in Paris, which took place on 6 November:


43d2c3d8-f1bb-4eae-b8e8-780327feb18c-jpeg.94394




Last but not least, here is a two-month-old interview with Florian Corgnou about Neurobus that I believe hasn’t been shared here before:



View attachment 94397 View attachment 94398


P.S.: Alf Kuchenbuch loves today’s post by Jules Lecomte and commented: “Wow, good choice, Jules!!”

View attachment 94401

Our partner Neurobus has been selected as one of 12 teams for the SPRIND Funke ANTI-DRONE RESPONSE 2.0 - what started as a concept and prototype at the Paris European Defense Tech Hackathon in December 2024 has evolved into a refined anti-drone response solution called Neurocatch!



6D33A20A-9A71-4E20-AE7F-5ABBE3B5E4B9.jpeg




SPRIN-D (Bundesagentur für Sprunginnovationen) is Germany’s “Federal Agency for Disruptive Innovation”

7957C49C-112D-4B4F-ADA8-DE32A62376A6.jpeg


905CDAA8-8745-43FB-B00D-D6909AF9FFBC.jpeg



And Vinnova is Sweden’s Innovation Agency





C31338B7-AFB0-47F7-B353-6FC00E3C74FC.jpeg
77BD41BB-B9DF-4AEC-8C1D-B9D0DCEC600D.jpeg
2B7DB009-7908-4F46-8AC3-CEE191D84145.jpeg
 
Last edited:
Reactions: 11 users

Bravo

Meow Meow 🐾
Hi Bravo, not long ago we established that MegaChips has been demonstrating robots containing AKIDA in its showroom since September 2025.
MegaChips is producing robots in response to Japan's projected labor shortages.
If you do not believe robotics and AI are the future, then there is not much point holding BRN.
Sean has said many times that the AI revolution will make previous revolutions, e.g. the Industrial Revolution, look like nothing.
Investing in the future is by its very nature speculation, and that is what Brainchip has been from the very start.
Your post seems like it's come from a 'defeated' holder, IMO.
Who would have believed years ago that airborne or sea drones would be effective in warfare?


Hi Manny,

I think you’ve missed the point I was making. What I objected to was the language in your post presenting something as a fact when it simply isn’t at this point in time.

You wrote: "The fact is we are heading towards AI taking jobs and we own a share (BRN) that will be providing the robotics that will play a part in the transition."

I'm sorry, but that isn’t a fact - it’s an assumption at this point in time. None of us know yet whether BrainChip will ultimately play a meaningful role in robotics or the broader AI transition.

I will be the first person to say that this is what I hope and pray for (when I'm wearing my prayer helmet 😝) and that I am confident, with the right set of circumstances taking place, that we can get there. As you know I’ve been incredibly supportive of the company for years and have posted extensively in favour of the technology.

But the reality today is that BrainChip still has several challenges to meet before what you’re describing becomes a fact. I don't think I need to remind anyone that we are currently sitting around 13 cents, the company still has very limited revenue, and the market is waiting to see genuine commercial-scale adoption.

There are also some near-term realities investors have to consider. Index rebalancing can put pressure on the share price, and the ongoing LDA facility means additional shares may come into the market. If meaningful revenue doesn’t arrive in the next year or so, the company may also need to consider raising capital again, which would dilute existing shareholders even further. The biggest risk IMO is commercial adoption and revenue and if those don’t materialise within a reasonable timeframe, the business will struggle.

None of that means the technology won’t ultimately succeed, but it does mean the path from promising technology to large-scale commercial success still has hurdles to overcome. I don't think that pointing that out makes me a “defeated holder” and I don't think it means I should consider selling my shares. It just means I'm being realistic about where the company currently sits.

You can absolutely believe in the long-term potential of AI and robotics (as I do) while still acknowledging that holding BRN shares today does not automatically mean we “own part of the robotics future”. My point was that that outcome still needs to be proven through real commercial traction.

As for MegaChips, we still don’t have any confirmation that they’ve renewed their licence or that there are any meaningful production volumes tied to their demonstrations.

So my comment wasn’t meant to be read as negativity. I wasn't trying to be a Debbie Downer, I was simply pushing back on language that suggests something speculative is a certainty. As I've said numerous times, supporting a company and being realistic about its current position are not mutually exclusive.
 
Reactions: 14 users

Mt09

Regular

1773443605021.png


And in this podcast from a couple of years ago, Joe Guerci talks about Akida and EW on an Association of Old Crows podcast - it makes for good listening.

 
Reactions: 15 users

Mt09

Regular

View attachment 96177

And in this podcast from a couple of years ago, Joe Guerci talks about Akida and EW on an Association of Old Crows podcast - it makes for good listening.
On another note, looks like Joe (ISL) has passed away.

Source: islinc.com
https://share.google/HcS6XelFN6Hml7odq
 
Reactions: 5 users

Frangipani

Top 20

View attachment 96159
View attachment 96161

“For the moment, we have used their PCIe solution, named the Akida™ PCIe Board:”
“For the moment, we are using the PCIe version of the chip.”

Sounds as if they were already planning on using Akida hardware different from the AKD1000 PCIe Board at some point…




The THINK project

An IN2P3 cross-disciplinary R&T project

Brainchip​

Brainchip has recently begun selling neural processors. They contain 1.2 million neurons and 10 billion synapses, distributed across 80 Neural Processing Units (NPUs).

For the moment, we have used their PCIe solution, named the Akida™ PCIe Board:
PCIe-1.png

The chip visible on the Akida™ PCIe Board is an AKD1000:

AKD1000.png

This technology aims to replace the use of the cloud in certain domains. Processing data locally, in hardware, as close as possible to the sensor, offers several advantages:
  • Energy savings: the Akida chip processes discrete-event neural networks (Spiking Neural Networks), which allows it to be energy efficient.
  • Increased security: processing sensitive data locally reduces the risk of it being intercepted by a third party.
  • Greater efficiency: not having to exchange data between the sensor and the cloud reduces processing latency.
Several networks can be programmed on the chip simultaneously.

Operating principle​

For the moment, we are using the PCIe version of the chip. All interaction with it happens in Python, through dedicated libraries that Brainchip makes available to us. The workflow is heavily inspired by TensorFlow Keras:

framework_metatf-1024x484.png
Akida MetaTF ML Framework

Development proceeds in several stages; the first step is to convert a conventional network into a network compatible with the board:


CNN2SNN_Flow.png
Conversion workflow

Once the model is in the right format, inference can be run on the data.
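
To make that workflow concrete, here is a minimal sketch of the quantize → convert → run steps, assuming the legacy MetaTF (cnn2snn) API, whose weight_quantization and activ_quantization arguments correspond to the weight_q and activ_q parameters swept in the results below. The toy Keras model and random input are placeholders, not code from the THINK project:

```python
# Minimal sketch of the MetaTF flow: quantize -> convert -> map -> infer.
# Assumes the legacy cnn2snn API; the model below is a placeholder.
import numpy as np
import tensorflow as tf
import akida
from cnn2snn import quantize, convert

# 1. An ordinary float Keras CNN (placeholder architecture).
keras_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu",
                           input_shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# 2. Quantize weights/activations to the bit widths under test
#    (weight_q and activ_q in the tables below).
quantized = quantize(keras_model,
                     input_weight_quantization=8,
                     weight_quantization=4,
                     activ_quantization=4)

# 3. Convert the quantized Keras model to an Akida-compatible model.
akida_model = convert(quantized)

# 4. Map onto the AKD1000 on the PCIe board, if one is present, and infer.
if akida.devices():
    akida_model.map(akida.devices()[0])
x = np.random.randint(0, 256, (1, 32, 32, 3), dtype=np.uint8)
outputs = akida_model.predict(x)
```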

Performance on the “challenge” networks​

Data size is an extremely important parameter when working with the Akida chip.
The results are given as a function of the bit width of the weights (weight_q) and of the bit width at the output of the activation layers (activ_q).

Accuracy​

precision_config_wordpress.png

Data processed per second (FPS)​

fps_config_wordpress.png

Energy consumption​

conso_config_wordpress.png

Conclusion​

Advantages:
  • Enormous resources: 1.2 million neurons and 10 billion synapses
  • Intuitive to use for anyone who knows Keras, with thorough documentation: https://doc.brainchipinc.com/.
  • Low energy consumption.
  • An evolving technology: Brainchip is currently working on Akida 2.0, which should support more types of networks.
Drawbacks:
  • NPU allocation: a single NPU cannot be assigned to execute more than one layer. Many neurons and synapses therefore sit unused most of the time, even in a design that uses all 80 NPUs.
  • Low input bandwidth: PCIe Gen2 x1: 4 Gb/s.
  • Very limited input data widths: 8 bits with a convolutional input layer, 4 bits otherwise.
  • Few layer types supported for the moment.
  • An architecture designed for image processing.
 
Reactions: 10 users

Bravo

Meow Meow 🐾

View attachment 96177

And in this podcast from a couple of years ago, Joe Guerci talks about Akida and EW on an Association of Old Crows podcast - it makes for good listening.


Hi @Mt09,

The wording “drones, backpacks and remote battery powered locations” is quite telling.

The post was "liked" by Tom M from Raytheon.


Screenshot 2026-03-14 at 11.48.49 am.png


Screenshot 2026-03-14 at 11.48.22 am.png
 
Reactions: 14 users

Frangipani

Top 20

View attachment 96164

View attachment 96166 View attachment 96167 View attachment 96168 View attachment 96169
“BrainChip demonstrated a radio frequency waveform classifier running on our AKD1500 device, when paired with a software defined radio, enabling deployment in drones, backpacks and remote battery powered locations.


We also demonstrated a micro-doppler radar using the AKD1500 that can be deployed in drones that can recognize the difference between birds and enemy drones.

The future of EW is intelligent, autonomous, and adaptive and we're building it.”

The two demos BrainChip presented at the AOC Collaborative EW 2026 Summit must be the two mobile radar solutions Sean Hehir was hinting at in his October 2025 Stocks Down Under interview with Marc Kennis.

Listen from 6:48 min:





“We’re working with some very interesting [defense] clients on mobile radar solutions, and you can expect us to push harder into that market. So there’s some really good examples. We have two solutions right there.”

The micro-Doppler radar of course relates to the recently completed AFRL SBIR contract award, for which Raytheon had been hired by BRN as a subcontractor.
 
Last edited:
Reactions: 12 users

Bravo

Meow Meow 🐾
Re the US Golden Dome program: Raytheon announced a new strategic collaboration with Amazon Web Services (AWS) in December 2025 to significantly advance satellite data processing and mission control operations.

As we all know, Raytheon has partnered with BrainChip to help the US Air Force develop neuromorphic AI radar signal processing. I imagine this could be useful in future missile defence, satellite defence or drone detection systems, but, to be clear, there is no confirmed link as yet to the Golden Dome program.

I asked ChatGPT whether drones could theoretically play a role in a Golden Dome–type architecture - acting as edge AI processors for sensors inside a system built by Raytheon and feeding data into AWS command networks - and here's what it had to say.









ChatGPT's Response

Yes — drones are actually one of the most plausible platforms where something like BrainChip Holdings’s Akida processor could fit into a layered defence architecture involving primes like RTX Corporation (Raytheon) and cloud systems from Amazon Web Services.

But it helps to think of the system as three layers: the drone (edge), the defence platform, and the cloud/network.

1. The drone layer (edge AI)​

Drones often carry sensors such as:
  • small radars
  • RF receivers
  • cameras
  • acoustic sensors
These sensors generate large amounts of data. Sending all of that data back to a command centre can be inefficient or impossible in contested environments.

A low-power AI chip on the drone could:
  • classify radar or RF signals
  • detect enemy drones
  • identify missile launches or radar emitters
  • distinguish birds from drones (like the micro-doppler example)
Because drones have limited battery capacity, power-efficient AI hardware is very valuable here.

That’s the niche neuromorphic chips aim for.



2. The defence platform layer (Raytheon-type systems)​

Companies like RTX build systems such as:
  • radar networks
  • missile defence platforms
  • electronic warfare systems
  • drone defence systems
If a drone detects something important (for example an incoming missile or hostile drone swarm), it could transmit that detection back to a larger defence system.

That defence system might then:
  • track the threat
  • coordinate interceptors
  • jam enemy communications
So the drone becomes part of a distributed sensing network.



3. The cloud / command layer (AWS-type infrastructure)​


Large defence architectures increasingly use cloud-style infrastructure for:
  • aggregating sensor data
  • coordinating responses
  • AI analysis across large datasets
  • battlefield command systems
AWS and similar providers offer secure infrastructure for defence networks.

In this architecture:
  • drones collect and filter data locally
  • key information is transmitted to command networks
  • the cloud aggregates information from many sensors


4. What this architecture might look like​

Drone sensor → Edge AI chip (Akida-type) → Threat classification → Secure transmission → Raytheon defence platform → AWS cloud / command network

This type of architecture is sometimes called distributed sensing or a sensor fusion network.
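
To make the data flow in step 4 tangible, here is a toy sketch of the edge-filtering idea: classify on board and forward only high-confidence threats. Every name, label and threshold is hypothetical; this is not a real BrainChip, RTX or AWS interface, just the shape of the logic:

```python
# Toy illustration of "filter at the edge, forward only what matters".
# All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "bird", "uav", "emitter"
    confidence: float
    position: tuple     # (lat, lon, alt) placeholder

def classify_on_edge(sensor_frame) -> Detection:
    # Stand-in for an onboard low-power classifier (Akida-type chip);
    # a real system would run a quantized model here, on-device.
    return Detection(label="uav", confidence=0.93, position=(0.0, 0.0, 120.0))

def uplink(event: dict):
    # Stand-in for secure transmission to the command network.
    print("uplink:", event)

THREAT_LABELS = {"uav", "emitter"}
CONFIDENCE_FLOOR = 0.8  # hypothetical threshold

def process(sensor_frame):
    det = classify_on_edge(sensor_frame)
    # Everything below the bar is dropped on board, saving bandwidth
    # and battery; only high-confidence threats go upstream.
    if det.label in THREAT_LABELS and det.confidence >= CONFIDENCE_FLOOR:
        uplink({"label": det.label, "confidence": det.confidence,
                "position": det.position})

process(sensor_frame=None)  # demo call with a placeholder frame
```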



5. Why drones are attractive for this​

Drones are increasingly used as mobile sensor platforms.

They can:
  • patrol large areas
  • monitor airspace
  • track electronic signals
  • detect threats early
Instead of relying only on fixed radar stations, militaries deploy many smaller sensing nodes.

Each node might need local AI processing.



6. The important reality check​

Even though this architecture is technically logical:
  • there is no public evidence that BrainChip is currently involved in such systems
  • defence primes typically integrate many candidate technologies during evaluation phases
  • the path from demonstration to deployment can take many years
So this remains a potential application rather than a confirmed program.



✅ Bottom line

Yes — drones could theoretically use an ultra-low-power AI chip like Akida to perform onboard signal classification, threat detection, or sensor fusion, and then pass that information into larger defence systems run by companies like RTX and cloud infrastructure from AWS.

But at present this is a plausible architecture, not a confirmed deployment.
 
Reactions: 11 users