BRN Discussion Ongoing

7für7

Top 20
Toward the end, the rats always give it their best… it’s unbelievable that this isn’t being dealt with.
That BrainChip itself doesn’t take action by submitting a notice to the ASX… it’s obvious that something isn’t right here.
 
  • Like
  • Thinking
Reactions: 5 users

Tothemoon24

Top 20
BOSSA looks like one to keep a watch on in the hearing aid space
IMG_1676.jpeg

IMG_1677.jpeg
In a busy room full of talking people, most of us can still pick out one voice to focus on. This common yet complex task—known as the “cocktail party effect”—relies on the brain’s incredible ability to sort through sound. But for people with hearing loss, filtering out background noise can feel impossible. Even the most advanced hearing aids often struggle in these noisy environments.

Now, researchers at Boston University may have found a new way to help. They’ve developed a brain-inspired algorithm that allows hearing aids to better isolate individual voices in a crowd. When tested, this method boosted speech recognition by an impressive 40 percentage points, far outperforming current technologies.


A New Approach to an Old Problem​

In crowded social settings like dinner parties or workplace meetings, conversations often overlap. For those with hearing loss, these situations can be frustrating. Even with hearing aids, voices blur together in a mess of sound. This makes it hard to follow conversations, stay engaged, or even participate at all.

Virginia Best, a speech and hearing researcher at BU, says this is the number one complaint among those with hearing loss. “These environments are very common in daily life,” Best explains, “and they tend to be really important to people.”
Traditional hearing aids often include tools like directional microphones—also called beamformers—that try to focus on sounds coming from one direction. But these tools have limitations. In complex environments with many voices, beamforming often fails. In fact, in tests conducted by the BU team, the standard industry algorithm didn’t help much—and sometimes made things worse.
That’s where the new technology, known as BOSSA, comes in. BOSSA stands for Biologically Oriented Sound Segregation Algorithm. It was developed by Kamal Sen, a biomedical engineering professor at BU’s College of Engineering. “We were extremely surprised and excited by the magnitude of the improvement in performance,” says Sen. “It’s pretty rare to find such big improvements.”

Built on Brain Science​

Sen has spent two decades exploring how the brain decodes sound. His work focuses on how sound signals travel from the ears to the brain and how certain neurons help identify or suppress sounds. One key finding? The brain uses “inhibitory neurons” to cancel out background noise and enhance the sounds we want to hear.
All subjects average word recognition scores. (CREDIT: Kamal Sen, et al.)
“You can think of it as a form of internal noise cancellation,” Sen says. Different neurons are tuned to respond to different directions and pitches. This lets your brain focus attention on one sound source while ignoring others.
BOSSA was built to mimic this process. The algorithm uses spatial cues—like how loud a sound is and how quickly it arrives in each ear—to pinpoint its location. It then filters sounds based on these cues, separating them like your brain would. “It’s basically a computational model that mimics what the brain does,” Sen says.
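As a rough illustration of that idea (a toy sketch only, not the BOSSA implementation), a binary mask built from interaural level differences over left/right spectrograms might look like this — the function name, tolerance, and test signal are all illustrative assumptions:

```python
import numpy as np

def spatial_mask(left_spec, right_spec, target_ild_db=0.0, tol_db=3.0):
    """Toy binary mask from interaural level differences (ILD).

    Keeps time-frequency bins whose left/right level ratio matches the
    target direction; everything else is zeroed, crudely mimicking the
    "internal noise cancellation" described above.
    """
    eps = 1e-12
    ild = 20.0 * np.log10((np.abs(left_spec) + eps) / (np.abs(right_spec) + eps))
    return (np.abs(ild - target_ild_db) <= tol_db).astype(float)

# Front source arrives equally in both ears; a side source is ~6 dB hotter left.
rng = np.random.default_rng(0)
front = rng.rayleigh(1.0, (64, 10))
side = rng.rayleigh(1.0, (64, 10))
left, right = front + 2.0 * side, front + side
mask = spatial_mask(left, right)   # 1 where the front source dominates
target_estimate = mask * left      # suppress the off-axis talker
```

A real system would combine level cues with timing cues and smooth the mask over time, but the select-by-direction principle is the same.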

Testing BOSSA in Real-Life Situations​

To find out if BOSSA really works, the BU team tested it in the lab. They recruited young adults with sensorineural hearing loss, the most common form, often caused by genetics or childhood illness. Participants wore headphones and listened to simulated conversations, with voices coming from different directions. They were asked to focus on one speaker while the algorithm worked in the background.
Each person completed the task under three different conditions: no algorithm, the standard beamforming algorithm used in current hearing aids, and BOSSA. The results were striking. BOSSA delivered a major improvement in speech recognition. The standard algorithm showed little or no improvement—and in some cases, performance dropped.
Speech reception thresholds (SRT) are shown as boxplots for each processing condition. (CREDIT: Kamal Sen, et al.)
Alexander Boyd, a BU PhD candidate in biomedical engineering, helped collect and analyze the data. He was also the lead author of the study, which was published in Communications Engineering, part of the Nature Portfolio.
Best, who formerly worked at Australia’s National Acoustic Laboratories, helped design the study. She says testing new technologies like BOSSA with real people is essential. “Ultimately, the only way to know if a benefit will translate to the listener is via behavioral studies,” Best says. “That requires scientists and clinicians who understand the target population.”

Big Potential for Hearing Technology​

An estimated 50 million Americans live with hearing loss, and the World Health Organization predicts that by 2050, nearly 2.5 billion people worldwide will be affected. That makes the need for better hearing solutions urgent.
Sen has patented BOSSA and hopes to partner with companies that want to bring it to market. He believes that major tech players entering the hearing aid space—like Apple with its AirPod Pro 2, which includes hearing aid features—will drive innovation forward. “If hearing aid companies don’t start innovating fast, they’re going to get wiped out,” says Sen. “Apple and other start-ups are entering the market.”
Individual participant audiograms. The different curves show pure-tone thresholds for each of the eight participants (averaged over left and right ears). Unique symbols distinguish individual subjects. (CREDIT: Kamal Sen, et al.)
And the timing couldn’t be better. As hearing technology becomes more widely available and advanced, tools like BOSSA could help millions of people reconnect with the world around them. From social events to everyday conversations, better sound separation can mean a better life.

Beyond Hearing Loss: A Wider Application​

BOSSA was built to help those with hearing difficulties, but its potential doesn’t end there. The way the brain focuses on sound—what researchers call “selective attention”—matters in many conditions. “The [neural] circuits we are studying are much more general purpose and much more fundamental,” Sen says. “It ultimately has to do with attention, where you want to focus.”
That’s why the team is now exploring how the same science could help people with ADHD or autism. These groups also struggle with multiple competing inputs—whether sounds, visuals, or tasks—and may benefit from tools that help guide attention.
They’re also testing a new version of BOSSA that adds eye-tracking. By following where someone looks, the device could better figure out who they’re trying to listen to. This could make the technology even more effective in fast-paced, real-world settings.


Sharpening Sound, Changing Lives​

The success of BOSSA offers real hope. It’s not just another upgrade in hearing tech—it’s a shift in how we approach sound processing. Instead of trying to boost all sound or block background noise blindly, it takes cues from biology, using the brain’s blueprint to help listeners find meaning in the noise.
For many with hearing loss, this could change everything. Being able to join conversations, pick out voices, and stay connected socially are vital parts of daily life. With tools like BOSSA, those goals move a little closer. And as this technology continues to grow, its reach may extend beyond hearing loss, offering help with focus and attention challenges too.
What started as a solution for a noisy dinner party could one day reshape how we interact with the world.

Here, we present a system employing a novel strategy for stimulus reconstruction from neural spikes. Conceptually, this strategy uses time-frequency masking by computing a spike-based mask (spike-mask). We first consider the strategy for one-dimensional stimulus (e.g. sound waves). We show how this stimulus reconstruction method can be applied, using the cortical model as an example. We also show that this strategy produces reconstructions with intelligibility and quality higher than those reconstructed from the linear filtering method (table 1). Then we discuss how this strategy may be generalized for multi-dimensional stimulus (e.g. images and videos). The strategy presented here may be generalized to perform reconstruction on both artificial SNNs and neural models from experimental data as long as they satisfy the assumptions for our model.
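A minimal sketch of the spike-mask idea in that abstract, assuming a spike-count grid already aligned with the mixture's time-frequency bins (the function names and threshold are illustrative, not the paper's code):

```python
import numpy as np

def spike_mask(spike_counts, threshold=1):
    """Binary time-frequency mask from a grid of spike counts.

    spike_counts: (freq_bins x time_bins) spikes from frequency-tuned
    model neurons; a bin survives if the target-tuned population fired
    at least `threshold` times there.
    """
    return (spike_counts >= threshold).astype(float)

def reconstruct(mixture_spec, spike_counts, threshold=1):
    """Apply the spike-mask to the mixture's magnitude, keeping its phase."""
    mask = spike_mask(spike_counts, threshold)
    return mask * np.abs(mixture_spec) * np.exp(1j * np.angle(mixture_spec))
```

The masked spectrogram would then be inverted (e.g. by overlap-add ISTFT) to recover a waveform; the paper compares this route against linear-filtering reconstruction.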
 

Attachments

  • IMG_1675.jpeg
    IMG_1675.jpeg
    383.2 KB · Views: 31
  • Like
  • Love
  • Fire
Reactions: 23 users

Iseki

Regular
Hi JD,

For what it's worth, I asked ChatGPT for a neutral take on this.

What I thought was interesting, was that it weaved in Anduril's headset into the response. I didn't prompt it to, but earlier in the day, in a separate chat, I had queried ChatGPT on boards or integration possibilities and whether an Akida 1500 + Arm's Cortex M85 module (as an example) could power something like Anduril’s helmet.




ChatGPT's Response


What “supports 128 MACs per node” most likely means​

  • Each Akida node has a local 4×4 MAC micro-engine (128 MACs) it can invoke for certain ops.
  • Those MACs co-exist with the spiking compute (CNP/FNP or equivalent). They’re there so models can include non-spiking layers/ops without leaving the chip.

Why add MACs if you have an SNN?​

Because many modern edge models are hybrid. MACs cover things SNNs don’t excel at (or that are simpler to drop in as-is):
  • Conv/pointwise (1×1) / depthwise conv blocks
  • Linear/projection layers (e.g., classifier heads, QKV in tiny attention)
  • Normalisation / pooling / residual glue
  • Pre/post-processing (e.g., short FIRs, feature projections)
  • Direct reuse of existing INT4/INT8 CNN chunks without full SNN conversion
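To make the "MACs cover conventional layers" point concrete, here is a hedged sketch of a 1×1 (pointwise) convolution expressed as plain integer multiply-accumulates — exactly the kind of op a small per-node MAC array could serve. The function and its scaling scheme are illustrative assumptions, not BrainChip's API:

```python
import numpy as np

def pointwise_conv_int8(x, w, scale=1.0):
    """1x1 ("pointwise") convolution as plain multiply-accumulates.

    x: (H, W, C_in) int8 activations; w: (C_in, C_out) int8 weights.
    Each output pixel/channel is just a dot product, accumulated in
    int32 and requantized back to int8 -- no spike conversion needed.
    """
    acc = x.astype(np.int32) @ w.astype(np.int32)
    return np.clip(np.round(acc * scale), -128, 127).astype(np.int8)

x = np.ones((4, 4, 16), dtype=np.int8)   # toy activation map
w = np.ones((16, 8), dtype=np.int8)      # toy weight matrix
y = pointwise_conv_int8(x, w)            # each output value = 16
```

Linear/projection layers reduce to the same dot-product pattern, which is why they port to a MAC engine as-is.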

What a hybrid pipeline might look like in a helmet​

  1. Sensors
    • Front/rear frame cameras → light MAC preproc (resize/1×1 conv).
    • Event camera/radar/audio → feed SNN/TENNs directly.
  2. Perception
    • SNN handles temporal/gesture/track (event streams).
    • MAC handles small CNN blocks or projection heads.
  3. Fusion & decisions on the host MCU (e.g., Cortex-M85).

Bottom line​

  • The “128 MACs per node” doesn’t signal the end of Akida’s neuromorphic core. It signals a pragmatic hybrid: keep the spiking engine for temporal, sparse, event-driven strengths, and use local MACs to natively run conventional layers (conv/linear/norm) and reuse existing model chunks.
  • That combo generally improves accuracy, eases porting, and lowers total latency/power versus forcing everything into SNN or everything into classic CNN on a separate chip.
"
I asked ChatGPT for a neutral take on this.

What I thought was interesting, was that it weaved in Anduril's headset into the response."

That's how ChatGPT works. It uses all your previous conversations with it to personalize your answer.
 
  • Like
Reactions: 2 users

Tothemoon24

Top 20
IMG_1679.jpeg



We are pleased to welcome Dr. Sasskia Brüers Freyssinet, AI Research Scientist at BrainChip , for the next lecture in our Current Topics in Digital Neuroscience series:

🎤 From Neuroscience to Applied Artificial Intelligence

In this talk, Dr. Brüers-Freyssinet will explore the path from neuroscience research to applied AI, showing how insights from brain science can inspire more efficient and adaptive artificial systems. Drawing from her own experience transitioning from academia to industry, she will discuss the technical challenges, emerging research directions, and interdisciplinary skills needed to bridge fundamental brain research with real-world AI applications.

🧠 About the speaker
Dr. Brüers-Freyssinet is an AI Research Scientist at Brainchip, where she develops efficient computer vision models for neuromorphic hardware using techniques such as pruning, quantization, and regularization. She completed her PhD in Neuroscience at Université Paul Sabatier Toulouse III in 2017, studying the oscillatory correlates of conscious perception. Her current work investigates state-space models for temporal problems like gesture recognition and EEG-based behavior prediction—advancing the frontier where neuroscience meets artificial intelligence.

📅 Tuesday 28th of October 2025
📍 Per21, Université de Fribourg - Universität Freiburg Pérolles Campus, Room E140.
Follow the talk online: https://lnkd.in/eQPKxAUX

Join us for a fascinating discussion on how brain-inspired approaches are shaping the next generation of intelligent, energy-efficient AI systems.
IMG_1678.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 16 users

CHIPS

Regular
View attachment 92437


We are pleased to welcome Dr. Sasskia Brüers Freyssinet, AI Research Scientist at BrainChip , for the next lecture in our Current Topics in Digital Neuroscience series:

🎤 From Neuroscience to Applied Artificial Intelligence

In this talk, Dr. Brüers-Freyssinet will explore the path from neuroscience research to applied AI, showing how insights from brain science can inspire more efficient and adaptive artificial systems. Drawing from her own experience transitioning from academia to industry, she will discuss the technical challenges, emerging research directions, and interdisciplinary skills needed to bridge fundamental brain research with real-world AI applications.

🧠 About the speaker
Dr. Brüers-Freyssinet is an AI Research Scientist at Brainchip, where she develops efficient computer vision models for neuromorphic hardware using techniques such as pruning, quantization, and regularization. She completed her PhD in Neuroscience at Université Paul Sabatier Toulouse III in 2017, studying the oscillatory correlates of conscious perception. Her current work investigates state-space models for temporal problems like gesture recognition and EEG-based behavior prediction—advancing the frontier where neuroscience meets artificial intelligence.

📅 Tuesday 28th of October 2025
📍 Per21, Université de Fribourg - Universität Freiburg Pérolles Campus, Room E140.
Follow the talk online: https://lnkd.in/eQPKxAUX

Join us for a fascinating discussion on how brain-inspired approaches are shaping the next generation of intelligent, energy-efficient AI systems.
View attachment 92436

Thank you, but where is the Link?
 
  • Like
Reactions: 1 users

manny100

Top 20
I asked ChatGPT-5. We already knew the answer, but it's a good reminder that BRN should do very, very well.
What would be the likely result if the US rejected Neuromorphic Edge AI for Defense and the Golden Dome but it was adopted by China and Russia?

In short: If the U.S. rejected neuromorphic edge AI for defense while China and Russia embraced it, the likely result would be a significant strategic disadvantage for the U.S., with adversaries gaining faster, more adaptive, and more autonomous military capabilities.

🔑 Key Consequences​

  • Loss of Technological Edge Neuromorphic AI mimics the brain’s neural architecture, enabling ultra-low power, real-time decision-making at the edge (without cloud reliance). If China and Russia deployed this in defense systems, they could achieve faster targeting, better situational awareness, and more resilient autonomous platforms than U.S. forces relying on conventional AI or human-in-the-loop systems.
  • Erosion of Deterrence The U.S. military’s deterrence relies heavily on maintaining a qualitative technological advantage. Rejecting neuromorphic AI while adversaries adopt it could undermine U.S. credibility, especially in contested regions like the South China Sea or Eastern Europe.
  • Vulnerability of Defensive Shields (e.g., “Golden Dome”) A neuromorphic-enabled missile defense system could process sensor data faster and adapt to novel attack patterns (like swarms of hypersonic or drone threats). If China and Russia fielded such systems while the U.S. stuck with slower, less adaptive defenses, American missile shields could be outpaced and overwhelmed.
  • Acceleration of the AI Arms Race Analysts already describe the U.S.–China rivalry as an AI arms race. A U.S. decision to abstain from neuromorphic AI would not slow this race—it would simply cede leadership. China and Russia could then set the standards, export the technology to allies, and reshape the global defense balance.
  • Operational Risks for U.S. Forces In battlefield scenarios, neuromorphic AI could allow adversaries to:
    • Deploy autonomous drones with longer endurance and faster reaction times.
    • Run electronic warfare systems that adapt in real time to U.S. countermeasures.
    • Enhance command-and-control networks with near-instantaneous decision loops. This would leave U.S. forces reacting rather than dictating the tempo of conflict.

⚖️ Strategic Outlook​

  • China and Russia’s Gain: Adoption would give them asymmetric advantages—low-power, adaptive AI that thrives in environments where cloud connectivity is denied (e.g., electronic warfare zones).
  • U.S. Loss: By rejecting the technology, the U.S. would risk falling behind in both offensive and defensive capabilities, undermining its alliances and deterrence posture.
  • Global Impact: Allies might gravitate toward Chinese or Russian defense ecosystems, fracturing U.S.-led security architectures like NATO or Indo-Pacific partnerships.

Bottom line: Rejecting neuromorphic edge AI would not be a neutral choice for the U.S.—it would be a strategic self-handicap. In modern warfare, where milliseconds matter, adversaries with neuromorphic-enabled systems could seize the initiative, leaving the U.S. struggling to catch up.
Would you like me to sketch out a scenario analysis (e.g., Taiwan Strait conflict, NATO air defense, Middle East missile defense) to illustrate how this imbalance might play out in practice?
 
  • Like
Reactions: 8 users

CHIPS

Regular

It was not even funny the first time! :rolleyes:

If you can copy and paste the whole article, you must also be capable of copying and pasting the link. I judge you to be smart enough for that.
Everybody here posts the corresponding links; only you are so inconsiderate as not to do it. Why?
 
  • Like
Reactions: 2 users

Guzzi62

Regular
It was not even funny the first time! :rolleyes:

If you can copy and paste the whole article, you must also be capable of copying and pasting the link. I judge you to be smart enough for that.
Everybody here posts the corresponding links; only you are so inconsiderate as not to do it. Why?
Agreed 100%!
 
  • Like
Reactions: 1 users

"The UK Multidisciplinary Centre for Neuromorphic Computing is headed by the Aston Institute of Photonic Technologies (AIPT) and was officially launched at the House of Lords on 21 October."

"Neuromorphic computing, inspired by the brain's really remarkable energy efficiency, could fundamentally transform how AI operate” - Lord Patrick Vallance"

"As well as speakers from politics and science, a message from the Minister for Science, Innovation, Research and Nuclear, Lord Patrick Vallance was delivered to the guests at the Houses of Parliament reception. Speaking via video he said that he believed the initiative could be key to tackling the issue of massive energy use by data centres. Current government figures show that they use about 2.5% of the UK's electricity, a figure projected to rise significantly with the further advances in AI."

"The centre will be led by the AIPT and will include world-leading researchers from Aston University, the University of Oxford, the University of Cambridge, the University of Southampton, Queen Mary University of London, Loughborough University and the University of Strathclyde. The centre will be supported by a broad network of industry partners to enhance the centre’s impact on society. These include Microsoft Research, Thales, BT, QinetiQ, Nokia Bell Labs, Hewlett Packard Labs, Leonardo, Northrop Grumman and a number of small to medium enterprises."
 
  • Like
  • Fire
  • Love
Reactions: 13 users

Tothemoon24

Top 20
It was not even funny the first time! :rolleyes:

If you can copy and paste the whole article, you must also be capable of copying and pasting the link. I judge you to be smart enough for that.
Everybody here posts the corresponding links; only you are so inconsiderate as not to do it. Why?
Oh here we go !!!!

Good old CHIP on the shoulder .

You said: ‘I judge you to be smart enough for that’ - wow, what a wanker.

For your information, I didn’t copy & paste - I simply took a screenshot!!

You said: everybody here posts links, only you are so inconsiderate as to not do it.

I post links where I see fit & will continue to do so. If you care to take a look at the 1st post on this page as an example, there’s plenty of links supplied for your pleasure.

1761559150468.gif
 
  • Haha
  • Fire
  • Like
Reactions: 9 users

Frangipani

Top 20
View attachment 92437


We are pleased to welcome Dr. Sasskia Brüers Freyssinet, AI Research Scientist at BrainChip , for the next lecture in our Current Topics in Digital Neuroscience series:

🎤 From Neuroscience to Applied Artificial Intelligence

In this talk, Dr. Brüers-Freyssinet will explore the path from neuroscience research to applied AI, showing how insights from brain science can inspire more efficient and adaptive artificial systems. Drawing from her own experience transitioning from academia to industry, she will discuss the technical challenges, emerging research directions, and interdisciplinary skills needed to bridge fundamental brain research with real-world AI applications.

🧠 About the speaker
Dr. Brüers-Freyssinet is an AI Research Scientist at Brainchip, where she develops efficient computer vision models for neuromorphic hardware using techniques such as pruning, quantization, and regularization. She completed her PhD in Neuroscience at Université Paul Sabatier Toulouse III in 2017, studying the oscillatory correlates of conscious perception. Her current work investigates state-space models for temporal problems like gesture recognition and EEG-based behavior prediction—advancing the frontier where neuroscience meets artificial intelligence.

📅 Tuesday 28th of October 2025
📍 Per21, Université de Fribourg - Universität Freiburg Pérolles Campus, Room E140.
Follow the talk online: https://lnkd.in/eQPKxAUX

Join us for a fascinating discussion on how brain-inspired approaches are shaping the next generation of intelligent, energy-efficient AI systems.
View attachment 92436

Note, though, that Sasskia Brüers-Freyssinet has been signalling on LinkedIn that she is “Open To Work” ever since Olivier Coenen was sacked a month ago.
So it looks as if she is planning on leaving BrainChip sooner rather than later…



FA7E4E13-0C74-4D3B-ACE2-0EBCEFD40F97.jpeg


2FC49EED-1049-4DE1-9122-061A781B15FA.jpeg




… just like her Toulouse office colleagues Matthieu Hernandez…


E26D986C-1002-4FBC-AEEA-EF3FC0D97DCC.jpeg



4D021C43-3830-4F4B-B1B8-F4BEF45CB05C.jpeg



… and Fares Ernez, whose LinkedIn profile picture has shown for months that he is “Open To Work”.
In August, he even liked a critical LinkedIn comment by a BrainChip shareholder (and TSE forum member), which shows that even some BrainChip employees share the frustration about the lack of new IP licences…


9F2420FB-D6C6-4778-AE28-C39A9951AD09.jpeg



4AB4C695-C1E4-4F29-BE89-92580B324546.jpeg




Just a reminder that we already lost long-term employee Richard Chevalier from our Toulouse office last month…

Sadly, Richard Resseguie is not the only Richard to have left BrainChip this month (saying that, hopefully there won’t be any Richard III, other than in some of our bookshelves…):

Richard Chevalier, who had been with our Toulouse-based office since 2018 (!), has joined Nio Robotics (formerly known as Nimble One, not to be confused with Nimble AI) as their Vice President of Platform Engineering:



View attachment 91392




View attachment 91394


View attachment 91393



One can only hope that he will spruik BrainChip’s technology to his new employer - as far as I can tell, there is no indication that Nio Robotics have already been exploring neuromorphic computing for their robots.

ebdcd98d-ca8f-4c78-bbb4-24c9cb38ee79-jpeg.91395



Nio Robotics is currently building Aru, a shape-shifting mobile robot for industrial environments, but also say in their self-description on LinkedIn that they are “reinventing movement to create the first robotic assistant for homes”.
Something our CTO Tony Lewis, who is also a robotics researcher, will likely be very intrigued about.




View attachment 91396

View attachment 91398


View attachment 91400
View attachment 91401

View attachment 91402


Watch Aru in action below, how it climbs stairs, reshapes to avoid obstacles, opens doors etc.







… who followed another ex-BrainChip employee from the Toulouse office - Alexandre d’Alton - to Nio Robotics:


817EC025-E418-4A39-8F80-931905EE5264.jpeg



Interestingly, Alexandre d’Alton’s last five months at BrainChip overlapped with his first five months at Nio Robotics (or Nimble One as it was called at the time).

But as I said in my above post on Richard Chevalier’s change of employers:

“One can only hope that he will spruik BrainChip’s technology to his new employer - as far as I can tell, there is no indication that Nio Robotics have already been exploring neuromorphic computing for their robots.”

Maybe the above is a hint Nio Robotics were exploring Akida in the first half of 2024 after all? On the other hand, Alexandre d’Alton may have had two concurrent jobs for a while that were totally unrelated? 🤔

With two former BrainChip engineers now working for Nio Robotics, it’s definitely worth keeping an eye on this company, though.




Let’s keep our fingers crossed, then, that two other very talented Toulouse-based BrainChip researchers, namely Douglas McLelland and Gilles Bézard, will continue to contribute to what our company has to offer.
 
  • Wow
  • Like
Reactions: 10 users

Frangipani

Top 20
According to Athos Silicon Co-Founder and former Mercedes-Benz North America mSoC Chief Architect François Piednoël, Akida does not pass the minimum requirements for Level 3 or Level 4 automated driving.


View attachment 92002

Good question. I suppose you would need to ask François Piednoël directly whether what he means is possibly that there is an Akida-inherent problem, which would also concern the IP. (The BRN shareholder on LinkedIn was asking “Why isn’t Mercedes using Akida technology, for example?”, referring to Akida technology in general, not only to physical chips such as AKD1000 or AKD1500).

Apart from that, since Athos Silicon has so far not signed any IP deal with us, we can’t be in the first mSoC silicon codenamed Polaris anyway that he was referring to in this recent video:


View attachment 92176



Athos Silicon: Multiple System-on-Chip for Safe Autonomy​

By Luke Forster
Published 1 October 2025

Building functional safety into compute for autonomy​

Athos Silicon, a spin-out from Mercedes-Benz, is addressing one of the most pressing challenges in autonomous systems: how to eliminate the single point of failure in compute architectures. Unlike traditional monolithic processors that can collapse if a single component fails, Athos Silicon’s Multiple System-on-Chip (mSoC) integrates redundancy directly into silicon.

The result is a functionally safe processor platform designed to meet ISO 26262 and other standards required for safety-critical applications.

Why safety-first design is essential​

Conventional computing platforms – with a CPU, GPU, and NPU working together – were never built for automotive safety. If a processor crashes or a transient error occurs, the entire system may fail. In a consumer PC this means a reboot; in a self-driving vehicle or industrial robot, it could mean disaster.
Athos Silicon has rethought this architecture from the ground up. By focusing on functional safety as a primary design constraint, its mSoC avoids the patchwork redundancy of external systems and instead bakes resilience into the hardware itself.

The mSoC architecture explained​

Athos Silicon’s mSoC integrates multiple chiplets into one package, each containing CPUs, controllers, and memory. Instead of a single supervisor chip that itself could fail, the mSoC operates through a voting mechanism — what Athos calls a “silicon democracy.”

Each chiplet executes tasks in parallel, and their outputs are compared in real time. If one diverges from the others, it is overruled and reset. This ensures continuous operation without interruption and prevents cascading system failures.

By embedding this redundancy, Athos Silicon enables high-reliability computing suitable for Level 3 and Level 4 autonomy while maintaining predictable performance.
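The voting scheme described above is essentially triple modular redundancy. A minimal sketch of a 2-out-of-3 voter (hypothetical illustration, not Athos Silicon's implementation):

```python
def vote(outputs):
    """2-out-of-3 majority vote across three redundant chiplet outputs.

    Returns (agreed_value, dissenter_index or None). The dissenting
    chiplet would be overruled and reset, per the "silicon democracy"
    description -- a toy model, not Athos code.
    """
    a, b, c = outputs
    if a == b:
        return a, (None if c == a else 2)
    if a == c:
        return a, 1
    if b == c:
        return b, 0
    raise RuntimeError("no majority: all three chiplets disagree")

value, dissenter = vote([42, 42, 41])  # majority is 42; chiplet 2 is overruled
```

Real hardware votes continuously on bus transactions rather than single values, but the fail-operational logic is the same: one divergent unit never propagates its output.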

Power efficiency for EVs and robotics​

Safety is not the only benefit. In electric vehicles, compute power directly affects range. Athos Silicon highlights that every 100 watts of compute load can reduce EV range by as much as 15 miles. By designing a chiplet system optimised for low-power efficiency, the mSoC reduces unnecessary energy consumption and makes autonomy more practical for battery-powered platforms.
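Taking the article's figure of roughly 15 miles lost per 100 W of compute at face value, the penalty scales linearly (the function is just illustrative arithmetic):

```python
def range_penalty_miles(compute_watts, miles_per_100w=15.0):
    """Estimated EV range lost to compute load, per the article's ~15 mi / 100 W."""
    return compute_watts * miles_per_100w / 100.0

# A hypothetical 250 W autonomy stack would cost about 37.5 miles of range.
penalty = range_penalty_miles(250)
```

Which is why a compute platform that delivers L3/L4 autonomy at L2-class power draw matters for battery-powered vehicles.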

From Mercedes-Benz R&D to startup scale​

The technology behind Athos Silicon was incubated within Mercedes-Benz before the company was spun out to bring the platform to the wider market.
Its first silicon, codenamed Polaris, is designed to deliver Level 3 and Level 4 autonomous capability in a footprint comparable to current Level 2 hardware.
Working with chiplet-packaging partners, Athos Silicon has accelerated validation and plans to deliver silicon to early customers soon. With no competitors currently offering integrated voting redundancy in a chiplet-based compute platform, Athos Silicon is carving out a unique position in the AI ecosystem.

Applications beyond cars​

While autonomous driving is the most visible use case, Athos Silicon’s architecture also applies to robotics, avionics, and even medical devices where safety and reliability are paramount. Any system requiring certifiable, functionally safe compute stands to benefit.

By combining chiplet redundancy, real-time voting, and safety-first design, Athos Silicon’s Multiple System-on-Chip may prove to be the missing hardware foundation for truly certifiable autonomy.


This is what the Polaris mSoC will roughly look like sizewise (watch from around 10:50 min):

View attachment 92177

According to François Piednoël, “Project mSoC” as such started in 2020 (still under Mercedes-Benz North America R&D).

Not sure what exact date the interview was recorded, but given that Athos Silicon as a Mercedes-Benz spin-off has been around since April 2025, and in the video the company is said to be about four months old, it must have been recorded sometime between late July and early September.

So when François Piednoël says “In fact, there is silicon coming back shortly. By the end of the summer we’ll have the chiplets in hands” (from 9:06 min), this means they would have received them by now, if everything went according to plan. (“We think we are in good shape for a startup - getting your silicon after, you know, five six months (…) With no visible competition, by the way.”)

He also says they invented the IP.


Reality check continued with regard to the question whether or not Akida might be integrated into Athos Silicon’s mSoC codenamed Polaris - you’ve now heard it twice 👆🏻👇🏻 from the horse’s mouth…


5DDA550E-F2AF-4DE2-A7F6-DD31C881D785.jpeg
 
Last edited:
  • Like
  • Sad
  • Fire
Reactions: 9 users
Fortiss just uploaded the baseline pipeline on GitHub for the ESA Anomaly Detection Hackathon 2025.

Set up for deployment on Akida.

https://github.com/nc-fortiss/2025_ESA_anomaly (Public)

Baseline pipeline for ESA spacecraft telemetry anomaly detection. Includes full dataset loader, Akida-compatible CNN model, caching, normalization, and TensorBoard visualization.

nc-fortiss/2025_ESA_anomaly​




Files (all committed within the last few hours by Kannan-Priya): .gitignore, Dataloader.py, LICENSE, README.md, plot.py, preview.py, setup_akida_env.sh, train.py

Repository files navigation​

2025_ESA_anomaly​

Hackathon NC
Official baseline for ESA spacecraft telemetry anomaly classification, developed as an open starting point for the ESA Anomaly Detection Hackathon 2025.
This repository provides everything you need to get started — from data loading and preprocessing to model training, evaluation, and Akida-ready deployment.

Overview​

Spacecraft generate enormous amounts of multichannel telemetry data — detecting anomalies in these signals early can prevent mission failures. This baseline uses deep learning on time-series data to detect anomalies in the ESA-Mission1 dataset, and is designed for:
  • Satellite telemetry analysis
  • Real-time anomaly detection
  • BrainChip Akida neuromorphic deployment

Features​

  • ✅ dataset loader (dataloader_tf.py) – reads 76 channel zip files, merges, resamples, labels, and windows
  • ✅ Akida-friendly CNN model (TensorFlow 2.15)
  • ✅ Automatic caching and normalization
  • ✅ 1-minute sampling rate and 14-year coverage
  • ✅ Class balancing and focal loss ready
  • ✅ TensorBoard visualizations
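The preprocessing steps listed above (normalization, windowing) can be sketched in plain NumPy. This is a simplified stand-in for what the repository's dataloader_tf.py does, not its actual code; the window length and stride below are arbitrary illustrative choices.

```python
import numpy as np

def normalize_and_window(series, window_len=32, stride=16):
    """Z-score normalize a (timesteps, channels) telemetry array per
    channel and slice it into overlapping windows, the usual input
    shape for a 1D-CNN anomaly classifier."""
    mean = series.mean(axis=0)
    std = series.std(axis=0) + 1e-8  # guard against flat channels
    normed = (series - mean) / std
    windows = [
        normed[start:start + window_len]
        for start in range(0, len(normed) - window_len + 1, stride)
    ]
    return np.stack(windows)  # shape: (n_windows, window_len, channels)

# 100 timesteps of 4-channel telemetry -> 5 overlapping windows.
telemetry = np.random.default_rng(0).normal(size=(100, 4))
batch = normalize_and_window(telemetry)
assert batch.shape == (5, 32, 4)
```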

Project structure​

ESA-Anomaly-Baseline/
│
├── dataloader_tf.py      # Dataset loader & preprocessor
├── train.py              # Main training script
├── requirements.txt
├── LICENSE               # MIT License
├── logs/                 # TensorBoard runs
├── ESA-Mission1/         # Dataset root
│   ├── channels/
│   ├── labels.csv
│   ├── anomaly_types.csv
│   ├── telecommands.csv
│   └── channels.csv
└── norm_stats.npz

Setup​

1. Clone the repository​

cd ESA-Anomaly-Baseline
bash setup_akida_env.sh

Place the ESA-Mission1 dataset in the folder, then run:
python train.py
tensorboard --logdir logs/fit --port 6006
Contact: Priya Kannan, Kannan@fortiss.org, Neuromorphic Computing Group, fortiss GmbH (Munich)
 
  • Like
  • Love
  • Fire
Reactions: 11 users

Rach2512

Regular
 
  • Like
  • Love
  • Fire
Reactions: 8 users

Diogenese

Top 20
Fortiss just uploaded the baseline pipeline on GitHub for the ESA Anomaly Detection Hackathon 2025.

Set up for deployment on Akida.


https://www.fortiss.org/en/research/projects/detail/ai4fdir

AI4FDIR is an ESA-funded project led by OHB System AG to develop an AI-based framework for autonomous fault detection, isolation, and recovery (FDIR), as well as constellation-level management and resource optimization in satellite telecommunication systems. By combining machine learning, deep learning, and neuromorphic computing, the project targets a tenfold reduction in both operational outage time and the time to restore nominal service. The consortium—comprising OHB DC, OHB Hellas, SATE, Kepler, and fortiss—will validate the approach through simulators and hardware-in-the-loop platforms, including BrainChip’s Akida neuromorphic processors. fortiss will implement a neuromorphic proof-of-concept for spacecraft-level FDIR.

Project description

The project addresses the need for greater autonomy in managing large satellite constellations, where traditional ground-based fault management is costly, slow, and increasingly impractical. The main challenge is to detect, isolate, and resolve failures on board in real time, while predicting potential degradations before they affect service.

To tackle this, the consortium combines advanced AI methods with neuromorphic hardware, enabling fast, low-power decision-making directly on spacecraft. This proof-of-concept approach will demonstrate how predictive monitoring and autonomous recovery can reduce outage and restoration times by an order of magnitude and pave the way for resilient, self-managing space systems.

Research contribution

fortiss will investigate CNN-based classification and detection algorithms on the BrainChip Akida platform. In addition, we will explore pattern-matching techniques, encoders, and complementary network architectures to process heterogeneous multi-channel inputs. As a first step, we will use the public ESA anomaly dataset to identify suitable network models for fault detection. Building on these results, we will validate the implemented algorithms using real-time data from Kepler and the OHB constellation simulator.

Project duration​

01.09.2025 - 31.08.2027

The project runs for 2 years, so that leaves the door open for Akida 2 & 3 SoCs.
 
  • Like
  • Fire
  • Love
Reactions: 12 users

Frangipani

Top 20
Fortiss just uploaded the baseline pipeline on GitHub for the ESA Anomaly Detection Hackathon 2025.

Set up for deployment on Akida.


It was uploaded to GitHub in preparation for the Munich Neuromorphic Hackathon 2025 that will take place from 7-8 and 10-12 November and is being co-organised by NeuroTUM and fortiss:

Last year, a student team using Akida won the Munich Neuromorphic Hackathon, organised by neuroTUM (a student club based at TU München / Technical University of Munich for students interested in the intersection of neuroscience and engineering) and our partner fortiss (whom, however, BrainChip has never officially acknowledged as a partner to this day).

Will Akida again help one of the teams to win this year’s challenge?!
The 2025 Munich Neuromorphic Hackathon will take place from 7-12 November.

“The teams will face interesting industry challenges posed by German Aerospace Center (DLR), Simi Reality Motion Systems and fortiss, working with Brain-inspired computing methods towards the most efficient neuromorphic processor.”

Simi Reality Motion Systems (part of the ZF Group) has been collaborating with fortiss on several projects, such as SpikingBody (“Neuromorphic AI meets tennis. Real-time action recognition implemented on Loihi 2”) and EMMANÜELA (AR/VR).


View attachment 91444



View attachment 91452 View attachment 91453 View attachment 91454

View attachment 91445 View attachment 91446



View attachment 91447

View attachment 91448



View attachment 91450 View attachment 91451


Our partner fortiss (although the partnership has so far only been acknowledged by them, not by BrainChip, cf. https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-454015), is now also promoting the Munich Neuromorphic Hackathon 2025 (7-8, 10-12 November) on LinkedIn:


View attachment 91859


More on Simi Reality Motion Systems in my tagged post.

Sponsors of the Munich Hackathon 2025 are the Fraunhofer Institut für Techno- und Wirtschaftsmathematik (ITWM) in Kaiserslautern - one of two Fraunhofer Institutes collaborating on the STANCE (Strategic Alliance for Neuromorphic Computing and Engineering) project - as well as gAIn - Next Generation AI Computing, a joint initiative between Ludwig-Maximilians-Universität Munich (LMU), Technical University of Dresden and Technical University of Munich (TUM), supported by the state governments of Bavaria and Saxony.



View attachment 91860

View attachment 91861



Which reminded me of this September 2024 post, when I spotted a Fraunhofer ITWM researcher liking a BrainChip post:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-435802

View attachment 91866





View attachment 91863



View attachment 91867 View attachment 91868



E930D3E4-CBC5-4131-9C52-630B0F8BC641.jpeg

3903997E-BF80-4269-A9E6-C1EDEA260A6D.jpeg


Priya Kannan, who is named as the contact person on GitHub, will also be one of three fortiss researchers giving a presentation titled “Neuromorphic Computing for Industrial Use Cases” on the Hackathon’s first day:


89773D89-7AF5-427D-8ACD-CD71DEF7512A.jpeg




ACC88D80-48FF-46E7-9920-D984522DF7F5.jpeg
 
  • Like
  • Love
Reactions: 10 users

Frangipani

Top 20
BrainChip will also be at Edge AI Taipei 2025 on 11 November:



View attachment 92052


View attachment 92332 View attachment 92333


17D86FAB-3FB0-429A-A9D3-921D2EBA045A.jpeg
 
  • Like
  • Fire
Reactions: 6 users

stockduck

Regular
Reality check continued with regard to the question whether or not Akida might be integrated into Athos Silicon’s mSoC codenamed Polaris - you’ve now heard it twice 👆🏻👇🏻 from the horse’s mouth…


View attachment 92449
I don't know why I can't find Mercedes as a partner on the BrainChip webpage (maybe I have to do a better search), but I think Akida IP is not made for processing all the data; it is more of a team player alongside another, more energy-consuming device. So Akida IP will decide whether the other device has to be woken up or left asleep when a signal comes in at the very front edge region?
That is how I understand this explanation:

"....Jonathan: Yeah, sure. Obviously, we're a neuromorphic chip company. The customers, by and large, don't much care what inspired the technology. They care only about: does it give them a significant advantage in their system? One of the things we have is that our neural compute engine can run without any intervention at all from a host processor. So almost all of these systems consist of a host processor that's doing the high-level stuff and a compute engine that does computation efficiently. But in our case, they can put the host processor to sleep. Our compute engine can just run in the background. That's kind of our secret for, as you mentioned, these situations with tiny form factor and long battery life. The host is asleep and the Akida chip or IP is just running in the background, waiting for the particular event that will cause it to wake up the larger system. And so, as Sean was alluding to, we're now looking at a couple of extremely small form factor systems, particularly the one we call Pico, which will be this kind of tiny little piece of silicon. They can run in the background and wake up the big system appropriately when that is required. And so I think that's a tremendous competitive advantage. And as I say, customers don't much care how it does it. They just know that they're getting a tremendous performance advantage.
...."


So Akida IP always needs a "big brother" to compute the whole thing, right? But with Akida IP everything can be done a factor of x more efficiently..?
I'm not a technically minded person, but that is how I understand it so far.
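The division of labour Jonathan describes in the quote (tiny always-on detector, sleeping host) can be sketched as a simple loop. This is purely illustrative: the function names, the threshold, and the scoring stand-in are all made up here and have nothing to do with BrainChip's actual API.

```python
def cheap_event_score(sample):
    # Hypothetical stand-in for the low-power neuromorphic detector;
    # in this toy version the sample itself is the score.
    return sample

def always_on_watchdog(samples, wake_threshold=0.9):
    """Score each incoming sample cheaply and only 'wake the host'
    (here: collect the sample) when the score crosses a threshold.
    The expensive host path never runs for below-threshold input."""
    woken = []
    for sample in samples:
        score = cheap_event_score(sample)  # low-power path, always running
        if score >= wake_threshold:
            woken.append(sample)           # stand-in for waking the host CPU
    return woken

# Only the two high-scoring events would wake the larger system.
assert always_on_watchdog([0.1, 0.95, 0.3, 0.99]) == [0.95, 0.99]
```

The efficiency argument is exactly this shape: the host only pays its (large) energy cost for the rare samples that matter.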
 
  • Like
  • Love
  • Fire
Reactions: 6 users
Top Bottom