BRN Discussion Ongoing

7für7

Top 20
Toward the end, the rats always give it their all… it's unbelievable that this isn't being dealt with.
And that BrainChip itself doesn't act by lodging a notice with the ASX… it's obvious that something isn't right here.
 
  • Like
Reactions: 4 users

Tothemoon24

Top 20
BOSSA looks like one to keep a watch on in the hearing aid space

In a busy room full of talking people, most of us can still pick out one voice to focus on. This common yet complex task—known as the “cocktail party effect”—relies on the brain’s incredible ability to sort through sound. But for people with hearing loss, filtering out background noise can feel impossible. Even the most advanced hearing aids often struggle in these noisy environments.

Now, researchers at Boston University may have found a new way to help. They’ve developed a brain-inspired algorithm that allows hearing aids to better isolate individual voices in a crowd. When tested, this method boosted speech recognition by an impressive 40 percentage points, far outperforming current technologies.


A New Approach to an Old Problem​

In crowded social settings like dinner parties or workplace meetings, conversations often overlap. For those with hearing loss, these situations can be frustrating. Even with hearing aids, voices blur together in a mess of sound. This makes it hard to follow conversations, stay engaged, or even participate at all.

Virginia Best, a speech and hearing researcher at BU, says this is the number one complaint among those with hearing loss. “These environments are very common in daily life,” Best explains, “and they tend to be really important to people.”
Traditional hearing aids often include tools like directional microphones—also called beamformers—that try to focus on sounds coming from one direction. But these tools have limitations. In complex environments with many voices, beamforming often fails. In fact, in tests conducted by the BU team, the standard industry algorithm didn’t help much—and sometimes made things worse.
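
For context, the industry-standard beamformer mentioned above is, at its simplest, a delay-and-sum scheme: delay one microphone's signal so that sound from the chosen direction lines up across the microphones, then add the channels. Below is a minimal NumPy sketch of that generic idea only; it is not BOSSA and not any hearing-aid vendor's actual code, and the microphone spacing, sample rate, and function name are assumptions made for illustration.

import numpy as np

def delay_and_sum(left, right, angle_deg, mic_spacing_m=0.15, fs=16000, c=343.0):
    # Extra travel time to the far microphone for a source at angle_deg off-axis.
    delay_s = mic_spacing_m * np.sin(np.deg2rad(angle_deg)) / c
    shift = int(round(delay_s * fs))        # delay expressed in whole samples
    aligned_right = np.roll(right, -shift)  # crude alignment (wrap-around ignored in this sketch)
    return 0.5 * (left + aligned_right)     # sound from the target direction adds coherently

Signals arriving from other directions stay misaligned and partially cancel, which is also why the approach struggles when several talkers surround the listener: no single delay separates them.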
That’s where the new technology, known as BOSSA, comes in. BOSSA stands for Biologically Oriented Sound Segregation Algorithm. It was developed by Kamal Sen, a biomedical engineering professor at BU’s College of Engineering. “We were extremely surprised and excited by the magnitude of the improvement in performance,” says Sen. “It’s pretty rare to find such big improvements.”

Built on Brain Science​

Sen has spent two decades exploring how the brain decodes sound. His work focuses on how sound signals travel from the ears to the brain and how certain neurons help identify or suppress sounds. One key finding? The brain uses “inhibitory neurons” to cancel out background noise and enhance the sounds we want to hear.
All subjects' average word recognition scores. (CREDIT: Kamal Sen, et al.)
“You can think of it as a form of internal noise cancellation,” Sen says. Different neurons are tuned to respond to different directions and pitches. This lets your brain focus attention on one sound source while ignoring others.
BOSSA was built to mimic this process. The algorithm uses spatial cues—like how loud a sound is and how quickly it arrives in each ear—to pinpoint its location. It then filters sounds based on these cues, separating them like your brain would. “It’s basically a computational model that mimics what the brain does,” Sen says.
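
The "spatial cues" described here are the classic binaural pair: interaural level difference (how loud the sound is in each ear) and interaural time difference (how quickly it arrives in each ear). The published BOSSA code is not reproduced here; the following is only a rough NumPy sketch of how those two cues could be estimated from a pair of ear signals, with the sample rate and the one-millisecond search window chosen as plausible assumptions.

import numpy as np

def interaural_cues(left, right, fs=16000, max_itd_s=0.001):
    # Level cue: ratio of the two channels' RMS energies, in decibels.
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    ild_db = 20.0 * np.log10(rms(left) / rms(right))
    # Timing cue: the lag (within roughly +/-1 ms, a head-sized delay) that best
    # aligns the two channels, found by brute-force correlation.
    max_lag = int(max_itd_s * fs)
    lags = list(range(-max_lag, max_lag + 1))
    scores = [np.dot(left[max_lag:-max_lag], np.roll(right, lag)[max_lag:-max_lag])
              for lag in lags]
    itd_s = lags[int(np.argmax(scores))] / fs
    return ild_db, itd_s

A separator in the spirit of BOSSA would evaluate cues like these per frequency channel and keep only the components whose cues match the attended location.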

Testing BOSSA in Real-Life Situations​

To find out if BOSSA really works, the BU team tested it in the lab. They recruited young adults with sensorineural hearing loss, the most common form, often caused by genetics or childhood illness. Participants wore headphones and listened to simulated conversations, with voices coming from different directions. They were asked to focus on one speaker while the algorithm worked in the background.
Each person completed the task under three different conditions: no algorithm, the standard beamforming algorithm used in current hearing aids, and BOSSA. The results were striking. BOSSA delivered a major improvement in speech recognition. The standard algorithm showed little or no improvement—and in some cases, performance dropped.
Speech reception thresholds (SRT) are shown as boxplots for each processing condition. (CREDIT: Kamal Sen, et al.)
Alexander Boyd, a BU PhD candidate in biomedical engineering, helped collect and analyze the data. He was also the lead author of the study, which was published in Communications Engineering, part of the Nature Portfolio.
Best, who formerly worked at Australia’s National Acoustic Laboratories, helped design the study. She says testing new technologies like BOSSA with real people is essential. “Ultimately, the only way to know if a benefit will translate to the listener is via behavioral studies,” Best says. “That requires scientists and clinicians who understand the target population.”

Big Potential for Hearing Technology​

An estimated 50 million Americans live with hearing loss, and the World Health Organization predicts that by 2050, nearly 2.5 billion people worldwide will be affected. That makes the need for better hearing solutions urgent.
Sen has patented BOSSA and hopes to partner with companies that want to bring it to market. He believes that major tech players entering the hearing aid space—like Apple with its AirPods Pro 2, which include hearing aid features—will drive innovation forward. “If hearing aid companies don’t start innovating fast, they’re going to get wiped out,” says Sen. “Apple and other start-ups are entering the market.”
Individual participant audiograms. The different curves show pure-tone thresholds for each of the eight participants (averaged over left and right ears). Unique symbols distinguish individual subjects. (CREDIT: Kamal Sen, et al.)
And the timing couldn’t be better. As hearing technology becomes more widely available and advanced, tools like BOSSA could help millions of people reconnect with the world around them. From social events to everyday conversations, better sound separation can mean a better life.

Beyond Hearing Loss: A Wider Application​

BOSSA was built to help those with hearing difficulties, but its potential doesn’t end there. The way the brain focuses on sound—what researchers call “selective attention”—matters in many conditions. “The [neural] circuits we are studying are much more general purpose and much more fundamental,” Sen says. “It ultimately has to do with attention, where you want to focus.”
That’s why the team is now exploring how the same science could help people with ADHD or autism. These groups also struggle with multiple competing inputs—whether sounds, visuals, or tasks—and may benefit from tools that help guide attention.
They’re also testing a new version of BOSSA that adds eye-tracking. By following where someone looks, the device could better figure out who they’re trying to listen to. This could make the technology even more effective in fast-paced, real-world settings.


Sharpening Sound, Changing Lives​

The success of BOSSA offers real hope. It’s not just another upgrade in hearing tech—it’s a shift in how we approach sound processing. Instead of trying to boost all sound or block background noise blindly, it takes cues from biology, using the brain’s blueprint to help listeners find meaning in the noise.
For many with hearing loss, this could change everything. Being able to join conversations, pick out voices, and stay connected socially are vital parts of daily life. With tools like BOSSA, those goals move a little closer. And as this technology continues to grow, its reach may extend beyond hearing loss, offering help with focus and attention challenges too.
What started as a solution for a noisy dinner party could one day reshape how we interact with the world.

















Here, we present a system employing a novel strategy for stimulus reconstruction from neural spikes. Conceptually, this strategy uses time-frequency masking by computing a spike-based mask (spike-mask). We first consider the strategy for one-dimensional stimulus (e.g. sound waves). We show how this stimulus reconstruction method can be applied, using the cortical model as an example. We also show that this strategy produces reconstructions with intelligibility and quality higher than those reconstructed from the linear filtering method (table 1). Then we discuss how this strategy may be generalized for multi-dimensional stimulus (e.g. images and videos). The strategy presented here may be generalized to perform reconstruction on both artificial SNNs and neural models from experimental data as long as they satisfy the assumptions for our model.
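
As a rough illustration of the time-frequency masking idea in that passage, the sketch below takes a sound mixture to the time-frequency domain, applies a mask, and inverts back to a waveform. The mask here is a stand-in array; deriving it from the spiking activity of the cortical model (the actual "spike-mask") is the part specific to the paper and is not reproduced.

import numpy as np
from scipy.signal import stft, istft

def reconstruct_with_mask(mixture, mask, fs=16000, nperseg=512):
    # Forward transform: mixture waveform -> complex time-frequency bins.
    f, t, Z = stft(mixture, fs=fs, nperseg=nperseg)
    # The mask (same shape as Z, values in [0, 1]) keeps the attended bins.
    _, recon = istft(Z * mask, fs=fs, nperseg=nperseg)
    return recon

# Toy usage: a random "mixture" and a placeholder mask that keeps the louder bins.
fs = 16000
mixture = np.random.randn(fs)
_, _, Z = stft(mixture, fs=fs, nperseg=512)
placeholder_mask = (np.abs(Z) > np.median(np.abs(Z))).astype(float)  # NOT spike-derived
recon = reconstruct_with_mask(mixture, placeholder_mask, fs=fs)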
 

  • Like
  • Love
  • Fire
Reactions: 17 users

Iseki

Regular
Hi JD,

For what it's worth, I asked ChatGPT for a neutral take on this.

What I thought was interesting was that it wove Anduril's headset into the response. I didn't prompt it to, but earlier in the day, in a separate chat, I had queried ChatGPT on board or integration possibilities and whether an Akida 1500 + Arm Cortex-M85 module (as an example) could power something like Anduril's helmet.




ChatGPT's Response


What “supports 128 MACs per node” most likely means​

  • Each Akida node has a local 4×4 MAC micro-engine (128 MACs) it can invoke for certain ops.
  • Those MACs co-exist with the spiking compute (CNP/FNP or equivalent). They’re there so models can include non-spiking layers/ops without leaving the chip.

Why add MACs if you have an SNN?​

Because many modern edge models are hybrid. MACs cover things SNNs don’t excel at (or that are simpler to drop in as-is):
  • Conv/pointwise (1×1) / depthwise conv blocks
  • Linear/projection layers (e.g., classifier heads, QKV in tiny attention)
  • Normalisation / pooling / residual glue
  • Pre/post-processing (e.g., short FIRs, feature projections)
  • Direct reuse of existing INT4/INT8 CNN chunks without full SNN conversion
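
As a purely illustrative sketch (plain NumPy, not the Akida SDK or any BrainChip API), the snippet below shows two of the MAC-dominated building blocks listed above, a pointwise (1×1) convolution and a linear projection head, to make concrete what kind of arithmetic a local MAC engine would be asked to run; all shapes and names are invented for the example.

import numpy as np

def pointwise_conv(x, w, b):
    # 1x1 convolution: x is (H, W, C_in), w is (C_in, C_out), b is (C_out,).
    # Every output pixel is just a small matrix multiply -- pure MAC work.
    return np.einsum('hwc,co->hwo', x, w) + b

def linear_head(features, w, b):
    # Projection / classifier head: features (D,), w (D, n_classes), b (n_classes,).
    return features @ w + b

x = np.random.randn(32, 32, 8)                     # toy feature map
y = pointwise_conv(x, np.random.randn(8, 16), np.zeros(16))
pooled = y.mean(axis=(0, 1))                       # global average pool as the "glue" op
logits = linear_head(pooled, np.random.randn(16, 4), np.zeros(4))

On a hybrid device, the idea would be that layers like these run on the per-node MACs while event-driven, temporal layers stay on the spiking engine.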

What a hybrid pipeline might look like in a helmet​

  1. Sensors
    • Front/rear frame cameras → light MAC preproc (resize/1×1 conv).
    • Event camera/radar/audio → feed SNN/TENNs directly.
  2. Perception
    • SNN handles temporal/gesture/track (event streams).
    • MAC handles small CNN blocks or projection heads.
  3. Fusion & decisions on the host MCU (e.g., Cortex-M85).

Bottom line​

  • The “128 MACs per node” doesn’t signal the end of Akida’s neuromorphic core. It signals a pragmatic hybrid: keep the spiking engine for temporal, sparse, event-driven strengths, and use local MACs to natively run conventional layers (conv/linear/norm) and reuse existing model chunks.
  • That combo generally improves accuracy, eases porting, and lowers total latency/power versus forcing everything into SNN or everything into classic CNN on a separate chip.
"
I asked ChatGPT for a neutral take on this.

What I thought was interesting, was that it weaved in Anduril's headset into the response."

That's how ChatGPT works. It uses all your previous conversations with it to personalize your answer.
 
  • Like
Reactions: 2 users

Tothemoon24

Top 20
We are pleased to welcome Dr. Sasskia Brüers Freyssinet, AI Research Scientist at BrainChip, for the next lecture in our Current Topics in Digital Neuroscience series:

🎤 From Neuroscience to Applied Artificial Intelligence

In this talk, Dr. Brüers-Freyssinet will explore the path from neuroscience research to applied AI, showing how insights from brain science can inspire more efficient and adaptive artificial systems. Drawing from her own experience transitioning from academia to industry, she will discuss the technical challenges, emerging research directions, and interdisciplinary skills needed to bridge fundamental brain research with real-world AI applications.

🧠 About the speaker
Dr. Brüers-Freyssinet is an AI Research Scientist at BrainChip, where she develops efficient computer vision models for neuromorphic hardware using techniques such as pruning, quantization, and regularization. She completed her PhD in Neuroscience at Université Paul Sabatier Toulouse III in 2017, studying the oscillatory correlates of conscious perception. Her current work investigates state-space models for temporal problems like gesture recognition and EEG-based behavior prediction—advancing the frontier where neuroscience meets artificial intelligence.

📅 Tuesday 28th of October 2025
📍 Per21, Université de Fribourg - Universität Freiburg Pérolles Campus, Room E140.
Follow the talk online: https://lnkd.in/eQPKxAUX

Join us for a fascinating discussion on how brain-inspired approaches are shaping the next generation of intelligent, energy-efficient AI systems.
 
  • Like
  • Fire
  • Love
Reactions: 9 users

CHIPS

Regular

Thank you, but where is the Link?
 
  • Like
Reactions: 1 users

manny100

Top 20
I asked ChatGPT-5. We already knew the answer, but it's a good reminder that BRN should do very, very well.
What would be the likely result if the US rejected Neuromorphic Edge AI for Defense and the Golden Dome but it was adopted by China and Russia?

In short: If the U.S. rejected neuromorphic edge AI for defense while China and Russia embraced it, the likely result would be a significant strategic disadvantage for the U.S., with adversaries gaining faster, more adaptive, and more autonomous military capabilities.

🔑 Key Consequences​

  • Loss of Technological Edge: Neuromorphic AI mimics the brain’s neural architecture, enabling ultra-low power, real-time decision-making at the edge (without cloud reliance). If China and Russia deployed this in defense systems, they could achieve faster targeting, better situational awareness, and more resilient autonomous platforms than U.S. forces relying on conventional AI or human-in-the-loop systems.
  • Erosion of Deterrence: The U.S. military’s deterrence relies heavily on maintaining a qualitative technological advantage. Rejecting neuromorphic AI while adversaries adopt it could undermine U.S. credibility, especially in contested regions like the South China Sea or Eastern Europe.
  • Vulnerability of Defensive Shields (e.g., “Golden Dome”): A neuromorphic-enabled missile defense system could process sensor data faster and adapt to novel attack patterns (like swarms of hypersonic or drone threats). If China and Russia fielded such systems while the U.S. stuck with slower, less adaptive defenses, American missile shields could be outpaced and overwhelmed.
  • Acceleration of the AI Arms Race: Analysts already describe the U.S.–China rivalry as an AI arms race. A U.S. decision to abstain from neuromorphic AI would not slow this race—it would simply cede leadership. China and Russia could then set the standards, export the technology to allies, and reshape the global defense balance.
  • Operational Risks for U.S. Forces: In battlefield scenarios, neuromorphic AI could allow adversaries to:
    • Deploy autonomous drones with longer endurance and faster reaction times.
    • Run electronic warfare systems that adapt in real time to U.S. countermeasures.
    • Enhance command-and-control networks with near-instantaneous decision loops. This would leave U.S. forces reacting rather than dictating the tempo of conflict.

⚖️ Strategic Outlook​

  • China and Russia’s Gain: Adoption would give them asymmetric advantages—low-power, adaptive AI that thrives in environments where cloud connectivity is denied (e.g., electronic warfare zones).
  • U.S. Loss: By rejecting the technology, the U.S. would risk falling behind in both offensive and defensive capabilities, undermining its alliances and deterrence posture.
  • Global Impact: Allies might gravitate toward Chinese or Russian defense ecosystems, fracturing U.S.-led security architectures like NATO or Indo-Pacific partnerships.

Bottom line: Rejecting neuromorphic edge AI would not be a neutral choice for the U.S.—it would be a strategic self-handicap. In modern warfare, where milliseconds matter, adversaries with neuromorphic-enabled systems could seize the initiative, leaving the U.S. struggling to catch up.
Would you like me to sketch out a scenario analysis (e.g., Taiwan Strait conflict, NATO air defense, Middle East missile defense) to illustrate how this imbalance might play out in practice?
 
  • Like
Reactions: 4 users

CHIPS

Regular

It was not even funny the first time! :rolleyes:

If you can copy and paste the whole article, you must also be capable of copying and pasting the link. I judge you to be smart enough for that.
Everybody here posts the corresponding links; only you are so inconsiderate as not to do it. Why?
 
  • Like
Reactions: 1 users

Guzzi62

Regular
Agreed 100%!
 
  • Like
Reactions: 1 users

"The UK Multidisciplinary Centre for Neuromorphic Computing is headed by the Aston Institute of Photonic Technologies (AIPT) and was officially launched at the House of Lords on 21 October."

"Neuromorphic computing, inspired by the brain's really remarkable energy efficiency, could fundamentally transform how AI operate” - Lord Patrick Vallance"

"As well as speakers from politics and science, a message from the Minister for Science, Innovation, Research and Nuclear, Lord Patrick Vallance was delivered to the guests at the Houses of Parliament reception. Speaking via video he said that he believed the initiative could be key to tackling the issue of massive energy use by data centres. Current government figures show that they use about 2.5% of the UK's electricity, a figure projected to rise significantly with the further advances in AI."

"The centre will be led by the AIPT and will include world-leading researchers from Aston University, the University of Oxford, the University of Cambridge, the University of Southampton, Queen Mary University of London, Loughborough University and the University of Strathclyde. The centre will be supported by a broad network of industry partners to enhance the centre’s impact on society. These include Microsoft Research, Thales, BT, QinetiQ, Nokia Bell Labs, Hewlett Packard Labs, Leonardo, Northrop Grumman and a number of small to medium enterprises."
 
  • Like
  • Fire
Reactions: 8 users

Tothemoon24

Top 20
Oh here we go!!!!

Good old CHIP on the shoulder.

You said: ‘I judge you to be smart enough for that’ - wow, what a wanker.

For your information, I didn't copy & paste; I simply took a screenshot!!

You said: everybody here posts links, only you are so inconsiderate as not to do it.

I post links where I see fit & will continue to do so. If you care to take a look at the 1st post on this page as an example, there's plenty of links supplied for your pleasure.

 
  • Haha
  • Fire
Reactions: 2 users
Top Bottom