BRN Discussion Ongoing

7für7

Top 20
Toward the end, the rats always give it their best… it’s unbelievable that this isn’t being dealt with.
And the fact that BrainChip itself doesn’t act by submitting a notice to the ASX… it’s obvious that something isn’t right here.
 
  • Like
Reactions: 4 users

Tothemoon24

Top 20
BOSSA looks like one to keep a watch on in the hearing aid space

In a busy room full of talking people, most of us can still pick out one voice to focus on. This common yet complex task—known as the “cocktail party effect”—relies on the brain’s incredible ability to sort through sound. But for people with hearing loss, filtering out background noise can feel impossible. Even the most advanced hearing aids often struggle in these noisy environments.

Now, researchers at Boston University may have found a new way to help. They’ve developed a brain-inspired algorithm that allows hearing aids to better isolate individual voices in a crowd. When tested, this method boosted speech recognition by an impressive 40 percentage points, far outperforming current technologies.


A New Approach to an Old Problem​

In crowded social settings like dinner parties or workplace meetings, conversations often overlap. For those with hearing loss, these situations can be frustrating. Even with hearing aids, voices blur together in a mess of sound. This makes it hard to follow conversations, stay engaged, or even participate at all.

Virginia Best, a speech and hearing researcher at BU, says this is the number one complaint among those with hearing loss. “These environments are very common in daily life,” Best explains, “and they tend to be really important to people.”
Traditional hearing aids often include tools like directional microphones—also called beamformers—that try to focus on sounds coming from one direction. But these tools have limitations. In complex environments with many voices, beamforming often fails. In fact, in tests conducted by the BU team, the standard industry algorithm didn’t help much—and sometimes made things worse.
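For readers curious what a beamformer actually does, here is a minimal sketch of a fixed two-microphone delay-and-sum beamformer. This is not the BU team's method or any commercial hearing-aid algorithm; the function name, microphone spacing, sample rate, and steering angle are all illustrative assumptions, and real devices use adaptive, filter-bank-based versions of this idea.

```python
# Minimal sketch of a two-microphone delay-and-sum beamformer (illustrative only).
# Assumes a plane wave arriving from angle theta_deg with mic spacing mic_spacing.
import numpy as np

def delay_and_sum(left, right, theta_deg, mic_spacing=0.15, fs=16000, c=343.0):
    """Steer a two-mic array toward theta_deg by delaying one channel and summing."""
    tau = mic_spacing * np.sin(np.deg2rad(theta_deg)) / c   # inter-mic delay in seconds
    shift = int(round(tau * fs))                            # delay in whole samples (crude)
    right_aligned = np.roll(right, -shift)                  # align the second channel
    return 0.5 * (left + right_aligned)                     # coherent sum favours theta_deg

# Toy usage: a talker straight ahead (0 degrees) reaches both mics at the same time.
fs = 16000
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 440 * t)
right = np.sin(2 * np.pi * 440 * t)
enhanced = delay_and_sum(left, right, theta_deg=0, fs=fs)
```

Sounds arriving from the steered direction add up coherently, while sounds from other directions partially cancel; with many overlapping talkers, that single spatial constraint is often not enough, which is the failure mode described above.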
That’s where the new technology, known as BOSSA, comes in. BOSSA stands for Biologically Oriented Sound Segregation Algorithm. It was developed by Kamal Sen, a biomedical engineering professor at BU’s College of Engineering. “We were extremely surprised and excited by the magnitude of the improvement in performance,” says Sen. “It’s pretty rare to find such big improvements.”

Built on Brain Science​

Sen has spent two decades exploring how the brain decodes sound. His work focuses on how sound signals travel from the ears to the brain and how certain neurons help identify or suppress sounds. One key finding? The brain uses “inhibitory neurons” to cancel out background noise and enhance the sounds we want to hear.
All subjects' average word recognition scores. (CREDIT: Kamal Sen, et al.)
“You can think of it as a form of internal noise cancellation,” Sen says. Different neurons are tuned to respond to different directions and pitches. This lets your brain focus attention on one sound source while ignoring others.
BOSSA was built to mimic this process. The algorithm uses spatial cues—like how loud a sound is and how quickly it arrives in each ear—to pinpoint its location. It then filters sounds based on these cues, separating them like your brain would. “It’s basically a computational model that mimics what the brain does,” Sen says.
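The article does not reproduce BOSSA's actual implementation, so the following is only a rough sketch of the general idea of spatial masking from binaural cues: for each time-frequency bin, compute an interaural level difference and keep only the bins consistent with a source straight ahead. The function name, the 512-sample window, and the 3 dB tolerance are arbitrary illustrative choices, not values from the paper.

```python
# Hedged sketch of binaural spatial masking (NOT the published BOSSA code).
import numpy as np
from scipy.signal import stft, istft

def spatial_mask_enhance(left, right, fs=16000, ild_tolerance_db=3.0):
    """Keep time-frequency bins whose interaural level difference is near 0 dB
    (roughly: sounds coming from straight ahead), attenuate everything else."""
    f, t, L = stft(left, fs=fs, nperseg=512)
    _, _, R = stft(right, fs=fs, nperseg=512)
    eps = 1e-12
    ild_db = 20 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))  # level cue per bin
    mask = (np.abs(ild_db) < ild_tolerance_db).astype(float)       # spatial mask
    _, out = istft(0.5 * (L + R) * mask, fs=fs, nperseg=512)
    return out
```

The real algorithm combines timing and level cues with a model of frequency-tuned, mutually inhibiting neurons, but the mask-and-reconstruct structure above is the basic shape of the computation being described.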

Testing BOSSA in Real-Life Situations​

To find out if BOSSA really works, the BU team tested it in the lab. They recruited young adults with sensorineural hearing loss, the most common form, often caused by genetics or childhood illness. Participants wore headphones and listened to simulated conversations, with voices coming from different directions. They were asked to focus on one speaker while the algorithm worked in the background.
Each person completed the task under three different conditions: no algorithm, the standard beamforming algorithm used in current hearing aids, and BOSSA. The results were striking. BOSSA delivered a major improvement in speech recognition. The standard algorithm showed little or no improvement—and in some cases, performance dropped.
Speech reception thresholds (SRTs) are shown as boxplots for each processing condition. (CREDIT: Kamal Sen, et al.)
Alexander Boyd, a BU PhD candidate in biomedical engineering, helped collect and analyze the data. He was also the lead author of the study, which was published in Communications Engineering, part of the Nature Portfolio.
Best, who formerly worked at Australia’s National Acoustic Laboratories, helped design the study. She says testing new technologies like BOSSA with real people is essential. “Ultimately, the only way to know if a benefit will translate to the listener is via behavioral studies,” Best says. “That requires scientists and clinicians who understand the target population.”

Big Potential for Hearing Technology​

An estimated 50 million Americans live with hearing loss, and the World Health Organization predicts that by 2050, nearly 2.5 billion people worldwide will be affected. That makes the need for better hearing solutions urgent.
Sen has patented BOSSA and hopes to partner with companies that want to bring it to market. He believes that major tech players entering the hearing aid space—like Apple with its AirPod Pro 2, which includes hearing aid features—will drive innovation forward. “If hearing aid companies don’t start innovating fast, they’re going to get wiped out,” says Sen. “Apple and other start-ups are entering the market.”
Individual participant audiograms. The different curves show pure-tone thresholds for each of the eight participants (averaged over left and right ears). Unique symbols distinguish individual subjects. (CREDIT: Kamal Sen, et al.)
And the timing couldn’t be better. As hearing technology becomes more widely available and advanced, tools like BOSSA could help millions of people reconnect with the world around them. From social events to everyday conversations, better sound separation can mean a better life.

Beyond Hearing Loss: A Wider Application​

BOSSA was built to help those with hearing difficulties, but its potential doesn’t end there. The way the brain focuses on sound—what researchers call “selective attention”—matters in many conditions. “The [neural] circuits we are studying are much more general purpose and much more fundamental,” Sen says. “It ultimately has to do with attention, where you want to focus.”
That’s why the team is now exploring how the same science could help people with ADHD or autism. These groups also struggle with multiple competing inputs—whether sounds, visuals, or tasks—and may benefit from tools that help guide attention.
They’re also testing a new version of BOSSA that adds eye-tracking. By following where someone looks, the device could better figure out who they’re trying to listen to. This could make the technology even more effective in fast-paced, real-world settings.


Sharpening Sound, Changing Lives​

The success of BOSSA offers real hope. It’s not just another upgrade in hearing tech—it’s a shift in how we approach sound processing. Instead of trying to boost all sound or block background noise blindly, it takes cues from biology, using the brain’s blueprint to help listeners find meaning in the noise.
For many with hearing loss, this could change everything. Being able to join conversations, pick out voices, and stay connected socially are vital parts of daily life. With tools like BOSSA, those goals move a little closer. And as this technology continues to grow, its reach may extend beyond hearing loss, offering help with focus and attention challenges too.
What started as a solution for a noisy dinner party could one day reshape how we interact with the world.

Here, we present a system employing a novel strategy for stimulus reconstruction from neural spikes. Conceptually, this strategy uses time-frequency masking by computing a spike-based mask (spike-mask). We first consider the strategy for one-dimensional stimulus (e.g. sound waves). We show how this stimulus reconstruction method can be applied, using the cortical model as an example. We also show that this strategy produces reconstructions with intelligibility and quality higher than those reconstructed from the linear filtering method (table 1). Then we discuss how this strategy may be generalized for multi-dimensional stimulus (e.g. images and videos). The strategy presented here may be generalized to perform reconstruction on both artificial SNNs and neural models from experimental data as long as they satisfy the assumptions for our model.
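To make the spike-mask idea in that excerpt more concrete, here is a rough sketch of the strategy it describes; this is not the authors' code, and the binning, threshold, and STFT settings are illustrative assumptions. Spikes from frequency-tuned model neurons are binned onto the spectrogram grid, thresholded into a binary time-frequency mask, applied to the mixture spectrogram, and inverted back to a waveform.

```python
# Conceptual sketch of stimulus reconstruction via a spike-based time-frequency mask.
import numpy as np
from scipy.signal import stft, istft

def spike_mask_reconstruct(mixture, spike_counts, fs=16000, nperseg=512, threshold=1):
    """spike_counts must already be binned onto the same (freq, frame) grid as the STFT."""
    f, t, Z = stft(mixture, fs=fs, nperseg=nperseg)
    mask = (spike_counts >= threshold).astype(float)   # spike-mask: 1 where neurons fired
    _, recon = istft(Z * mask, fs=fs, nperseg=nperseg)
    return recon
```

The linear-filtering alternative mentioned in the excerpt instead reconstructs the waveform directly from spike trains through learned filters; the masking strategy keeps the mixture's own time-frequency content wherever the model neurons responded, which is why it can preserve intelligibility and quality better.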
 

Attachments

  • IMG_1675.jpeg
  • Like
  • Love
  • Fire
Reactions: 11 users

Iseki

Regular
Hi JD,

For what it's worth, I asked ChatGPT for a neutral take on this.

What I thought was interesting was that it wove Anduril's headset into the response. I didn't prompt it to, but earlier in the day, in a separate chat, I had queried ChatGPT about board and integration possibilities and whether an Akida 1500 + Arm Cortex-M85 module (as an example) could power something like Anduril's helmet.




ChatGPT's Response


What “supports 128 MACs per node” most likely means​

  • Each Akida node has a local 4×4 MAC micro-engine (128 MACs) it can invoke for certain ops.
  • Those MACs co-exist with the spiking compute (CNP/FNP or equivalent). They’re there so models can include non-spiking layers/ops without leaving the chip.

Why add MACs if you have an SNN?​

Because many modern edge models are hybrid. MACs cover things SNNs don’t excel at (or that are simpler to drop in as-is):
  • Conv/pointwise (1×1) / depthwise conv blocks
  • Linear/projection layers (e.g., classifier heads, QKV in tiny attention)
  • Normalisation / pooling / residual glue
  • Pre/post-processing (e.g., short FIRs, feature projections)
  • Direct reuse of existing INT4/INT8 CNN chunks without full SNN conversion
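As an aside on the split just listed (this sketch is mine, not part of the quoted response, and it does not use BrainChip's actual MetaTF API or the real Akida datapath), here is a toy illustration of why the two styles of compute complement each other: an event-driven layer only touches weights for inputs that actually spiked, while a conventional dense head is a straight run of multiply-accumulates.

```python
# Toy contrast between event-driven (spiking) work and a dense MAC layer (illustrative only).
import numpy as np

def event_driven_layer(events, weights, threshold=1.0):
    """events: binary spike vector; accumulate weight rows only for active inputs, then re-threshold."""
    active = np.flatnonzero(events)                      # sparse, event-driven work
    if active.size == 0:
        return np.zeros(weights.shape[1])
    potentials = weights[active].sum(axis=0)             # one add per active input per output
    return (potentials >= threshold).astype(float)       # output spikes

def mac_projection_head(features, weights, bias):
    """Conventional dense layer: one multiply-accumulate per weight (what the local MACs cover)."""
    return features @ weights + bias

rng = np.random.default_rng(0)
spikes_in = (rng.random(64) > 0.9).astype(float)         # ~10% of inputs are active events
hidden = event_driven_layer(spikes_in, rng.normal(size=(64, 32)))
logits = mac_projection_head(hidden, rng.normal(size=(32, 10)), np.zeros(10))
```

The sparser the input activity, the less work the event-driven path does, while the dense head costs the same every time, which is the trade-off the hybrid layout is meant to exploit.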

What a hybrid pipeline might look like in a helmet​

  1. Sensors
    • Front/rear frame cameras → light MAC preproc (resize/1×1 conv).
    • Event camera/radar/audio → feed SNN/TENNs directly.
  2. Perception
    • SNN handles temporal/gesture/track (event streams).
    • MAC handles small CNN blocks or projection heads.
  3. Fusion & decisions on the host MCU (e.g., Cortex-M85).
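Another illustrative aside (again mine, not from the quoted response, and not real Akida or Anduril software): a trivial router showing the split the pipeline above describes, with frame sensors sent through the MAC/CNN path, event sensors through the spiking path, and a host MCU fusing the per-path outputs. Sensor names and path labels are made up for illustration.

```python
# Illustrative routing of sensor streams to processing paths, with host-side fusion.
EVENT_SENSORS = {"event_camera", "radar", "audio"}
FRAME_SENSORS = {"front_camera", "rear_camera"}

def route(sensor: str) -> str:
    """Return the processing path a given sensor stream would take."""
    if sensor in FRAME_SENSORS:
        return "MAC preproc (resize / 1x1 conv) -> small CNN block"
    if sensor in EVENT_SENSORS:
        return "SNN / TENNs (event-driven)"
    raise ValueError(f"unknown sensor: {sensor}")

def fuse_on_host(per_path_outputs: dict) -> dict:
    """Host-MCU step (e.g. a Cortex-M class core): merge per-path outputs into one record."""
    return {"decision": "combine", **per_path_outputs}

outputs = {route(s): f"{s} features" for s in ["front_camera", "event_camera"]}
fused = fuse_on_host(outputs)
```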

Bottom line​

  • The “128 MACs per node” doesn’t signal the end of Akida’s neuromorphic core. It signals a pragmatic hybrid: keep the spiking engine for temporal, sparse, event-driven strengths, and use local MACs to natively run conventional layers (conv/linear/norm) and reuse existing model chunks.
  • That combo generally improves accuracy, eases porting, and lowers total latency/power versus forcing everything into SNN or everything into classic CNN on a separate chip.
"
I asked ChatGPT for a neutral take on this.

What I thought was interesting, was that it weaved in Anduril's headset into the response."

That's how ChatGPT works. It uses all your previous conversations with it to personalize your answer.
 
  • Like
Reactions: 2 users
Top Bottom