Maybe of general interest:
I just heard an interesting R&D report on the radio about AI-improved noise cancelling. It was noted that other big companies are certainly developing this technology too.
The idea is to let selected sounds pass through the cancelling. Until now, there has been no system that lets the user individually set, define or train which sounds the AI should filter out. The headphones have to learn what the person wants to hear, without the cloud, obviously. To do this, the team uses the time differences between the sound source and the left and right earpieces. They solve it as follows: if the person wearing the noise-cancelling headphones turns their face towards the source they want to hear despite the suppression, the AI/electronics learns within about three seconds that this source is being targeted, because it recognises the arrival-time differences between left and right and lets that sound through.
So far this runs with an app on the smartphone. He also says that the team is working on small earbud-style headphones, which they want to introduce in about 6 to 8 months.
Up to now this is done on the phone, he said, but I can well imagine the neural network eventually running directly in the headphones, reducing latency even further.
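I don't have the team's code, but the look-to-select idea as described maps onto a standard signal-processing building block: estimating the left/right time difference of arrival by cross-correlating the two microphone signals; a source you are facing arrives at both ears almost simultaneously. Here is a minimal Python sketch of just that direction cue; the sample rate, frame length and the near-zero-delay threshold are my own assumptions, not values from the team, and the actual system presumably feeds this cue into its neural network rather than using a hard threshold.

```python
import numpy as np

SAMPLE_RATE = 48_000          # assumed sample rate
FRAME = 4_096                 # analysis frame of ~85 ms (assumed)

def interaural_delay(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate the left/right time difference of arrival in seconds
    by locating the peak of the cross-correlation of one frame."""
    corr = np.correlate(left[:FRAME], right[:FRAME], mode="full")
    lag = int(np.argmax(corr)) - (FRAME - 1)   # lag 0 is the middle sample
    return lag / SAMPLE_RATE

def source_is_in_front(left: np.ndarray, right: np.ndarray,
                       max_delay_us: float = 60.0) -> bool:
    """A source the wearer is facing reaches both ears almost simultaneously,
    so its interaural delay is near zero. 60 microseconds is an assumed tolerance."""
    return abs(interaural_delay(left, right)) * 1e6 < max_delay_us

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.standard_normal(FRAME)
    # Frontal source: identical arrival time at both ears.
    print(source_is_in_front(src, src.copy()))        # True -> enroll this source
    # Source off to one side: one ear hears it ~0.4 ms later.
    print(source_is_in_front(src, np.roll(src, 20)))  # False -> keep suppressing
```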
I'm on the road and my options for looking things up on the phone are limited, but it's Shyam Gollakota's team at the University of Washington.
KEYWORDS
Augmented hearing, auditory perception, spatial computing
______
Older status:
A team led by researchers at the University of Washington has developed deep-learning algorithms that let users pick which sounds filter through their headphones in real time. Either through voice...
www.washington.edu
_______
Shyam Gollakota is a professor at UW CSE focusing on mobile intelligence.
homes.cs.washington.edu
___
We build an end-to-end hardware system that integrates a noise-canceling headset (Sony WH-1000XM4), a pair of binaural microphones (Sonic Presence SP15C) with our real-time target speech hearing network running on an embedded IoT CPU (Orange Pi 5B).
We deploy our neural network on the embedded device by converting the PyTorch model into an ONNX model using a nightly PyTorch version (2.1.0.dev20230713+cu118), and we use the Python package onnxsim to simplify the resulting ONNX model.
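For what it's worth, the export-and-simplify step the excerpt describes looks roughly like the following sketch. The model, input shape and file names are placeholders of mine, not from the paper; only the general torch.onnx.export / onnxsim workflow is what the quote actually states.

```python
import torch
import onnx
from onnxsim import simplify

# Placeholder model standing in for the target speech hearing network.
model = torch.nn.Sequential(
    torch.nn.Conv1d(2, 16, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv1d(16, 2, kernel_size=3, padding=1),
).eval()

# Assumed input: one chunk of binaural audio (batch, channels, samples).
dummy_input = torch.randn(1, 2, 4_800)

# Export the PyTorch model to ONNX for the embedded runtime.
torch.onnx.export(
    model,
    dummy_input,
    "target_speech_hearing.onnx",
    input_names=["binaural_audio"],
    output_names=["enhanced_audio"],
    opset_version=17,
)

# Simplify the exported graph with onnxsim, as the excerpt mentions.
onnx_model = onnx.load("target_speech_hearing.onnx")
simplified, ok = simplify(onnx_model)
assert ok, "onnxsim could not validate the simplified model"
onnx.save(simplified, "target_speech_hearing_simplified.onnx")
```

The simplified ONNX file would then be run on the Orange Pi's CPU, presumably with ONNX Runtime or a similar inference engine, though the excerpt doesn't say which.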