Hi Bravo,
As always, I was ready to apply the wet blanket, and I would have, too, until Sean disclosed our algorithm product line:
US2022210586A1, AUTOMATIC TINNITUS MASKER FOR AN EAR-WEARABLE ELECTRONIC DEVICE (published 2022-06-30, Starkey):
[0075] …
the controller comprises, or is operatively coupled to, a processor configured with instructions to classify, via a first neural network, the acoustic environment of the wearer as a specified one of a plurality of disparate acoustic environments, and process one or more of the physiologic sensor signals, the non-physiologic sensor signals, and the contextual factor data, via a second neural network, to adjust the tinnitus masking sound produced by the sound generator using the one or more of the physiologic sensor signals, the non-physiologic sensor signals, and the contextual factor data, and parameter values associated with the specified acoustic environment.
[0080] Example Ex44. The device according to Ex43, wherein the neural network comprises one or more of a deep neural network (DNN), a feedforward neural network (FNN), a recurrent neural network (RNN), a long short-term memory (LSTM), gated recurrent units (GRU), light gated recurrent units (LiGRU), a convolutional neural network (CNN), and a spiking neural network.
[0095]
In accordance with any of the embodiments disclosed herein, the controller 120 can include, or be coupled to, a machine learning processor 124 configured to execute computer code or instructions (e.g., firmware, software) including one or more machine learning algorithms 126. The machine learning processor 124 is configured to process one or more of the physiologic sensor signals, non-physiologic sensor signals, microphone signals, and contextual factor data via one or more machine learning algorithms 126 to detect one or more of presence, absence, and severity of tinnitus of the wearer of the hearing device 100. Sensor, contextual factor data, and/or wearer input (e.g., manual overrides) received by the machine learning processor 124 are used to inform and refine one or more machine learning algorithms 126 executable by the machine learning processor 124 to automatically enhance and customize tinnitus detection and mitigation implemented by the hearing device 100 for a particular hearing device wearer.
This is looking very promising indeed!
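For anyone curious how the pieces in [0075] and [0095] might hang together, here's a minimal sketch of the two-network pipeline. To be clear, this is my own illustration, not Starkey's code: PyTorch, the class names, the feature dimensions, and the parameter table are all assumptions.

```python
# Sketch (not Starkey's implementation) of the pipeline in [0075]/[0095]:
# a first network classifies the acoustic environment, and a second network
# combines sensor/contextual features with that environment's parameter
# values to adjust the masking sound. All names/dimensions are made up.
import torch
import torch.nn as nn

N_ENVIRONMENTS = 6        # e.g. quiet, speech, speech-in-noise, music, wind, machine noise
N_SENSOR_FEATURES = 16    # pooled physiologic + non-physiologic + contextual features
N_MASKER_PARAMS = 4       # e.g. level, center frequency, bandwidth, modulation depth


class EnvironmentClassifier(nn.Module):
    """First network: classify the wearer's acoustic environment from
    a log-mel spectrogram of the microphone signal."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, N_ENVIRONMENTS)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, n_mels, time) -> (batch, N_ENVIRONMENTS) logits
        return self.head(self.features(spec).flatten(1))


class MaskerAdjuster(nn.Module):
    """Second network: map sensor/contextual features plus the
    per-environment parameter values to adjusted masker settings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_SENSOR_FEATURES + N_MASKER_PARAMS, 32), nn.ReLU(),
            nn.Linear(32, N_MASKER_PARAMS),
        )

    def forward(self, sensors: torch.Tensor, env_params: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([sensors, env_params], dim=-1))


# Lookup table of default masker parameters per acoustic environment
# (placeholder values, not from the patent).
env_param_table = torch.randn(N_ENVIRONMENTS, N_MASKER_PARAMS)

classifier, adjuster = EnvironmentClassifier(), MaskerAdjuster()

# One inference step with dummy inputs standing in for real signals.
spectrogram = torch.randn(1, 1, 40, 100)          # microphone features
sensor_feats = torch.randn(1, N_SENSOR_FEATURES)  # heart rate, motion, time of day, ...

env = classifier(spectrogram).argmax(dim=-1)      # specified acoustic environment
params = env_param_table[env]                     # parameter values for that environment
masker_settings = adjuster(sensor_feats, params)  # adjusted masking-sound settings

# Per [0095], wearer input (e.g., manual overrides) can be logged as
# training examples to refine the adjuster over time (sketch only).
override = masker_settings + 0.1                  # pretend the wearer nudged the level up
training_example = (sensor_feats, params, override)
print(env.item(), masker_settings)
```

Per Ex44, either network could just as well be an RNN/GRU/LSTM variant; the interesting part is the split: one model names the environment, the other personalizes the masker within it.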
Starkey CEO Brandon Sawalich Talks New Edge AI Hearing Aid In New Interview
Steven Aquino
Oct 23, 2024, 02:34pm EDT
The hearing aid market truly is having a banger of a last few weeks.
Earlier this month, hearing aid maker Starkey announced its all-new Edge AI hearing aid.
On its website, the company describes it as using “cutting-edge technology [that] mimics the brain’s auditory cortex” in an effort to repair the so-called “broken process” that occurs in the brain’s auditory cortex when someone has hearing loss. According to Starkey, artificial intelligence helps classify complex soundscapes, enhance speech, and reduce noise, all in real time. Like the other hearing aids in its fleet, Starkey’s Edge AI hearing aid integrates with the company’s My Starkey companion app on iOS and Android, as well as the newly released version on watchOS for Apple Watch.
In a recent interview with me conducted over email, Starkey president and CEO Brandon Sawalich explained the company is steadfastly committed to “innovating our technology as quickly as science will allow” with the overarching goal of “always [pushing] the edge of what’s possible.” The Edge AI, he told me, is the next generation of “intelligent hearing technology and far surpasses anything else on the market today.” The company’s work in the AI area dates back to 2018, a time when AI wasn’t nearly as en vogue and top of mind as it is today. Since those headier days, Sawalich boasted Starkey has cemented itself as a leader in the industry when it comes to meshing hearing aids with AI.
“Whether someone has a mild loss or a severe one, they are tech-savvy or prefer a hands-off experience, Edge AI provides better hearing for all—with a long list of hearing health features to help anyone live a better, healthier and more full life,” Sawalich said.
“With Edge AI, we want people to be the best they can be each and every day.”
In technical terms, Sawalich said Edge AI builds on the success of Starkey’s Genesis AI technology. (I interviewed him and chief hearing officer Dave Fabry about Genesis AI earlier this year.) He said Starkey’s new Neuro Sound Technology 2.0 includes what he called the Deep Neural Network Enhanced Sound Manager.
The company’s AI technology, he added, is “always on” and “30% more accurate” compared to previous versions of the technology at detecting speech. Moreover, he said Starkey’s enhanced its Edge Mode+ feature such that it uses the aforementioned neural network to “provide an added boost in any listening situation.” The augmentation prioritizes clearer speech or listening comfort, whichever the wearer prefers. Edge Mode+ automatically scans for, and adapts to, the user’s changes in environment, Sawalich said to me.
Additionally, Sawalich said Starkey also gave a boost to its onboard digital assistant. The feature, he said, enables people to use their voice to interact with the assistant through the hearing aid itself. The software can answer questions on topics such as the day’s weather and more.
All told, Sawalich told me Edge AI has Starkey’s “most advanced processor.” Furthermore, thanks to the company’s continued development of its dedicated NPU, or neural processing unit, Starkey can deliver better performance without commensurately compromising battery life. The company, Sawalich said, is really proud to boast “industry-leading” battery life of more than 51 hours.
Sawalich talked about the new Apple Watch app, saying there’s value in having the ability to control things like the hearing aid’s volume right from one’s wrist. Similarly, he said Edge AI users can do computer-y tasks like take calls, stream music and podcasts, and even enjoy real-time translation of 78 languages alongside the My Starkey app.
When asked about feedback, Sawalich said Starkey tapped 560 patients to test Edge AI and give feedback, all in the name of “[confirming] we were delivering the very best hearing care before even one hearing aid went out the door.”
People have been “blown away” by the difference Edge AI has made, with Sawalich saying sounds are clearer and crisper. The technology, he added, “allows them to hear more of their surroundings so it’s a full sound experience.” What’s more, Sawalich said some testers have reported being able to hear things they heretofore couldn’t. Plus, the battery life (everybody loves better battery life) and waterproofing are appreciated for everyday usage in different places.
Sawalich is exceedingly proud of his team’s work. He said Edge AI features what he described as “the world’s first and only use of sensors onboard the hearing aid to perform an accurate balance assessment.” The functionality, on which Starkey collaborated with Stanford University to assess the algorithms’ accuracy, is a great addition because, as Sawalich told me, it helps identify and manage balance issues before falls become recurring for people who are prone to them. Starkey and Stanford concluded through their research that the algorithms are “comparable to that of a trained clinician,” Sawalich said. He went on to say Starkey was the first hearing aid manufacturer to add 3D sensors for health and wellness tracking. That work is ongoing, as Sawalich told me the tracker system now covers many different types of activities.
“We lead the industry in incorporating overall health and wellness into the hearing aid,” Sawalich said.