Here's a LinkedIn post from Cerence two days ago about the Mercedes "wake-up word".
If Mercedes do continue working with Cerence in the future, doesn't that by default mean we'll eventually be working with them together? (Likewise with all the other companies whose technology is intertwined with the Mercedes infotainment/voice-control side of things, such as OpenAI's ChatGPT, Apple CarPlay and Google Maps.)
After all, Magnus Östberg confirmed in his LinkedIn post from a month ago that Mercedes are determined to be the first in the automotive industry to use neuromorphic computing.
Ahhhhh, remember the good ol' days when every second word I uttered was "Cerence".
Cerence AI on LinkedIn: #heymercedes #wearecerence
"A cool moment for those who are passionate about advancing AI- and voice-powered interaction in the car: Mercedes-Benz' MBUX wake-up word #HeyMercedes was…"
www.linkedin.com
https://www.linkedin.com/in/magnus-%C3%B6stberg/recent-activity/all/
Cerence's canoe was, at least until recently, paddleless up the scatological creek.
Their voice-recognition system was internet-based. This patent is a continuation of an application filed in 2013, updated as recently as 2022:
US11676600B2
[0002] This application is a continuation of U.S. application Ser. No. 15/238,238, filed Aug. 16, 2016, which is a continuation of U.S. application Ser. No. 13/795,933, filed Mar. 12, 2013, the disclosures of which are hereby incorporated in their entirety by reference herein.
1. A mobile device comprising:
at least one input configured to receive acoustic input from an environment of the mobile device while the mobile device is operating in a low power mode; and
at least one processor configured to
perform, while in the low power mode, at least one processing stage on the acoustic input to evaluate whether the acoustic input includes a voice command, the at least one processing stage including
transmitting at least a portion of the acoustic input from the mobile device to an automatic speech recognition (ASR) server via a network for processing by the ASR server to convert the at least a portion of the acoustic input into a text, and
transmitting the text from the ASR server to a natural language processing (NLP) server for processing by the NLP server to determine whether the text includes a voice command; and
responsive to receiving the voice command from the NLP server, initiate responding to the voice command.
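Walking through claim 1: the device buffers audio in low-power mode, ships it off to an ASR server to get text back, forwards that text to an NLP server to decide whether it's a command, and only then acts. A minimal sketch of those two network hops in Python; the endpoint URLs, JSON shapes and handler names below are my own placeholders, not anything from the patent:

```python
# Sketch of the two-server pipeline described in claim 1.
# Endpoints and payload formats are hypothetical placeholders.
import json
import urllib.request

ASR_SERVER = "https://asr.example.com/recognize"   # hypothetical
NLP_SERVER = "https://nlp.example.com/interpret"   # hypothetical

def post_json(url: str, payload: dict) -> dict:
    """POST a JSON payload and return the decoded JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def initiate_response(command: str) -> None:
    """Hypothetical stand-in for acting on a recognised command."""
    print(f"executing voice command: {command}")

def handle_acoustic_input(audio_bytes: bytes) -> None:
    # Hop 1: device -> ASR server; audio in, text out.
    text = post_json(ASR_SERVER, {"audio": audio_bytes.hex()})["text"]
    # Hop 2: text -> NLP server, which decides if it contains a command.
    result = post_json(NLP_SERVER, {"text": text})
    if result.get("is_command"):
        initiate_response(result["command"])
```

Note that every wake-word check here rides on a network round trip, which is exactly the dependency that edge processing removes.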
Even as recently as 2021, Cerence were still hedging their bets on in-car vs. web-based voice processing, with nothing more than a nod to neural networks:
US2022343906A1 FLEXIBLE-FORMAT VOICE COMMAND (2021-04-26)
[0066] Referring to FIG. 3, in one implementation, the system is integrated into a vehicle 310, with the microphone 101, camera 103, etc., monitoring the driver (not shown). The components illustrated in FIG. 1 are hosted on a computer 312 (i.e., an embedded computation system with a general purpose or special purpose processor), which may include non-transitory storage holding instructions to be executed by a processor of the computer to implement the functions described above. In some examples, the computer 312 connects to a user's smartphone 314, which may host the assistant or one of the sub-assistants known to the system. In some examples, the smartphone 314 provides a communication link via a communication system 320 (e.g., via a cellular telephone system, e.g., a "5G" system) to a remote assistant 330, such as a home-based assistant in the user's home remote from the vehicle. It should be understood that certain functions may be implemented in dedicated hardware rather than being performed by a processor, for example, with audio processing being performed by a hardware-based (i.e., dedicated circuitry) component. In some implementations, some or all of the functions that might be performed by the computer 312 are performed by a remote server computer in communication with components hosted in the vehicle.
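Boiled down, [0066] describes a dispatch choice: run the assistant on the embedded computer, hand off to the user's smartphone, or ride the cellular link out to a remote assistant. A rough sketch of that routing; the names and the priority order are my own reading, not the patent's:

```python
from enum import Enum, auto

class Host(Enum):
    EMBEDDED = auto()    # computer 312 in the vehicle
    SMARTPHONE = auto()  # smartphone 314, may host a sub-assistant
    REMOTE = auto()      # remote assistant 330 via the cellular link

def pick_host(embedded_ok: bool, phone_linked: bool, network_up: bool) -> Host:
    """Crude stand-in for the routing implied by [0066]."""
    if embedded_ok:
        return Host.EMBEDDED      # prefer local: no round trip
    if phone_linked:
        return Host.SMARTPHONE    # phone-hosted sub-assistant
    if network_up:
        return Host.REMOTE        # fall back to the cloud
    raise RuntimeError("no assistant available")
```

The tell is that the cloud path is still a first-class citizen: "some or all of the functions" may run remotely.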
How to do wake-word and speech recognition at the edge?
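Before the article, here's roughly what the edge alternative looks like: an always-on front end gates frames by energy, so silence costs next to nothing, and only live frames reach a compact keyword classifier. A toy sketch; the classifier is a stub, where a real system would run a small trained keyword-spotting network (e.g., on something like BrainChip's Akida):

```python
import numpy as np

FRAME_LEN = 512      # ~32 ms per frame at 16 kHz
ENERGY_GATE = 1e-3   # arbitrary threshold; tune per microphone

def frames(audio: np.ndarray):
    """Yield fixed-length frames from a mono float signal."""
    for start in range(0, len(audio) - FRAME_LEN + 1, FRAME_LEN):
        yield audio[start:start + FRAME_LEN]

def is_wake_word(frame: np.ndarray) -> bool:
    """Stub for a tiny on-device keyword classifier (hypothetical)."""
    return False  # replace with a trained keyword-spotting model

def listen(audio: np.ndarray) -> bool:
    for frame in frames(audio):
        # Cheap energy gate: most silent frames never reach the
        # classifier, which is where the low-power budget comes from.
        if np.mean(frame ** 2) < ENERGY_GATE:
            continue
        if is_wake_word(frame):
            return True
    return False
```

No network hop anywhere in that loop, which is the whole point.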
In-Car Voice Makes Itself Heard at CES | PYMNTS | January 4, 2022
https://brainchip.com/in-car-voice-makes-itself-heard/
Before CES even opened to the industry, in-car voice technology was making itself heard. On Monday (Jan. 3), the first of two media days preceding the opening of the show, connected mobility supplier Cerence received an award for its in-car voice assistant, Mercedes-Benz released details about two voice technologies featured on its new prototype electric car and the Consumer Technology Association (CTA) projected that auto tech will grow 7% in 2022.
These companies are among more than 200 from the transportation and vehicle technology industry — a record number for the event — represented at this year’s edition of the annual tech event. A total of 2,200 companies are taking part in person or in the event’s digital venues.
Proactively Delivering Information to Drivers
On Monday, Cerence received a CES 2022 Innovation Award for Cerence Co-Pilot, its new in-car voice assistant that is powered by artificial intelligence (AI).
The assistant not only responds to voice commands, but also uses data from the car’s sensors to understand situations inside and outside the vehicle and proactively deliver information when it’s needed. For example, as the vehicle nears the driver’s home, Cerence Co-Pilot may ask if they’d like it to initiate a smart home routine. This in-car voice assistant also integrates with cloud services.
“AI is deeply fundamental to the future of mobility, and we see our role as critical, not only in bringing convenient, enjoyable and safe experiences to drivers, but also giving OEMs the ability to maintain control of their brands and data while still giving drivers the secure, seamless and personalized connected experiences they want,” Cerence CEO Stefan Ortmanns said in a press release.
Sounding Impressively Real, Natural and Intuitive
On the same day, Mercedes-Benz previewed two voice technologies that will be displayed on its VISION EQXX, a research prototype car featuring an electric drivetrain and advanced software. The automaker says this prototype demonstrates its transformation into “an all-electric and software-driven company.”
One voice technology featured on the VISION EQXX makes it “fun to talk to,” Mercedes-Benz says. Developed in collaboration with voice synthesis experts Sonantic and with the help of machine learning, this version of the “Hey Mercedes” voice assistant has a distinctive character and personality.
“As well as sounding impressively real, the emotional expression places the conversation between driver and car on a completely new level that is more natural and intuitive,” Mercedes-Benz said in a press release.
The second voice-related technology previewed by Mercedes-Benz features neuromorphic computing, a form of information processing that reduces energy consumption, and AI software and hardware from BrainChip that is five to 10 times more efficient than conventional voice control.
“Although neuromorphic computing is still in its infancy, systems like these will be available on the market in just a few years,” Mercedes-Benz said. “When applied on scale throughout a vehicle, they have the potential to radically reduce the energy needed to run the latest AI technologies.”
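And for what "neuromorphic" means in practice here: spiking networks only do work when a spike arrives, so mostly-silent audio is mostly free. A minimal leaky integrate-and-fire neuron to show the event-driven idea; the parameters are illustrative, not Akida's:

```python
import numpy as np

def lif_neuron(spikes_in: np.ndarray, weight: float = 0.6,
               leak: float = 0.9, threshold: float = 1.0) -> np.ndarray:
    """Leaky integrate-and-fire neuron over a binary input spike train."""
    v = 0.0
    spikes_out = np.zeros_like(spikes_in)
    for t, s in enumerate(spikes_in):
        v = v * leak + weight * s   # integrate input, leak over time
        if v >= threshold:          # fire and reset
            spikes_out[t] = 1
            v = 0.0
    return spikes_out

# Sparse input -> sparse output: no spikes in, (almost) no work done.
rng = np.random.default_rng(0)
inp = (rng.random(100) < 0.05).astype(float)  # ~5% of steps carry a spike
print(int(lif_neuron(inp).sum()), "output spikes from", int(inp.sum()), "input spikes")
```

Sparse in, sparse out: that's where the claimed energy savings come from.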