Not sure of the date on this article, but it suggests DNN running both embedded and in the cloud?
iot-automotive.news
CERENCE INTRODUCES NEW FEATURES IN CERENCE DRIVE, THE WORLD'S LEADING TECHNOLOGY AND SOLUTIONS PORTFOLIO FOR AUTOMAKERS AND CONNECTED CARS
New capabilities such as enhanced voice recognition and synthetic speech serve as the foundation for a safer, more enjoyable journey for everyone
BURLINGTON, Mass. – Cerence Inc., AI for a world in motion, today introduced new innovations in Cerence Drive, its technology and solutions portfolio for automakers and IoT providers to build high-quality, intelligent voice assistant experiences and speech-enabled applications. Cerence Drive today powers AI-based, voice-enabled assistants in approximately 300 million cars from nearly every major automaker in the world, including Audi, BMW, Daimler, Ford, Geely, GM, SAIC, Toyota, and many more.
The Cerence Drive portfolio offers a distinct, hybrid approach with both on-board and cloud-based technologies that include voice recognition, natural language understanding (NLU), text-to-speech (TTS), speech signal enhancement (SSE), and more. These technologies can be deployed and tightly integrated with the wide variety of systems, sensors and interfaces found in today's connected cars. The latest version of Cerence Drive includes a variety of new features to elevate the in-car experience:
> Enhanced, active voice recognition and assistant activation that go beyond standard push-to-talk buttons and wake-up words. The voice assistant is always listening for a relevant utterance, question or command, much like a personal assistant would, creating a more natural experience. In addition, Cerence's voice recognition can run throughout the car, both embedded and in the cloud, distributing the technical load and delivering a faster user experience for drivers (a sketch of one possible embedded/cloud arbitration scheme follows this list).
> New, deep neural net (DNN)-based NLU engine built on one central technology stack, with 23 languages available both embedded and in the cloud. This streamlined approach sets a new standard for scalability and flexibility between embedded and cloud applications and domains, enabling simpler integration, faster innovation, and a more seamless in-car experience regardless of connectivity (a toy intent-classification sketch follows this list).
> TTS and synthetic voice advancements that deliver new customizations, including a non-gender-specific voice for the voice assistant and emotional output, which lets automakers adjust an assistant's speaking style to the information being delivered or to a specific situation. In addition, the introduction of deep learning delivers a more natural, human-like voice at an affordable computational footprint (a style-selection sketch follows this list).
> Improved, more intelligent speech signal enhancement that includes multi-zone processing with quick and simple speaker identification; passenger interference cancellation that blocks out background noise as well as voices from others in the car; and a deep neural net-based approach for greater noise suppression and better communication (a mask-based suppression sketch follows this list).
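To make the embedded/cloud split in the first item concrete, here is a minimal sketch of how an arbiter might race an on-board recognizer against a cloud one and take the first sufficiently confident answer. Everything here (class names, thresholds, the race itself) is an assumption for illustration; the release does not describe Cerence's actual APIs or arbitration logic.

```python
import concurrent.futures

# Hypothetical recognizers standing in for embedded and cloud engines;
# nothing here reflects Cerence's real (unpublished) interfaces.
class EmbeddedASR:
    def recognize(self, audio):
        # On-board model: low latency, works with no connectivity.
        return {"text": "navigate home", "confidence": 0.82, "source": "embedded"}

class CloudASR:
    def recognize(self, audio):
        # Cloud model: larger vocabulary, but needs a network round trip.
        return {"text": "navigate home", "confidence": 0.95, "source": "cloud"}

def hybrid_recognize(audio, timeout_s=1.5, min_confidence=0.75):
    """Run both engines in parallel; return the first confident result.

    If the cloud is slow or unreachable, the embedded result still comes
    back, so recognition keeps working regardless of connectivity.
    """
    engines = [EmbeddedASR(), CloudASR()]
    best = None
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(e.recognize, audio) for e in engines]
        try:
            for done in concurrent.futures.as_completed(futures, timeout=timeout_s):
                try:
                    result = done.result()
                except Exception:
                    continue  # e.g. network failure on the cloud path
                if result["confidence"] >= min_confidence:
                    return result
                if best is None or result["confidence"] > best["confidence"]:
                    best = result
        except concurrent.futures.TimeoutError:
            pass  # keep whatever finished in time
    return best
```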
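For the DNN-based NLU item, the release gives no model details, so the sketch below only shows the shape of the task: a tiny feedforward network mapping an utterance embedding to one of a few in-car intents. The architecture, intent set, weights, and embedding function are all illustrative assumptions, not Cerence's engine.

```python
import numpy as np

# Illustrative in-car intents; the real domain set is not published.
INTENTS = ["navigation", "media", "climate", "phone"]

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)            # toy, untrained weights
W2, b2 = rng.normal(size=(8, len(INTENTS))), np.zeros(len(INTENTS))

def embed(utterance):
    # Stand-in for a learned text encoder: hash words into a fixed vector.
    vec = np.zeros(16)
    for word in utterance.lower().split():
        vec[hash(word) % 16] += 1.0
    return vec

def classify_intent(utterance):
    """Tiny feedforward NLU pass: embedding -> hidden layer -> softmax."""
    h = np.maximum(embed(utterance) @ W1 + b1, 0.0)       # ReLU
    logits = h @ W2 + b2
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return INTENTS[int(probs.argmax())], float(probs.max())
```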
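The TTS item describes adjusting the assistant's speaking style to the situation. A real neural TTS engine would condition synthesis on a learned style embedding; the sketch below only shows the selection logic in front of such an engine, with made-up style presets.

```python
# Hypothetical prosody presets; real style conditioning would be a learned
# embedding fed into the neural TTS model, not a lookup table.
STYLES = {
    "navigation": {"rate": 1.0,  "pitch": 0.0, "energy": "neutral"},
    "warning":    {"rate": 1.1,  "pitch": 0.2, "energy": "urgent"},
    "smalltalk":  {"rate": 0.95, "pitch": 0.1, "energy": "warm"},
}

def speak(text, situation, tts_engine):
    """Pick a speaking style for the situation, then hand off to the engine.

    tts_engine is any callable taking (text, style); it stands in for a
    deep-learning synthesizer here.
    """
    style = STYLES.get(situation, STYLES["navigation"])
    return tts_engine(text, style)
```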
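Finally, for the speech signal enhancement item: a widely published approach to DNN noise suppression (not necessarily what Cerence ships) has a network estimate a time-frequency mask that is multiplied onto the noisy spectrogram. A minimal sketch, with the trained network replaced by an arbitrary callable:

```python
import numpy as np

def suppress_noise(noisy_stft, mask_model):
    """Mask-based suppression: keep speech-dominated bins, attenuate the rest.

    noisy_stft: complex STFT of the noisy mix, shape (frames, freq_bins).
    mask_model: any callable mapping magnitudes to values in [0, 1] of the
                same shape; it stands in for a trained network here.
    """
    magnitudes = np.abs(noisy_stft)
    mask = np.clip(mask_model(magnitudes), 0.0, 1.0)  # 1 = keep, 0 = suppress
    return mask * noisy_stft  # scale each bin; the noisy phase is reused
```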
"Improving the experience for drivers and creating curated technology that feels unique and harmonious with our partners' brands have been true motivators since we started our new journey as Cerence, and that extends to our latest innovations in Cerence Drive," said Sanjay Dhawan, CEO, Cerence. "Cerence Drive, our flagship offering, is the driving force behind our promise of a truly moving in-car experience for our customers and their drivers, and our new innovations announced today are core to making that mission a reality."
Cerence Drive's newest features are available now for automakers worldwide. To learn more about Cerence Drive, visit www.cerence.com/solutions.
There's also a 2022 PDF spiel on their overall solutions package, HERE.
Guess we have to remember we also have a patent, granted in 2018, on a neuromorphic application via PVDM.
US-10157629-B2 - Low Power Neuromorphic Voice Activation System and Method
Abstract
The present invention provides a system and method for controlling a device by recognizing voice commands through a spiking neural network. The system comprises a spiking neural adaptive processor receiving an input stream forwarded from a microphone through a decimation filter and then an artificial cochlea. The spiking neural adaptive processor further comprises a first spiking neural network and a second spiking neural network. The first spiking neural network checks for voice activity in the output spikes received from the artificial cochlea. If any voice activity is detected, it activates the second spiking neural network and passes the artificial cochlea's output spikes to it; the second network is configured to recognize spike patterns indicative of specific voice commands. If the first spiking neural network does not detect any voice activity, it halts the second spiking neural network.
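The abstract describes a simple power-saving control flow: a small always-on spiking network does voice-activity detection on the cochlea's spike stream and only wakes the larger command-recognition network when something is heard. A minimal control-flow sketch of that gating (the classes and the toy spike rules are placeholders; the patent publishes no code):

```python
class SpikingNetworkStub:
    """Placeholder for a spiking neural network; real SNN dynamics omitted."""

    def __init__(self, role):
        self.role = role
        self.active = False

    def detect_activity(self, spikes):
        # First network: cheap, always-on voice-activity check on cochlea spikes.
        return any(spikes)

    def recognize(self, spikes):
        # Second network: full spike-pattern matching for specific commands.
        return "lights_on" if sum(spikes) > 3 else None  # toy stand-in rule

def process_frame(cochlea_spikes, vad_net, command_net):
    """Microphone -> decimation filter -> artificial cochlea -> this function."""
    if vad_net.detect_activity(cochlea_spikes):
        command_net.active = True        # voice detected: wake the second SNN
        return command_net.recognize(cochlea_spikes)
    command_net.active = False           # silence: halt the second SNN
    return None

# Usage: one frame of binary spikes from the artificial cochlea.
vad = SpikingNetworkStub("voice-activity")
cmd = SpikingNetworkStub("command-recognition")
print(process_frame([0, 1, 1, 1, 1, 0], vad, cmd))  # -> "lights_on"
```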