BRN Discussion Ongoing

Damo4

Regular
12 months of this, and the board will be vacated for re-election. Is that what you want? I don't.
Deals aren't made on the golf course. They are made when they see a market leader buy your product, and followers follow.

Red days are hard, aren't they?

Don't worry, Sean has time to attend these important events and also lead his teams.

 
  • Haha
  • Like
Reactions: 8 users

HopalongPetrovski

I'm Spartacus!
12 months of this, and the board will be vacated for re-election. Is that what you want? I don't.
Deals aren't made on the golf course. They are made when they see a market leader buy your product, and followers follow.
You're projecting.
Of course there is more to building and running a successful business than what Sean will be doing at this event.
But I also think you are discounting its value and I'll disagree with you about deals and golf courses, dinners and hand shakes.
All sorts of launches, unveilings and rubber hitting the road are already scheduled between now and the next AGM, let alone what's coming that we are not yet privy to.
And NO.
I didn't vote for the spill that just happened, and also spoke against it here, both before and after the event.
As for market leaders, I think Renesas and Megachips are a pretty good start, let alone the host of others revealed over the past few months.
 
  • Like
  • Love
  • Fire
Reactions: 18 users

wilzy123

Founding Member
12 months of this, and the board will be vacated for re-election. Is that what you want? I don't.
Deals aren't made on the golf course. They are made when they see a market leader buy your product, and followers follow.
This is an excellent post. Thank you.

 
  • Haha
  • Like
  • Fire
Reactions: 14 users

Dr E Brown

Regular
Ok, I’m getting incredibly confused now. Is somebody able to explain to me how this impacts the Akida benefits spruiked by Mercedes last year, please?
 
  • Like
  • Thinking
  • Fire
Reactions: 14 users

robsmark

Regular
Ok, I’m getting incredibly confused now. Is somebody able to explain to me how this impacts the Akida benefits spruiked by Mercedes last year, please?

I’m not really sure what you’re hoping for in a response here… one poster could state that it wouldn’t impact anything, and another could state that it would impact everything, but the reality is that nobody has a clue!

The company hasn’t even announced Mercedes as a customer yet and people think it’s a sure thing. Until we receive a commercial deal announcement, it should be considered as nothing.
 
  • Like
  • Fire
  • Thinking
Reactions: 17 users

Dr E Brown

Regular
I’m not really sure what you’re hoping for in a response here… one poster could state that it wouldn’t impact anything, and another could state that it would impact everything, but the reality is that nobody has a clue!

The company hasn’t even announced Mercedes as a customer yet and people think it’s a sure thing. Until we receive a commercial deal announcement, it should be considered as nothing.
Robsmark, it is a genuine question. You are correct that Mercedes are not announced as a customer. However, they are in the trusted-by section of our website, which implies using Akida for future opportunities. They have openly stated that using Akida for what I understand to be this system reduces power consumption by 5x, or words to that effect.
My question is: are Cerence using Akida in their offering, or is there a pivot, or am I confusing apples and oranges?

Perhaps it is my lack of understanding that is leading to my confusion. The tweet and information at least gave me cause for pause. I honestly need help to understand if there is anything here or not.
And given all the continued chat about a Mercedes announcement by the top dogs in the company, should I really be considering that as nothing?!
 
  • Like
  • Fire
  • Love
Reactions: 23 users

Iseki

Regular
You're projecting.
Of course there is more to building and running a successful business than what Sean will be doing at this event.
But I also think you are discounting its value and I'll disagree with you about deals and golf courses, dinners and hand shakes.
All sorts of launches, unveilings and rubber hitting the road are already scheduled between now and the next AGM, let alone what's coming that we are not yet privy to.
And NO.
I didn't vote for the spill that just happened, and also spoke against it here, both before and after the event.
As for market leaders, I think Renesas and Megachips are a pretty good start, let alone the host of others revealed over the past few months.
Sure, let them play golf. That'll really get the shareholders going.
Either that or he can tell us how far off the new chip is going to be. Why the secrecy when silence creates malaise?
 
  • Like
  • Haha
Reactions: 5 users

TopCat

Regular
  • Like
  • Fire
Reactions: 6 users

HopalongPetrovski

I'm Spartacus!
Sure, let them play golf. That'll really get the shareholders going.
Either that or he can tell us how far off the new chip is going to be. Why the secrecy when silence creates malaise?
Ahhhhh, I see, you're looking for an argument.
That room is three doors on the left down the hall. 🤣
 
  • Haha
  • Like
  • Love
Reactions: 22 users

robsmark

Regular
Robsmark, it is a genuine question. You are correct that Mercedes are not announced as a customer. However, they are in the trusted-by section of our website, which implies using Akida for future opportunities. They have openly stated that using Akida for what I understand to be this system reduces power consumption by 5x, or words to that effect.
My question is: are Cerence using Akida in their offering, or is there a pivot, or am I confusing apples and oranges?

Perhaps it is my lack of understanding that is leading to my confusion. The tweet and information at least gave me cause for pause. I honestly need help to understand if there is anything here or not.
And given all the continued chat about a Mercedes announcement by the top dogs in the company, should I really be considering that as nothing?!
I stand by my comment, until we have a signed deal then it’s nothing. Chat, partnerships, discussions, articles, and tweets mean diddly squat unless they sign, start using Akida, and revenue starts flowing. Until then they have zero commitment towards Brainchip. That’s just the way it is.
 
  • Like
  • Love
  • Thinking
Reactions: 30 users

manny100

Top 20
I’m not really sure what you’re hoping for in a response here… one poster could state that it wouldn’t impact anything, and another could state that it would impact everything, but the reality is that nobody has a clue!

The company hasn’t even announced Mercedes as a customer yet and people think it’s a sure thing. Until we receive a commercial deal announcement, it should be considered as nothing.
Check the About/Investor relations area of the BRN website.
BRN list 7 reasons under Why Invest.
One of them is:
"Marquee brands include Mercedes, Valeo, Vorago, and NASA, and commercial IP licenses with Renesas and MegaChips.
Commercial availability of semiconductor chips, IP, tools, and boards."
They would not be including these companies on their website 'in writing' if there was no connection.
They are all subject to NDAs, and from engagement to revenue is a fair time.
Patience is the key.
My bold.
 
  • Like
  • Love
  • Fire
Reactions: 38 users

Damo4

Regular
Robsmark, it is a genuine question. You are correct that Mercedes are not announced as a customer. However, they are in the trusted-by section of our website, which implies using Akida for future opportunities. They have openly stated that using Akida for what I understand to be this system reduces power consumption by 5x, or words to that effect.
My question is: are Cerence using Akida in their offering, or is there a pivot, or am I confusing apples and oranges?

Perhaps it is my lack of understanding that is leading to my confusion. The tweet and information at least gave me cause for pause. I honestly need help to understand if there is anything here or not.
And given all the continued chat about a Mercedes announcement by the top dogs in the company, should I really be considering that as nothing?!

Push-to-talk buttons and wake-up words have long been the standard ways that users begin their conversations with their virtual assistants, both in the car and in the home. Cerence JustTalk revolutionizes interactions with in-car assistants, leveraging AI-powered technology that has the intelligence to detect when drivers are talking to the assistant – and to keep quiet when they’re not. With Cerence JustTalk, drivers can simply begin speaking when they have a question or command for the assistant. They don’t need to push a button or say “Hey;” the system knows when it’s being summoned based on the words uttered, how the driver spoke them, and the context of the conversation. Cerence JustTalk’s intelligent system capabilities also make use of additional sensor data and contextual information that enables it to avoid awakening the assistant during a conversation amongst the car’s inhabitants.

“Being the architects of MB.OS as well as the MBUX voice assistant, we are able to utilize innovative technology such as JustTalk to bring the intuitiveness of using voice as first interaction modality to the next level,” said Andreas Biehl, Senior Manager Voice Assistant, Mercedes-Benz AG.

With numerous patents in the field of voice assistant activation, Cerence is uniquely positioned to deliver this innovation. Building on its core competencies in conversational AI and its more than two decades of experience in automotive and mobility user interfaces, Cerence is uniquely positioned to create this intelligent and natural way of activating the in-car assistant – an innovation that no other voice system offers in this form



It's pretty clear isn't it? The current MBUX is using Cerence JustTalk.
Akida was only in the EQXX and was a "Hey Mercedes" wake function.
Some people here theorised that the "always on" function of Akida 1.5 could be used for such a thing, but I think that's put to bed with this statement:
an innovation that no other voice system offers in this form

Also, regarding the trusted-by section of the website: it could just mean those who found some solution with Akida, i.e. the "Hey Mercedes" function.
 
  • Like
  • Thinking
Reactions: 11 users

Damo4

Regular
Push-to-talk buttons and wake-up words have long been the standard ways that users begin their conversations with their virtual assistants, both in the car and in the home. Cerence JustTalk revolutionizes interactions with in-car assistants, leveraging AI-powered technology that has the intelligence to detect when drivers are talking to the assistant – and to keep quiet when they’re not. With Cerence JustTalk, drivers can simply begin speaking when they have a question or command for the assistant. They don’t need to push a button or say “Hey;” the system knows when it’s being summoned based on the words uttered, how the driver spoke them, and the context of the conversation. Cerence JustTalk’s intelligent system capabilities also make use of additional sensor data and contextual information that enables it to avoid awakening the assistant during a conversation amongst the car’s inhabitants.

“Being the architects of MB.OS as well as the MBUX voice assistant, we are able to utilize innovative technology such as JustTalk to bring the intuitiveness of using voice as first interaction modality to the next level,” said Andreas Biehl, Senior Manager Voice Assistant, Mercedes-Benz AG.

With numerous patents in the field of voice assistant activation, Cerence is uniquely positioned to deliver this innovation. Building on its core competencies in conversational AI and its more than two decades of experience in automotive and mobility user interfaces, Cerence is uniquely positioned to create this intelligent and natural way of activating the in-car assistant – an innovation that no other voice system offers in this form



It's pretty clear isn't it? The current MBUX is using Cerence JustTalk.
Akida was only in the EQXX and was a "Hey Mercedes" wake function.
Some people here theorised that the "always on" function of Akida 1.5 could be used for such a thing, but I think that's put to bed with this statement:
an innovation that no other voice system offers in this form

Also, regarding the trusted-by section of the website: it could just mean those who found some solution with Akida, i.e. the "Hey Mercedes" function.
BTW, I'm not suggesting we aren't still working with Mercedes, just that it's not on the always-on speech recognition.
We have tonnes of NDAs and Sean has confirmed we are still working with all EAPs.
Also, some criticised Akida for its narrow use cases; I can't think of anything more narrow than Cerence's solution for voice recognition.
Akida has heaps more to offer, and is better suited for pure edge devices.
 
Last edited:
  • Like
Reactions: 7 users

Diogenese

Top 20
Robsmark, it is a genuine question. You are correct that Mercedes are not announced as a customer. However, they are in the trusted-by section of our website, which implies using Akida for future opportunities. They have openly stated that using Akida for what I understand to be this system reduces power consumption by 5x, or words to that effect.
My question is: are Cerence using Akida in their offering, or is there a pivot, or am I confusing apples and oranges?

Perhaps it is my lack of understanding that is leading to my confusion. The tweet and information at least gave me cause for pause. I honestly need help to understand if there is anything here or not.
And given all the continued chat about a Mercedes announcement by the top dogs in the company, should I really be considering that as nothing?!
Hi Dr E,

I too share your confusion about Cerence.

However, there are some points which mitigate my concerns somewhat.

A few weeks ago, Mercedes started talking about MBUX not needing the "Hey Mercedes" wake up when there was only one person in the car, the corollary being that they still need it when there are two or more people.

Another thing is I've looked at Cerence patents, and while they discuss the use of NNs, they do not describe or claim any NN circuitry.

As you say, Mercedes found Akida to be 5 to 10 times better than other systems for "Hey Mercedes". They also used "Hey Mercedes" as an example of what Akida could do, and appeared to make reference to plural uses of Akida.

On top of that, Mercedes also stated their desire to standardize on the chips they use. Akida is sensor agnostic.

Then there's Valeo Scala 3 lidar due out shortly, which I think may contain Akida, leaving aside Luminar with their foveated lidar, who have stated that they expect to expand their cooperation with Mercedes from mid-decade. MB used Scala 2 to obtain Level 3 ADAS certification (sub-60 kph), while Scala 3 is rated to 160 kph.

Luminar, like Cerence, talk about using AI, but do not describe its construction.

Standardizing on Akida would improve the efficiency of the MB design office as their engineers would all be singing from the same hymn sheet in close harmony.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 59 users

Diogenese

Top 20
Hi Dr E,

I too share your confusion about Cerence.

However, there are some points which mitigate my concerns somewhat.

A few weeks ago, Mercedes started talking about MBUX not needing the "Hey Mercedes" wake up when there was only one person in the car, the corollary being that they still need it when there are two or more people.

Another thing is I've looked at Cerence patents, and while they discuss the use of NNs, they do not describe or claim any NN circuitry.

As you say, Mercedes found Akida to be 5 to 10 times better than other systems for "Hey Mercedes". They also used "Hey Mercedes" as an example of what Akida could do, and appeared to make reference to plural uses of Akida.

On top of that, Mercedes also stated their desire to standardize on the chips they use. Akida is sensor agnostic.

Then there's Valeo Scala 3 lidar due out shortly, which I think may contain Akida, leaving aside Luminar with their foveated lidar, who have stated that they expect to expand their cooperation with Mercedes from mid-decade. MB used Scala 2 to obtain Level 3 ADAS certification (sub-60 kph), while Scala 3 is rated to 160 kph.

Luminar, like Cerence, talk about using AI, but do not describe its construction.

Standardizing on Akida would improve the efficiency of the MB design office as their engineers would all be singing from the same hymn sheet in close harmony.
Factors which can be bolstered by having different design engineers working on the same type of NN implementation include the configuration of the NPU/node connections and layer configurations, and the compilation and augmentation of the model libraries used for comparison in the inference/classification of sensor signals.
 
  • Like
  • Fire
  • Thinking
Reactions: 13 users
Factors which can be bolstered by having different design engineers working on the same type of NN implementation include the configuration of the NPU/node connections and layer configurations, and the compilation and augmentation of the model libraries used for comparison in the inference/classification of sensor signals.
Not sure of the date on this article, but it suggests DNN both embedded and in the cloud?


CERENCE INTRODUCES NEW FEATURES IN CERENCE DRIVE, THE WORLD’S LEADING TECHNOLOGY AND SOLUTIONS PORTFOLIO FOR AUTOMAKERS AND CONNECTED CARS​

New capabilities such as enhanced voice recognition and synthetic speech serve as the foundation for a safer, more enjoyable journey for everyone
BURLINGTON, Mass. – Cerence Inc., AI for a world in motion, today introduced new innovations in Cerence Drive, its technology and solutions portfolio for automakers and IoT providers to build high-quality, intelligent voice assistant experiences and speech-enabled applications. Cerence Drive today powers AI-based, voice-enabled assistants in approximately 300 million cars from nearly every major automaker in the world, including Audi, BMW, Daimler, Ford, Geely, GM, SAIC, Toyota, and many more.
The Cerence Drive portfolio offers a distinct, hybrid approach with both on-board and cloud-based technologies that include voice recognition, natural language understanding (NLU), text-to-speech (TTS), speech signal enhancement (SSE), and more. These technologies can be deployed and tightly integrated with the wide variety of systems, sensors and interfaces found in today’s connected cars. The latest version of Cerence Drive includes a variety of new features to elevate the in-car experience:
> Enhanced, active voice recognition and assistant activation that goes beyond the standard push-to-talk buttons and wake-up words. The voice assistant is always listening for a relevant utterance, question or command, much like a personal assistant would, creating a more natural experience. In addition, Cerence’s voice recognition can run throughout the car, both embedded and in the cloud, distributing the technical load and delivering a faster user experience for drivers.
> New, deep neural net (DNN)-based NLU engine built on one central technology stack with 23 languages available both embedded and in the cloud. This streamlined approach creates new standards for scalability and flexibility between embedded and cloud applications and domains for simpler integration, faster innovation, and a more seamless in-car experience, regardless of connectivity.
> TTS and synthetic voice advancements that deliver new customizations, including a non-gender-specific voice for the voice assistant, and emotional output, which enables automakers to adjust an assistant’s speaking style based on the information delivered or tailored to a specific situation. In addition, the introduction of deep learning delivers a more natural and human-like voice with an affordable computational footprint.
> Improved, more intelligent speech signal enhancement that includes multi-zone processing with quick and simple speaker identification; passenger interference cancelation that blocks out background noise as well as voices from others in the car; and a deep neural net-based approach for greater noise suppression and better communication.
“Improving the experience for drivers and creating curated technology that feels unique and harmonious with our partners’ brands have been true motivators since we started our new journey as Cerence, and that extends to our latest innovations in Cerence Drive,” said Sanjay Dhawan, CEO, Cerence. “Cerence Drive, our flagship offering, is the driving force behind our promise of a truly moving in-car experience for our customers and their drivers, and our new innovations announced today are core to making that mission a reality. ”
Cerence Drive’s newest features are available now for automakers worldwide. To learn more about Cerence Drive, visit www.cerence.com/solutions.
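The hybrid embedded/cloud split described in the press release above can be pictured as a simple confidence-gated dispatcher: run the on-board recognizer first, and escalate to the cloud only when the local confidence is low and the car is connected. A minimal sketch, assuming that design; every class and function name here is invented for illustration and has nothing to do with Cerence's actual software.

```python
# Hypothetical sketch of a hybrid embedded/cloud recognizer dispatch.
# All names are invented; this does not reflect any real Cerence API.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Result:
    text: str
    confidence: float  # 0.0 - 1.0
    source: str        # "embedded" or "cloud"


def recognize_embedded(audio: bytes) -> Result:
    # Stand-in for the on-board recognizer: fast, works offline,
    # but a smaller model, so lower confidence on harder utterances.
    return Result(text="navigate home", confidence=0.62, source="embedded")


def recognize_cloud(audio: bytes) -> Optional[Result]:
    # Stand-in for the cloud recognizer: larger model, needs connectivity.
    return Result(text="navigate to the cafe", confidence=0.91, source="cloud")


def recognize(audio: bytes, connected: bool, threshold: float = 0.8) -> Result:
    """Run the embedded recognizer first; escalate to the cloud only when
    the embedded confidence is below threshold and the car is online."""
    local = recognize_embedded(audio)
    if local.confidence >= threshold or not connected:
        return local
    remote = recognize_cloud(audio)
    return remote if remote and remote.confidence > local.confidence else local
```

The point of the gate is the one the article makes: the technical load is distributed, and the driver gets an answer even with no connectivity.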

Also a 2022 pdf spiel on their overall solutions package.

HERE

Guess we have to remember we also have a patent, granted in 2018, on a neuromorphic application via PVDM.


US-10157629-B2 - Low Power Neuromorphic Voice Activation System and Method​


Abstract
The present invention provides a system and method for controlling a device by recognizing voice commands through a spiking neural network. The system comprises a spiking neural adaptive processor receiving an input stream that is being forwarded from a microphone, a decimation filter and then an artificial cochlea. The spiking neural adaptive processor further comprises a first spiking neural network and a second spiking neural network. The first spiking neural network checks for voice activities in output spikes received from artificial cochlea. If any voice activity is detected, it activates the second spiking neural network and passes the output spike of the artificial cochlea to the second spiking neural network that is further configured to recognize spike patterns indicative of specific voice commands. If the first spiking neural network does not detect any voice activity, it halts the second spiking neural network.
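The gating the abstract describes (a cheap, always-on first network that wakes a heavier second network only when voice activity is detected) can be sketched in plain Python. The spiking networks are stood in for by trivial functions, and the spike windows, threshold, and command table are all invented for illustration; this is the control flow, not the patent's circuitry.

```python
# Toy sketch of the two-stage gate from the patent abstract: a first-stage
# voice-activity check that only activates the second-stage command
# recognizer when speech is present. All values here are invented.

from typing import List, Optional


def voice_activity(spikes: List[int], min_spikes: int = 3) -> bool:
    # First-stage stand-in: treat enough spike events in the window
    # as evidence of voice activity.
    return sum(spikes) >= min_spikes


# Second-stage stand-in: a lookup of spike patterns to commands.
COMMANDS = {
    (1, 0, 1, 1): "lights_on",
    (0, 1, 1, 0): "lights_off",
}


def recognize_command(spikes: List[int]) -> Optional[str]:
    return COMMANDS.get(tuple(spikes))


def process_window(spikes: List[int]) -> Optional[str]:
    """Run the second network only when the first detects activity,
    mirroring the power-saving gate in the abstract."""
    if not voice_activity(spikes):
        return None  # second network stays halted, saving power
    return recognize_command(spikes)
```

The power saving comes from the asymmetry: the always-on stage is tiny, and the expensive stage runs only on the small fraction of windows that contain speech.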
 
  • Like
  • Love
  • Thinking
Reactions: 17 users

Diogenese

Top 20
Hi Dr E,

I too share your confusion about Cerence.

However, there are some points which mitigate my concerns somewhat.

A few weeks ago, Mercedes started talking about MBUX not needing the "Hey Mercedes" wake up when there was only one person in the car, the corollary being that they still need it when there are two or more people.

Another thing is I've looked at Cerence patents, and while they discuss the use of NNs, they do not describe or claim any NN circuitry.

As you say, Mercedes found Akida to be 5 to 10 times better than other systems for "Hey Mercedes". They also used "Hey Mercedes" as an example of what Akida could do, and appeared to make reference to plural uses of Akida.

On top of that, Mercedes also stated their desire to standardize on the chips they use. Akida is sensor agnostic.

Then there's Valeo Scala 3 lidar due out shortly, which I think may contain Akida, leaving aside Luminar with their foveated lidar, who have stated that they expect to expand their cooperation with Mercedes from mid-decade. MB used Scala 2 to obtain Level 3 ADAS certification (sub-60 kph), while Scala 3 is rated to 160 kph.

Luminar, like Cerence, talk about using AI, but do not describe its construction.

Standardizing on Akida would improve the efficiency of the MB design office as their engineers would all be singing from the same hymn sheet in close harmony.

This patent application shows an acoustic classifier 152, a function which could be performed by Akida.

The combined classifier may be used with the 3 classifier inputs for the context discrimination to decide if the speaker is talking to the car or in conversation.

US2022343906A1 FLEXIBLE-FORMAT VOICE COMMAND



[0042] As introduced above, the reasoner 150 processes both text output 115 and the audio signal 105 . The audio signal 105 is processed by an acoustic classifier 152 . In some implementations, this classifier is a machine learning classifier that is configured with data (i.e., from configuration data 160 ) that was trained on examples of system-directed and of non-system directed utterance by an offline training system 180 . In some examples, the machine-learning component of the acoustic classifier 152 receives a fixed-length representation of the utterance (or at least the part of the utterance received to that point) and outputs a score (e.g., probability, log likelihood, etc.) that represents a confidence that the utterance is a command. For example, the machine-learning component can be a deep neural network. Note that such processing does not in general depend on any particular words in the input, and may instead be based on features such as duration, amplitude, or pitch variation (e.g., rising or falling pitch). In some implementations, the machine-learning component processes a sequence, for example, processing a sequence of signal processing features (e.g., corresponding to fixed-length frames) that represent time-local characteristics of the signal, such as amplitude, spectral, and/or pitch, and the machine-learning component processes the sequence to provide the output score. For example, the machine learning component can implement a convolutional or recurrent neural network.



[0051] In situations in which the reasoner 150 determines that an utterance is a system-directed command directed to a particular assistant, it sends a reasoner output 155 to one of the assistants 140 A-Z with which the system 100 is configured. As an example, assistant 140 A includes a natural language understanding (NLU) 120 , whose output representing the meaning or intent of the command is passed to a command processor 130 , which acts on the determined meaning or intent.



[0052] Various technical approaches may be used in the NLU component, including deterministic or probabilistic parsing according to a grammar provided from the configuration data 160, or machine-learning based mapping of the text output 115 to a representation of meaning, for example, using neural networks configured to classify the text output and/or identify particular words as providing variable values (e.g., “slot” values) for identified commands. The NLU component 120 may provide an indication of a general class of commands (e.g., a “skill”) or a specific command (e.g., an “intent”), as well as values of variables associated with the command. The configuration of the assistant 140 A may use configuration data that is determined using a training procedure and stored with other configuration data in the configuration data storage 160.


I'm guessing that Akida 2 could be used in NLU 120.
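The acoustic classifier in paragraph [0042] scores an utterance as system-directed from prosodic features (duration, amplitude, pitch variation) rather than the words spoken. A toy version of that idea, with hand-picked weights standing in for the trained deep network the patent describes; the feature set and every number below are invented for illustration.

```python
# Toy "is this utterance directed at the system?" scorer, loosely modelled
# on paragraph [0042]: score from prosodic features, not from the words.
# The weights are hand-picked for illustration, not trained values.

import math
from typing import Dict, List


def directedness_score(frames: List[Dict[str, float]]) -> float:
    """Return a 0-1 confidence that the utterance is a command,
    from per-frame amplitude and pitch features."""
    duration = len(frames)
    mean_amp = sum(f["amplitude"] for f in frames) / duration
    # Falling pitch across the utterance is treated as command-like.
    pitch_drop = frames[0]["pitch"] - frames[-1]["pitch"]
    # Linear combination of features, then a logistic squash to (0, 1).
    z = 0.8 * mean_amp + 0.05 * pitch_drop - 0.1 * duration + 1.0
    return 1.0 / (1.0 + math.exp(-z))


# A short, loud utterance with falling pitch scores as system-directed.
frames = [
    {"amplitude": 0.9, "pitch": 220.0},
    {"amplitude": 0.8, "pitch": 200.0},
    {"amplitude": 0.7, "pitch": 180.0},
]
score = directedness_score(frames)
```

A real implementation would, as the paragraph says, learn this mapping with a convolutional or recurrent network over frame sequences; the sketch just shows why no wake word is needed, since nothing here looks at the words at all.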
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 25 users

robsmark

Regular
Check the About/Investor relations area of the BRN website.
BRN list 7 reasons under Why Invest.
One of them is:
"Marquee brands include Mercedes, Valeo, Vorago, and NASA, and commercial IP licenses with Renesas and MegaChips.
Commercial availability of semiconductor chips, IP, tools, and boards."
They would not be including these companies on their website 'in writing' if there was no connection.
They are all subject to NDAs, and from engagement to revenue is a fair time.
Patience is the key.
My bold.
I reiterate: Mercedes is not a customer. They have not signed and have not committed. It’s that simple.
 
  • Like
  • Fire
  • Love
Reactions: 13 users

gwinny66

Emerged
This patent application shows an acoustic classifier 152, a function which could be performed by Akida.

The combined classifier may be used with the 3 classifier inputs for the context discrimination to decide if the speaker is talking to the car or in conversation.

US2022343906A1 FLEXIBLE-FORMAT VOICE COMMAND


[0042] As introduced above, the reasoner 150 processes both text output 115 and the audio signal 105 . The audio signal 105 is processed by an acoustic classifier 152 . In some implementations, this classifier is a machine learning classifier that is configured with data (i.e., from configuration data 160 ) that was trained on examples of system-directed and of non-system directed utterance by an offline training system 180 . In some examples, the machine-learning component of the acoustic classifier 152 receives a fixed-length representation of the utterance (or at least the part of the utterance received to that point) and outputs a score (e.g., probability, log likelihood, etc.) that represents a confidence that the utterance is a command. For example, the machine-learning component can be a deep neural network. Note that such processing does not in general depend on any particular words in the input, and may instead be based on features such as duration, amplitude, or pitch variation (e.g., rising or falling pitch). In some implementations, the machine-learning component processes a sequence, for example, processing a sequence of signal processing features (e.g., corresponding to fixed-length frames) that represent time-local characteristics of the signal, such as amplitude, spectral, and/or pitch, and the machine-learning component processes the sequence to provide the output score. For example, the machine learning component can implement a convolutional or recurrent neural network.



[0051] In situations in which the reasoner 150 determines that an utterance is a system-directed command directed to a particular assistant, it sends a reasoner output 155 to one of the assistants 140 A-Z with which the system 100 is configured. As an example, assistant 140 A includes a natural language understanding (NLU) 120 , whose output representing the meaning or intent of the command is passed to a command processor 130 , which acts on the determined meaning or intent.



[0052] Various technical approaches may be used in the NLU component, including deterministic or probabilistic parsing according to a grammar provided from the configuration data 160, or machine-learning based mapping of the text output 115 to a representation of meaning, for example, using neural networks configured to classify the text output and/or identify particular words as providing variable values (e.g., “slot” values) for identified commands. The NLU component 120 may provide an indication of a general class of commands (e.g., a “skill”) or a specific command (e.g., an “intent”), as well as values of variables associated with the command. The configuration of the assistant 140 A may use configuration data that is determined using a training procedure and stored with other configuration data in the configuration data storage 160.


I'm guessing that Akida 2 could be used in NLU 120.
Isn't Cerence a software company, not a hardware IP provider? I believe they are part of the "Hey Mercedes" voice activation system as well.
 