BRN Discussion Ongoing

HopalongPetrovski

I'm Spartacus!
Sure, let them play golf. That'll really get the shareholders going.
Either that or he can tell us how far off the new chip is going to be. Why the secrecy when silence creates malaise?
Ahhhhh, I see, you're looking for an argument.
That room is three doors on the left down the hall. šŸ¤£
 
  • Haha
  • Like
  • Love
Reactions: 22 users

robsmark

Regular
Robsmark, it is a genuine question. You are correct that Mercedes are not announced as a customer. However, they are in the "trusted by" section of our website, which implies use of Akida for future opportunities. They have openly stated that the use of Akida for what I understand to be this system reduces power consumption by 5x, or words to that effect.
My question is: are Cerence using Akida in their offering, or is there a pivot, or am I confusing apples and oranges?

Perhaps it is my lack of understanding that is leading to my confusion. The tweet and information at least gave me cause for pause. I honestly need help to understand if there is anything here or not.
And given all the continued chat about a Mercedes announcement by the top dogs in the company, should I really be considering that as nothing?!
I stand by my comment: until we have a signed deal, itā€™s nothing. Chat, partnerships, discussions, articles, and tweets mean diddly squat unless they sign, start using Akida, and revenue starts flowing. Until then they have zero commitment towards Brainchip. Thatā€™s just the way it is.
 
  • Like
  • Love
  • Thinking
Reactions: 30 users

manny100

Regular
Iā€™m not really sure what youā€™re hoping for in a response hereā€¦ one poster could state that it wouldnā€™t impact anything, and another could state that it would impact everything, but the reality is that nobody has a clue!

The company hasnā€™t even announced Mercedes as a customer yet and people think itā€™s a sure thing. Until we receive a commercial deal announcement, it should be considered as nothing.
Check the About/Investor relations area of the BRN website.
BRN list 7 reasons under Why Invest.
One of them is:
"Marquee brands include Mercedes, Valeo, Vorago, and NASA, and commercial IP licenses with Renesas and MegaChips.
Commercial availability of semiconductor chips, IP, tools, and boards."
They would not be including these companies on their website 'in writing' if there was no connection.
They are all subject to NDAs, and going from engagement to revenue takes a fair amount of time.
Patience is the key.
My bold.
 
  • Like
  • Love
  • Fire
Reactions: 38 users

Damo4

Regular
Robsmark, it is a genuine question. You are correct that Mercedes are not announced as a customer. However, they are in the "trusted by" section of our website, which implies use of Akida for future opportunities. They have openly stated that the use of Akida for what I understand to be this system reduces power consumption by 5x, or words to that effect.
My question is: are Cerence using Akida in their offering, or is there a pivot, or am I confusing apples and oranges?

Perhaps it is my lack of understanding that is leading to my confusion. The tweet and information at least gave me cause for pause. I honestly need help to understand if there is anything here or not.
And given all the continued chat about a Mercedes announcement by the top dogs in the company, should I really be considering that as nothing?!

Push-to-talk buttons and wake-up words have long been the standard ways that users begin their conversations with their virtual assistants, both in the car and in the home. Cerence JustTalk revolutionizes interactions with in-car assistants, leveraging AI-powered technology that has the intelligence to detect when drivers are talking to the assistant ā€“ and to keep quiet when theyā€™re not. With Cerence JustTalk, drivers can simply begin speaking when they have a question or command for the assistant. They donā€™t need to push a button or say ā€œHey;ā€ the system knows when itā€™s being summoned based on the words uttered, how the driver spoke them, and the context of the conversation. Cerence JustTalkā€™s intelligent system capabilities also make use of additional sensor data and contextual information that enables it to avoid awakening the assistant during a conversation amongst the carā€™s inhabitants.

ā€œBeing the architects of MB.OS as well as the MBUX voice assistant, we are able to utilize innovative technology such as JustTalk to bring the intuitiveness of using voice as first interaction modality to the next level,ā€ said Andreas Biehl, Senior Manager Voice Assistant, Mercedes-Benz AG.

With numerous patents in the field of voice assistant activation, Cerence is uniquely positioned to deliver this innovation. Building on its core competencies in conversational AI and its more than two decades of experience in automotive and mobility user interfaces, Cerence is uniquely positioned to create this intelligent and natural way of activating the in-car assistant ā€“ an innovation that no other voice system offers in this form



It's pretty clear, isn't it? The current MBUX is using Cerence JustTalk.
Akida was only in the EQXX and was used for a "Hey Mercedes" wake function.
Some people here theorised that the "always on" function of Akida 1.5 could be used for such a thing, but I think that's put to bed with this statement:
an innovation that no other voice system offers in this form

Also, regarding the "trusted by" section of the website, it could just mean those who found some solution with Akida: the "Hey Mercedes" function
 
  • Like
  • Thinking
Reactions: 11 users

Damo4

Regular
Push-to-talk buttons and wake-up words have long been the standard ways that users begin their conversations with their virtual assistants, both in the car and in the home. Cerence JustTalk revolutionizes interactions with in-car assistants, leveraging AI-powered technology that has the intelligence to detect when drivers are talking to the assistant ā€“ and to keep quiet when theyā€™re not. With Cerence JustTalk, drivers can simply begin speaking when they have a question or command for the assistant. They donā€™t need to push a button or say ā€œHey;ā€ the system knows when itā€™s being summoned based on the words uttered, how the driver spoke them, and the context of the conversation. Cerence JustTalkā€™s intelligent system capabilities also make use of additional sensor data and contextual information that enables it to avoid awakening the assistant during a conversation amongst the carā€™s inhabitants.

ā€œBeing the architects of MB.OS as well as the MBUX voice assistant, we are able to utilize innovative technology such as JustTalk to bring the intuitiveness of using voice as first interaction modality to the next level,ā€ said Andreas Biehl, Senior Manager Voice Assistant, Mercedes-Benz AG.

With numerous patents in the field of voice assistant activation, Cerence is uniquely positioned to deliver this innovation. Building on its core competencies in conversational AI and its more than two decades of experience in automotive and mobility user interfaces, Cerence is uniquely positioned to create this intelligent and natural way of activating the in-car assistant ā€“ an innovation that no other voice system offers in this form



It's pretty clear, isn't it? The current MBUX is using Cerence JustTalk.
Akida was only in the EQXX and was used for a "Hey Mercedes" wake function.
Some people here theorised that the "always on" function of Akida 1.5 could be used for such a thing, but I think that's put to bed with this statement:
an innovation that no other voice system offers in this form

Also, regarding the "trusted by" section of the website, it could just mean those who found some solution with Akida: the "Hey Mercedes" function
BTW, I'm not suggesting we aren't still working with Mercedes, just that it's not on the always-on speech recognition.
We have tonnes of NDAs, and Sean has confirmed we are still working with all EAPs.
Also, some criticised Akida for its narrow use cases; I can't think of anything more narrow than Cerence's solution for voice recognition.
Akida has heaps more to offer, and is better suited for pure edge devices.
 
Last edited:
  • Like
Reactions: 7 users

Diogenese

Top 20
Robsmark, it is a genuine question. You are correct that Mercedes are not announced as a customer. However, they are in the "trusted by" section of our website, which implies use of Akida for future opportunities. They have openly stated that the use of Akida for what I understand to be this system reduces power consumption by 5x, or words to that effect.
My question is: are Cerence using Akida in their offering, or is there a pivot, or am I confusing apples and oranges?

Perhaps it is my lack of understanding that is leading to my confusion. The tweet and information at least gave me cause for pause. I honestly need help to understand if there is anything here or not.
And given all the continued chat about a Mercedes announcement by the top dogs in the company, should I really be considering that as nothing?!
Hi Dr E,

I too share your confusion about Cerence.

However, there are some points which mitigate my concerns somewhat.

A few weeks ago, Mercedes started talking about MBUX not needing the "Hey Mercedes" wake up when there was only one person in the car, the corollary being that they still need it when there are two or more people.

Another thing is I've looked at Cerence patents, and while they discuss the use of NNs, they do not describe or claim any NN circuitry.

As you say, Mercedes found Akida to be 5 to 10 times better than other systems for "Hey Mercedes". They also used "Hey Mercedes" as an example of what Akida could do, and appeared to make reference to plural uses of Akida.

On top of that, Mercedes also stated their desire to standardize on the chips they use. Akida is sensor agnostic.

Then there's the Valeo Scala 3 lidar due out shortly, which I think may contain Akida, leaving aside Luminar with their foveated lidar, who have stated that they expect to expand their cooperation with Mercedes from mid-decade. MB used Scala 2 to obtain Level 3 ADAS certification (sub-60 kph), while Scala 3 is rated to 160 kph.

Luminar, like Cerence, talk about using AI, but do not describe its construction.

Standardizing on Akida would improve the efficiency of the MB design office as their engineers would all be signing off the same hymn sheet in close harmony.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 59 users

Diogenese

Top 20
Hi Dr E,

I too share your confusion about Cerence.

However, there are some points which mitigate my concerns somewhat.

A few weeks ago, Mercedes started talking about MBUX not needing the "Hey Mercedes" wake up when there was only one person in the car, the corollary being that they still need it when there are two or more people.

Another thing is I've looked at Cerence patents, and while they discuss the use of NNs, they do not describe or claim any NN circuitry.

As you say, Mercedes found Akida to be 5 to 10 times better than other systems for "Hey Mercedes". They also used "Hey Mercedes" as an example of what Akida could do, and appeared to make reference to plural uses of Akida.

On top of that, Mercedes also stated their desire to standardize on the chips they use. Akida is sensor agnostic.

Then there's the Valeo Scala 3 lidar due out shortly, which I think may contain Akida, leaving aside Luminar with their foveated lidar, who have stated that they expect to expand their cooperation with Mercedes from mid-decade. MB used Scala 2 to obtain Level 3 AD certification (sub-60 kph), while Scala 3 is rated to 160 kph.

Luminar, like Cerence, talk about using AI, but do not describe its construction.

Standardizing on Akida would improve the efficiency of the MB design office as their engineers would all be signing off the same hymn sheet in close harmony.
Factors which can be bolstered by having different design engineers working on the same type of NN implementation include the configuration of the NPU/node connexions and layer configurations, and the compilation and augmentation of the model libraries used for comparison in the inference/classification of sensor signals.
 
  • Like
  • Fire
  • Thinking
Reactions: 13 users
Factors which can be bolstered by having different design engineers working on the same type of NN implementation include the configuration of the NPU/node connexions and layer configurations, and the compilation and augmentation of the model libraries used for comparison in the inference/classification of sensor signals.
Not sure of the date on this article, but it suggests DNN both embedded and in the cloud?


CERENCE INTRODUCES NEW FEATURES IN CERENCE DRIVE, THE WORLDā€™S LEADING TECHNOLOGY AND SOLUTIONS PORTFOLIO FOR AUTOMAKERS AND CONNECTED CARS​

New capabilities such as enhanced voice recognition and synthetic speech serve as the foundation for a safer, more enjoyable journey for everyone
BURLINGTON, Mass. ā€“ Cerence Inc., AI for a world in motion, today introduced new innovations in Cerence Drive, its technology and solutions portfolio for automakers and IoT providers to build high-quality, intelligent voice assistant experiences and speech-enabled applications. Cerence Drive today powers AI-based, voice-enabled assistants in approximately 300 million cars from nearly every major automaker in the world, including Audi, BMW, Daimler, Ford, Geely, GM, SAIC, Toyota, and many more.
The Cerence Drive portfolio offers a distinct, hybrid approach with both on-board and cloud-based technologies that include voice recognition, natural language understanding (NLU), text-to-speech (TTS), speech signal enhancement (SSE), and more. These technologies can be deployed and tightly integrated with the wide variety of systems, sensors and interfaces found in todayā€™s connected cars. The latest version of Cerence Drive includes a variety of new features to elevate the in-car experience:
> Enhanced, active voice recognition and assistant activation that goes beyond the standard push-to-talk buttons and wake-up words. The voice assistant is always listening for a relevant utterance, question or command, much like a personal assistant would, creating a more natural experience. In addition, Cerenceā€™s voice recognition can run throughout the car, both embedded and in the cloud, distributing the technical load and delivering a faster user experience for drivers.
> New, deep neural net (DNN)-based NLU engine built on one central technology stack with 23 languages available both embedded and in the cloud. This streamlined approach creates new standards for scalability and flexibility between embedded and cloud applications and domains for simpler integration, faster innovation, and a more seamless in-car experience, regardless of connectivity.
> TTS and synthetic voice advancements that deliver new customizations, including a non-gender-specific voice for the voice assistant, and emotional output, which enables automakers to adjust an assistantā€™s speaking style based on the information delivered or tailored to a specific situation. In addition, the introduction of deep learning delivers a more natural and human-like voice with an affordable computational footprint.
> Improved, more intelligent speech signal enhancement that includes multi-zone processing with quick and simple speaker identification; passenger interference cancelation that blocks out background noise as well as voices from others in the car; and a deep neural net-based approach for greater noise suppression and better communication.
ā€œImproving the experience for drivers and creating curated technology that feels unique and harmonious with our partnersā€™ brands have been true motivators since we started our new journey as Cerence, and that extends to our latest innovations in Cerence Drive,ā€ said Sanjay Dhawan, CEO, Cerence. ā€œCerence Drive, our flagship offering, is the driving force behind our promise of a truly moving in-car experience for our customers and their drivers, and our new innovations announced today are core to making that mission a reality. ā€
Cerence Driveā€™s newest features are available now for automakers worldwide. To learn more about Cerence Drive, visit www.cerence.com/solutions.

Also a 2022 pdf spiel on their overall solutions package.

HERE

Guess we have to remember we also have a patent granted in 2018 on a neuromorphic application via PVDM.


US-10157629-B2 - Low Power Neuromorphic Voice Activation System and Method​


Abstract
The present invention provides a system and method for controlling a device by recognizing voice commands through a spiking neural network. The system comprises a spiking neural adaptive processor receiving an input stream that is being forwarded from a microphone, a decimation filter and then an artificial cochlea. The spiking neural adaptive processor further comprises a first spiking neural network and a second spiking neural network. The first spiking neural network checks for voice activities in output spikes received from artificial cochlea. If any voice activity is detected, it activates the second spiking neural network and passes the output spike of the artificial cochlea to the second spiking neural network that is further configured to recognize spike patterns indicative of specific voice commands. If the first spiking neural network does not detect any voice activity, it halts the second spiking neural network.
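
To make the two-stage idea in that abstract a bit more concrete, here's a rough sketch in Python (my own illustration, not BrainChip's implementation; the energy threshold, frame handling and command logic are all invented) of how a cheap always-on detector gates a bigger recogniser so it only burns power when there's actually voice to process:

```python
# Illustrative sketch of the two-stage gating described in US-10157629-B2.
# Names and thresholds are hypothetical; the real system uses spiking
# neural networks fed by a decimation filter and an artificial cochlea.

import numpy as np

def voice_activity_detector(frame: np.ndarray, threshold: float = 0.01) -> bool:
    """Stage 1: cheap, always-on check for voice-like energy in a frame."""
    return float(np.mean(frame ** 2)) > threshold  # crude energy proxy for spike activity

def command_recogniser(frame: np.ndarray) -> str:
    """Stage 2: larger model, only run when stage 1 fires (placeholder logic)."""
    return "hey_mercedes" if float(frame.max()) > 0.5 else "unknown"

def process_stream(frames):
    """Gate the expensive recogniser behind the lightweight detector."""
    for frame in frames:
        if not voice_activity_detector(frame):
            continue  # stage 2 stays halted -> low average power
        yield command_recogniser(frame)

# Example: 10 random audio frames, mostly near-silence
rng = np.random.default_rng(0)
frames = [rng.normal(0, 0.005 if i % 3 else 0.3, 400) for i in range(10)]
print(list(process_stream(frames)))
```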
 
  • Like
  • Love
  • Thinking
Reactions: 17 users

Diogenese

Top 20
Hi Dr E,

I too share your confusion about Cerence.

However, there are some points which mitigate my concerns somewhat.

A few weeks ago, Mercedes started talking about MBUX not needing the "Hey Mercedes" wake up when there was only one person in the car, the corollary being that they still need it when there are two or more people.

Another thing is I've looked at Cerence patents, and while they discuss the use of NNs, they do not describe or claim any NN circuitry.

As you say, Mercedes found Akida to be 5 to 10 times better than other systems for "Hey Mercedes". They also used "Hey Mercedes" as an example of what Akida could do, and appeared to make reference to plural uses of Akida.

On top of that, Mercedes also stated their desire to standardize on the chips they use. Akida is sensor agnostic.

Then there's the Valeo Scala 3 lidar due out shortly, which I think may contain Akida, leaving aside Luminar with their foveated lidar, who have stated that they expect to expand their cooperation with Mercedes from mid-decade. MB used Scala 2 to obtain Level 3 ADAS certification (sub-60 kph), while Scala 3 is rated to 160 kph.

Luminar, like Cerence, talk about using AI, but do not describe its construction.

Standardizing on Akida would improve the efficiency of the MB design office as their engineers would all be signing off the same hymn sheet in close harmony.

This patent application shows an acoustic classifier 152, a function which could be performed by Akida.

The combined classifier may be used with the three classifier inputs for context discrimination, to decide whether the speaker is talking to the car or is in conversation.

US2022343906A1 FLEXIBLE-FORMAT VOICE COMMAND

[Patent Fig. 1 attachment]


[0042] As introduced above, the reasoner 150 processes both text output 115 and the audio signal 105 . The audio signal 105 is processed by an acoustic classifier 152 . In some implementations, this classifier is a machine learning classifier that is configured with data (i.e., from configuration data 160 ) that was trained on examples of system-directed and of non-system directed utterance by an offline training system 180 . In some examples, the machine-learning component of the acoustic classifier 152 receives a fixed-length representation of the utterance (or at least the part of the utterance received to that point) and outputs a score (e.g., probability, log likelihood, etc.) that represents a confidence that the utterance is a command. For example, the machine-learning component can be a deep neural network. Note that such processing does not in general depend on any particular words in the input, and may instead be based on features such as duration, amplitude, or pitch variation (e.g., rising or falling pitch). In some implementations, the machine-learning component processes a sequence, for example, processing a sequence of signal processing features (e.g., corresponding to fixed-length frames) that represent time-local characteristics of the signal, such as amplitude, spectral, and/or pitch, and the machine-learning component processes the sequence to provide the output score. For example, the machine learning component can implement a convolutional or recurrent neural network.



[0051] In situations in which the reasoner 150 determines that an utterance is a system-directed command directed to a particular assistant, it sends a reasoner output 155 to one of the assistants 140 A-Z with which the system 100 is configured. As an example, assistant 140 A includes a natural language understanding (NLU) 120 , whose output representing the meaning or intent of the command is passed to a command processor 130 , which acts on the determined meaning or intent.



[0052] Various technical approaches may be used in the NLU component, including deterministic or probabilistic parsing according to a grammar provided from the configuration data 160 , of machine-learning based mapping of the text output 115 to a representation of meaning, for example, using neural networks configured to classify the text output and/or identify particular words as providing variable values (e.g., ā€œslotā€ values) for identified commands. The NLU component 120 may provide an indication of a general class of commands (e.g., a ā€œskillā€) or a specific command (e.g., and ā€œintentā€), as well as values of variables associated with the command. The configuration of the assistant 140 A may use configuration data that is determined using a training procedure and stored with other configuration data in the configuration data storage 160
.


I'm guessing that Akida 2 could be used in NLU 120.
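
For anyone who wants to see how the pieces in [0042] might fit together, here's a toy sketch (purely illustrative, in Python; the features, weights and threshold are invented, not from the patent) of a combined classifier fusing an acoustic score with simple text evidence to decide whether an utterance is system-directed:

```python
# Illustrative only: a combined classifier deciding "system-directed vs
# conversation", loosely following paragraph [0042] of US2022343906A1.
# All features, weights and the threshold below are made up for the example.

from dataclasses import dataclass

@dataclass
class Utterance:
    text: str              # ASR hypothesis (text output 115 in the patent)
    mean_pitch_rise: float  # example acoustic features: pitch variation,
    duration_s: float       # duration, amplitude, etc.

def acoustic_score(u: Utterance) -> float:
    """Stand-in for acoustic classifier 152: short, flat-pitched utterances
    tend to be commands in this toy model."""
    score = 0.5
    score += 0.3 if u.duration_s < 2.0 else -0.2
    score += 0.2 if u.mean_pitch_rise < 0.1 else -0.1
    return max(0.0, min(1.0, score))

def text_score(u: Utterance) -> float:
    """Stand-in for the language-model/NLU evidence on the transcribed text."""
    command_words = {"navigate", "call", "play", "temperature"}
    hits = sum(w in command_words for w in u.text.lower().split())
    return min(1.0, 0.3 + 0.35 * hits)

def is_system_directed(u: Utterance, threshold: float = 0.6) -> bool:
    """Combined classifier: weighted fusion of the two scores."""
    combined = 0.5 * acoustic_score(u) + 0.5 * text_score(u)
    return combined >= threshold

print(is_system_directed(Utterance("navigate to work", 0.05, 1.2)))    # True
print(is_system_directed(Utterance("I saw Bob yesterday", 0.4, 3.5)))  # False
```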
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 25 users

robsmark

Regular
Check the About/Investor relations area of the BRN website.
BRN list 7 reasons under Why Invest.
One of them is:
"Marquee brands include Mercedes, Valeo, Vorago, and NASA, and commercial IP licenses with Renesas and MegaChips.
Commercial availability of semiconductor chips, IP, tools, and boards."
They would not be including these companies on their website 'in writing' if there was no connection.
They are all subject to NDAs, and going from engagement to revenue takes a fair amount of time.
Patience is the key.
My bold.
Again, I reiterate, Mercedes is not a customer. They have not signed, and have not committed, and they are not currently a customer. Itā€™s that simple.
 
  • Like
  • Fire
  • Love
Reactions: 13 users

gwinny66

Emerged
This patent application shows an acoustic classifier 152, a function which could be performed by Akida.

The combined classifier may be used with the 3 classifier inputs for the context discrimination to decide if the speaker is talking to the car or in conversation.

US2022343906A1 FLEXIBLE-FORMAT VOICE COMMAND

[Patent Fig. 1 attachment]

[0042] As introduced above, the reasoner 150 processes both text output 115 and the audio signal 105 . The audio signal 105 is processed by an acoustic classifier 152 . In some implementations, this classifier is a machine learning classifier that is configured with data (i.e., from configuration data 160 ) that was trained on examples of system-directed and of non-system directed utterance by an offline training system 180 . In some examples, the machine-learning component of the acoustic classifier 152 receives a fixed-length representation of the utterance (or at least the part of the utterance received to that point) and outputs a score (e.g., probability, log likelihood, etc.) that represents a confidence that the utterance is a command. For example, the machine-learning component can be a deep neural network. Note that such processing does not in general depend on any particular words in the input, and may instead be based on features such as duration, amplitude, or pitch variation (e.g., rising or falling pitch). In some implementations, the machine-learning component processes a sequence, for example, processing a sequence of signal processing features (e.g., corresponding to fixed-length frames) that represent time-local characteristics of the signal, such as amplitude, spectral, and/or pitch, and the machine-learning component processes the sequence to provide the output score. For example, the machine learning component can implement a convolutional or recurrent neural network.



[0051] In situations in which the reasoner 150 determines that an utterance is a system-directed command directed to a particular assistant, it sends a reasoner output 155 to one of the assistants 140 A-Z with which the system 100 is configured. As an example, assistant 140 A includes a natural language understanding (NLU) 120 , whose output representing the meaning or intent of the command is passed to a command processor 130 , which acts on the determined meaning or intent.



[0052] Various technical approaches may be used in the NLU component, including deterministic or probabilistic parsing according to a grammar provided from the configuration data 160 , of machine-learning based mapping of the text output 115 to a representation of meaning, for example, using neural networks configured to classify the text output and/or identify particular words as providing variable values (e.g., ā€œslotā€ values) for identified commands. The NLU component 120 may provide an indication of a general class of commands (e.g., a ā€œskillā€) or a specific command (e.g., and ā€œintentā€), as well as values of variables associated with the command. The configuration of the assistant 140 A may use configuration data that is determined using a training procedure and stored with other configuration data in the configuration data storage 160
.


I'm guessing that Akida 2 could be used in NLU 120.
Isn't Cerence a software company, not a hardware IP provider? I believe they are part of the "Hey Mercedes" voice activation system as well?
 

Diogenese

Top 20
Not sure of the date on this article, but it suggests DNN both embedded and in the cloud?


CERENCE INTRODUCES NEW FEATURES IN CERENCE DRIVE, THE WORLDā€™S LEADING TECHNOLOGY AND SOLUTIONS PORTFOLIO FOR AUTOMAKERS AND CONNECTED CARS​

New capabilities such as enhanced voice recognition and synthetic speech serve as the foundation for a safer, more enjoyable journey for everyone
BURLINGTON, Mass. ā€“ Cerence Inc., AI for a world in motion, today introduced new innovations in Cerence Drive, its technology and solutions portfolio for automakers and IoT providers to build high-quality, intelligent voice assistant experiences and speech-enabled applications. Cerence Drive today powers AI-based, voice-enabled assistants in approximately 300 million cars from nearly every major automaker in the world, including Audi, BMW, Daimler, Ford, Geely, GM, SAIC, Toyota, and many more.
The Cerence Drive portfolio offers a distinct, hybrid approach with both on-board and cloud-based technologies that include voice recognition, natural language understanding (NLU), text-to-speech (TTS), speech signal enhancement (SSE), and more. These technologies can be deployed and tightly integrated with the wide variety of systems, sensors and interfaces found in todayā€™s connected cars. The latest version of Cerence Drive includes a variety of new features to elevate the in-car experience:
> Enhanced, active voice recognition and assistant activation that goes beyond the standard push-to-talk buttons and wake-up words. The voice assistant is always listening for a relevant utterance, question or command, much like a personal assistant would, creating a more natural experience. In addition, Cerenceā€™s voice recognition can run throughout the car, both embedded and in the cloud, distributing the technical load and delivering a faster user experience for drivers.
> New, deep neural net (DNN)-based NLU engine built on one central technology stack with 23 languages available both embedded and in the cloud. This streamlined approach creates new standards for scalability and flexibility between embedded and cloud applications and domains for simpler integration, faster innovation, and a more seamless in-car experience, regardless of connectivity.
> TTS and synthetic voice advancements that deliver new customizations, including a non-gender-specific voice for the voice assistant, and emotional output, which enables automakers to adjust an assistantā€™s speaking style based on the information delivered or tailored to a specific situation. In addition, the introduction of deep learning delivers a more natural and human-like voice with an affordable computational footprint.
> Improved, more intelligent speech signal enhancement that includes multi-zone processing with quick and simple speaker identification; passenger interference cancelation that blocks out background noise as well as voices from others in the car; and a deep neural net-based approach for greater noise suppression and better communication.
ā€œImproving the experience for drivers and creating curated technology that feels unique and harmonious with our partnersā€™ brands have been true motivators since we started our new journey as Cerence, and that extends to our latest innovations in Cerence Drive,ā€ said Sanjay Dhawan, CEO, Cerence. ā€œCerence Drive, our flagship offering, is the driving force behind our promise of a truly moving in-car experience for our customers and their drivers, and our new innovations announced today are core to making that mission a reality. ā€
Cerence Driveā€™s newest features are available now for automakers worldwide. To learn more about Cerence Drive, visit www.cerence.com/solutions.

Also a 2022 pdf spiel on their overall solutions package.

HERE

Guess we have to remember we also have a patent granted in 2018 on a neuromorphic application via PVDM.


US-10157629-B2 - Low Power Neuromorphic Voice Activation System and Method​


Abstract
The present invention provides a system and method for controlling a device by recognizing voice commands through a spiking neural network. The system comprises a spiking neural adaptive processor receiving an input stream that is being forwarded from a microphone, a decimation filter and then an artificial cochlea. The spiking neural adaptive processor further comprises a first spiking neural network and a second spiking neural network. The first spiking neural network checks for voice activities in output spikes received from artificial cochlea. If any voice activity is detected, it activates the second spiking neural network and passes the output spike of the artificial cochlea to the second spiking neural network that is further configured to recognize spike patterns indicative of specific voice commands. If the first spiking neural network does not detect any voice activity, it halts the second spiking neural network.
Hi Fmf,

These are the rest of the drawings from the Cerence patent above:

[Two further patent drawing attachments]
 
  • Like
  • Fire
  • Love
Reactions: 9 users

Damo4

Regular
Hi Fmf,

These are the rest of the drawings from the Cerence patent above:

[Two further patent drawing attachments]

Hi Dio,

Just reading through your posts on Cerence, thank you for all the info.

I have a question about the classification. Do you think it will be a prebuilt/trained model, or is it capable of learning without cloud intervention?
If it is capable of on-device learning, would the occupants of the car have to provide feedback regarding the accuracy of the responses?
 
  • Like
Reactions: 2 users
D

Deleted member 118

Guest
Again, I reiterate, Mercedes is not a customer. They have not signed, and have not committed, and they are not currently a customer. Itā€™s that simple.

 
  • Like
  • Haha
Reactions: 9 users
  • Like
Reactions: 4 users

Frangipani

Regular
  • Like
  • Wow
Reactions: 2 users

Mccabe84

Regular
I stand by my comment: until we have a signed deal, itā€™s nothing. Chat, partnerships, discussions, articles, and tweets mean diddly squat unless they sign, start using Akida, and revenue starts flowing. Until then they have zero commitment towards Brainchip. Thatā€™s just the way it is.
All the NDA talk means diddly squat too. It may or may not be true.
 
  • Like
  • Fire
Reactions: 10 users

Diogenese

Top 20
Hi Dio,

Just reading through your posts on Cerence, thank you for all the info.

I have a question about the classification. Do you think it will be a prebuilt/trained model, or is it capable of learning without cloud intervention?
If it is capable of on device learning, would the occupants of the car have to provide feedback regarding accuracy or the responses?
Hi Damo,

Glad you asked. I was just about to post that 10 years ago, Cerence was an entirely cloud-based system.

I'm pretty certain that they had no in-house digital or analog SNN SoC expertise, at least until they met BRN, which they must have done over at Markus SchƤfer's place. Their patents talk about NNs as a known quantity but do not evince any deep knowledge of the circuit configuration.

The Cerence patent has a priority date of 26 April 2021.

Just harking back to Fig. 1 above, there is no provision for on-line learning in the acoustic classifier:

[Patent Fig. 1 attachment]


[0038] The configuration data 160 is generally or largely determined before operation of the system. For example, an offline training/configuration component 180 uses an audio or transcribed text corpus of commands for one or more assistants to determine the structure of valid commands for those assistants, and a language model may be derived from the corpus for use in processing audio input containing speech. For example, the language model may be a statistical language model, meaning that different word sequences are associated with different probability or scores indicating their likelihood. In some cases, such a statistical language model is an ā€œn-gramā€ model in which statistics on n-long sequences of words are used to build the model. In some cases, some or all of the language model may have a finite-state form, for example, specifying a finite set of well-structured commands. In some cases, such a finite-state form is statistical in that different commands may have different probabilities or scores. In some examples, a combination of approaches, such as n-gram and finite-state forms, are combined in a finite-state transducer. It should be understood that there are a number of alternatives to the form of the language model. In some examples, the occurrences of wake words and/or names given to assistants, or generic subjects that may not be explicitly defined (e.g., ā€œcomputer,ā€ ā€œautomobile,ā€ ā€œyouā€), are identified in the corpus and are essentially replaced with placeholders to permit configuration of different words or names for assistants, including coined names that may not actually occur in the corpus, for use in runtime systems without having to derive new language models. In some examples, the language model used to automatically transcribe an audio input may be biased to avoid missing true occurrences of wake words that are part of commands in the process of automated transcription of audio input. In some examples, the language model provides a way to tag output words, for example, to indicate that the word occurred in the position of a subject or assistant name in the language model, and such tags are used in further processing of the output text.

[0039] In some examples, the configuration data 160 may be determined in part during operation or shortly before operation of the system. For example, an optional online training/configuration component 170 may receive, from the user, a name they have given to an assistant. For example, the user may say or type the word ā€œSophieā€ to name the assistant. The configuration data 160 is then amended to modify the language model to permit the word ā€œSophieā€ in the name position in the language model and/or modify the pronunciation of a placeholder for the name with a pronunciation of the word ā€œSophie,ā€ for example, determined by an automated text-to-phoneme converter (e.g., as is used in a speech synthesis system). In some examples, the online training/configuration component 170 may receive other information that is used in configuring components of the system, for example, the names of family members of the user and/or names of passengers in a vehicle. These names may be used to configure the language model to replace a placeholder for non-system names that may be present in utterances spoken by the user but that are not directed to the system
.

[0042] As introduced above, the reasoner 150 processes both text output 115 and the audio signal 105 . The audio signal 105 is processed by an acoustic classifier 152 . In some implementations, this classifier is a machine learning classifier that is configured with data (i.e., from configuration data 160 ) that was trained on examples of system-directed and of non-system directed utterance by an offline training system 180 . In some examples, the machine-learning component of the acoustic classifier 152 receives a fixed-length representation of the utterance (or at least the part of the utterance received to that point) and outputs a score (e.g., probability, log likelihood, etc.) that represents a confidence that the utterance is a command. For example, the machine-learning component can be a deep neural network. Note that such processing does not in general depend on any particular words in the input, and may instead be based on features such as duration, amplitude, or pitch variation (e.g., rising or falling pitch). In some implementations, the machine-learning component processes a sequence, for example, processing a sequence of signal processing features (e.g., corresponding to fixed-length frames) that represent time-local characteristics of the signal, such as amplitude, spectral, and/or pitch, and the machine-learning component processes the sequence to provide the output score. For example, the machine learning component can implement a convolutional or recurrent neural network.
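
As a side note on the "statistical language model" mentioned in [0038], a toy bigram model illustrates the idea; the corpus, smoothing and vocabulary size below are invented for the example and have nothing to do with Cerence's actual models:

```python
# Toy bigram language model, just to illustrate the "statistical language
# model" idea in paragraph [0038]. Corpus and smoothing are made up.

from collections import Counter, defaultdict
import math

corpus = [
    "hey sophie play music",
    "hey sophie call home",
    "sophie set temperature to twenty",
]

# Count word-pair occurrences, with sentence start/end markers.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def log_prob(sentence: str, alpha: float = 0.1, vocab: int = 50) -> float:
    """Add-alpha smoothed bigram log-probability of a word sequence."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    total = 0.0
    for a, b in zip(words, words[1:]):
        num = bigrams[a][b] + alpha
        den = sum(bigrams[a].values()) + alpha * vocab
        total += math.log(num / den)
    return total

# A well-formed command scores higher than the same words scrambled.
print(log_prob("hey sophie play music"))
print(log_prob("music play sophie hey"))
```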
 
  • Like
  • Fire
  • Love
Reactions: 17 users

Diogenese

Top 20
So 250 & 251 are the key regarding non-use of a wake word and whether or not to initiate the assistant.
Yes. That is the function of combined classifier 151 in Fig. 1.
 
  • Like
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its bicepsšŸ’Ŗ!
Robsmark, it is a genuine question. You are correct that Mercedes are not announced as a customer. However, they are in the "trusted by" section of our website, which implies use of Akida for future opportunities. They have openly stated that the use of Akida for what I understand to be this system reduces power consumption by 5x, or words to that effect.
My question is: are Cerence using Akida in their offering, or is there a pivot, or am I confusing apples and oranges?

Perhaps it is my lack of understanding that is leading to my confusion. The tweet and information at least gave me cause for pause. I honestly need help to understand if there is anything here or not.
And given all the continued chat about a Mercedes announcement by the top dogs in the company, should I really be considering that as nothing?!
Cerence is right down my alley...

 
  • Like
  • Fire
  • Love
Reactions: 14 users
Top Bottom