Howdy Brain Fam,
Hope everyone is having a great weekend. Let's hope I can make it even better!
I just watched the Cerence 25th Annual Needham Growth Conference presentation, which was filmed on 10 January 2023. It's an approximately 40-minute video presentation that you have to sign up to watch (full name and email address required for access). The link is below if you're interested in watching.
https://wsw.com/webcast/needham
I'm itching to share some information from the presentation because, IMO, numerous points were raised throughout that quite strongly indicate the possible use of our technology in Cerence's embedded voice solution.
For some background, Cerence is a global leader in conversational AI, and they state that they are the only company in the world to offer the "complete stack", including conversational AI, audio AI, and speech-to-text AI. Cerence state that every second newly defined SOP (start of production) car uses their technology, and they're working with some very big names such as BYD, NIO, GM, Ford, Toyota, Volkswagen, Stellantis, Mercedes, and BMW.
In the presentation they discussed how in November they held their second Analyst Day, at which they outlined their new strategy called "Destination Next". They said that from a technology perspective this strategy or transition means they are evolving from a voice-only, driver-centric solution via their Cerence Assistant or Co-Pilot to a truly immersive in-cabin experience. Stefan Ortmanns (CEO of Cerence) said early in the presentation something like "which means we're bringing in more features and applications beyond conversational AI, for example wellness sensing, surrounding awareness, emotional AI, or the interaction inside and outside the car with passengers, and we have all these capabilities for creating a really immersive companion". He also said the underlying strategy is based on three pillars, "scalable AI, teachable AI, and the immersive in-cabin experience", which has been brought about as a result of a "huge appetite for AI".
At about 6 mins Stefan Ortmanns says they have recently been shifting gear to bring in more proactive AI, and he said something along these lines: "What does it mean? So you bring everything you get together, so you have access to the sensors in the car, you have an embedded solution, you have a cloud solution, and you also have this proactive AI, for example the road conditions or the weather conditions. And if you can bring everything together you have a personalised solution for the driver and also for the passengers, and this is combined with what we call the (??? mighty ?? intelligence). And looking forward, for the immersive experience you need to bring more together; it's not just about speech, it's about AI in general, right, so, as I said, wellness sensing, drunkenness detection, you know we're working on all this kind of cool stuff. We're working on emotional AI to have a better engagement with the passengers and also with the driver. And this is our future road map, and we have vetted this with 50-60 OEMs across the globe, and we did it together with a very well-known consultancy firm."
At about 13 mins they describe how there will be very significant growth in fiscal years 23/24 because of the bookings they have won over the last 18 to 24 months, which will go into production at the end of this year and very early in 2024, and a lot of them will have the higher tech stack that Stefan talked about.
At roughly 25 mins Stefan Ortmanns is asked how they compete with big tech like Alexa, Google, and Apple, and how they are co-existing, given that certain OEMs are using Alexa and certain ones are using Cerence as well. In terms of what applications Cerence is providing, Stefan replied with something like "Alexa is a good example, so what you're seeing in the market is that OEMs are selecting us for their OEM-branded solution and we are providing the wake word for interacting with Alexa, that's based on our core technology".
Now here comes the really good bit. At 29 mins the conversation turns to partnership statements, and they touch on NVIDIA and whether Cerence view NVIDIA as a competitor or a partner (sounds familiar). This question was asked in relation to NVIDIA having its own Chauffeur product, which enables some voice capabilities with its own hardware and software; however, Cerence has also been integrated into NVIDIA's DRIVE platform. In describing this relationship, Stefan Ortmanns says something like "So you're right. They have their own technology, but our technology stack is more advanced. And here we're talking specifically about Mercedes, where they're positioned with their hardware and with our solution. There's also another big semiconductor player, namely Qualcomm; they are now working with the Volkswagen group and they're also using our technology. So we're very flexible and open with our partners".
Following on from that, they discuss how Cerence is also involved in the language processing for BMW, which has to be "seamlessly integrated" with "very low latency".
So, here are a couple of points I wanted to throw in to emphasise why I think all of this so strongly indicates that BrainChip's technology is part of Cerence's stack.
- Cerence presented Mercedes as the premium example with which to demonstrate how advanced their voice technology is in comparison to NVIDIA's. Since this presentation is only a few days old, I don't think they'd be referring to Mercedes' old voice technology, but rather the new advanced technology developed for the Vision EQXX. And I don't think Cerence would be referring to Mercedes at all if they weren't still working with them.
- This is after Mercedes worked with BrainChip on the "wake word" detection for the Vision EQXX, which made it 5-10 times more efficient. So, if Cerence's core technology is to provide the wake word, it only seems logical that they would incorporate BrainChip's technology to make the wake word detection 5-10 times more efficient.
- In November 2022 Nils Schanz, who was responsible for user interaction and voice control at Mercedes and who worked on the Vision EQXX voice control system, was appointed Chief Product Officer at Cerence.
- Previous post in which Cerence describe their technology as "self-learning", etc.: #6,807
- Previous post in which Cerence technology is described as working with an internet connection #35,234 and #31,305
- I’m no engineer, but I would have thought the new emotion-detection AI and contextual-awareness AI, which are connected to the car’s sensors, must be embedded into Cerence’s technology for it all to work seamlessly.
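As a side note on what "wake word" detection actually involves, for anyone less familiar: it's an always-on, low-power gate that listens for a single phrase and only wakes the heavy speech pipeline when it fires, which is exactly where efficiency gains like the 5-10x figure matter. Here's a toy sketch of the idea; this is purely illustrative and nothing like Cerence's or BrainChip's actual implementation (real systems use trained acoustic or neuromorphic models, and the template values here are made up):

```python
# Toy wake-word gate: compare an incoming audio feature vector against a
# stored template and only "wake" the expensive pipeline on a strong match.
# Illustrative only -- template and threshold values are invented.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical stored feature vector for the wake phrase.
WAKE_TEMPLATE = [0.9, 0.1, 0.4, 0.7]

def is_wake_word(frame_features, threshold=0.95):
    """Cheap always-on check; only above the threshold does the
    full speech-recognition stack get woken up."""
    return cosine(frame_features, WAKE_TEMPLATE) >= threshold
```

The point of the sketch is the design: the cheap check runs constantly, and the power-hungry recognition stack runs only when the gate fires, so making the gate itself more efficient pays off continuously.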
Anyway, naturally I would love to hear what
@Fact Finder has to say about this. As we all know, he is the world's foremost guru at sorting the wheat from the chaff and always stands at the ready to pour cold water on any outlandish dot-joining attempts when the facts don't stack up.
Of course, these are only the conclusions I arrived at after watching the presentation, and I would love to hear what everyone else’s impressions are. Needless to say, I hope I'm right.
B