BRN Discussion Ongoing


An SWF article from EE Times, FYI - do we or don't we aspire to Transformers? 🤔

DF4079D1-AAB1-4518-982A-523BBE6370B8.jpeg
 
  • Like
  • Thinking
Reactions: 9 users

Moonshot

Regular
I'm probably the least tech savvy on this forum, but I don't see Ergo 2 as capable of one-shot or few-shot learning; it doesn't use a spiking neuromorphic architecture and is 100 times more power hungry than Akida.
It's 7 mm × 7 mm, so likely very expensive.
Agree it doesn't have on-chip learning, but they are claiming 30 fps at 17 mW, whereas Akida 1000 is 30 fps @ 157 mW in 28 nm.
Don't know what node Ergo is on, maybe Dio can help clarify?
Edit: built on GlobalFoundries' 22FDX process node

For comparison, Renesas DRP can do 30 frames per second at 3.1 Watts



Info on the process - same cost as 28 nm


By my reading, even after accounting for the process used to produce the chip, that still makes them about twice as power efficient as Akida. Need some expert help!
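For anyone who wants to sanity-check the arithmetic, here's a rough energy-per-frame comparison in Python using just the figures quoted above (a minimal sketch only; the numbers come from different networks and process nodes, so treat it as an order-of-magnitude comparison, not a fair benchmark):

Code:
# Rough energy-per-frame comparison using the figures quoted in this thread.
# Caveat: these results were reported on different networks and process nodes,
# so this is only an order-of-magnitude sanity check.
chips = {
    "Perceive Ergo 2 (claimed)": {"fps": 30, "power_w": 0.017},
    "Akida 1000 (2019 figure, MobileNet V1)": {"fps": 30, "power_w": 0.157},
    "Renesas DRP": {"fps": 30, "power_w": 3.1},
}

for name, spec in chips.items():
    mj_per_frame = spec["power_w"] / spec["fps"] * 1000  # millijoules per frame
    fps_per_watt = spec["fps"] / spec["power_w"]
    print(f"{name}: {mj_per_frame:.2f} mJ/frame, {fps_per_watt:.0f} fps/W")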
 
Last edited:
  • Like
Reactions: 7 users

stuart888

Regular
Prophesee have a White Paper out on event-based sensing, "Metavision for Machines". It probably isn't appropriate for me to publish it myself as I had to apply for it, but here is a small teaser for you
View attachment 27580

It sounds very exciting from where I am sitting. Let's hope Prophesee can get the word out to the world!
Yes, @Dhm! Prophesee is getting the word out, like the upcoming Jan 25th tinyML presentation!

Go Luca Verre, spread the word of the BrainChip Akida, sitting on the pole 🏁🏁🏁 (love it @TECH)!

https://www.tinyml.org/event/tinyml-trailblazers-series-tinyml-success-stories-with-luca-verre

1674256068959.png
 
  • Like
  • Fire
  • Love
Reactions: 37 users

Jasonk

Regular
Akida inside?
Seems to tick a lot of boxes and involves known partners.

1674256326251.png
 
  • Like
Reactions: 10 users

VictorG

Member
Agree it doesn't have on-chip learning, but they are claiming 30 fps at 17 mW, whereas Akida 1000 is 30 fps @ 157 mW in 28 nm.
Don't know what node Ergo is on, maybe Dio can help clarify?
Edit: built on GlobalFoundries' 22FDX process node

For comparison, Renesas DRP can do 30 frames per second at 3.1 Watts



Info on the process - same cost as 28 nm


By my reading, even after accounting for the process used to produce the chip, that still makes them about twice as power efficient as Akida. Need some expert help!
Agreed, this one is for @Diogenese
 
  • Like
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps 💪!

An SWF article from EE Times, FYI - do we or don't we aspire to Transformers? 🤔

View attachment 27624


As I understand it, AKIDA 2000 is going to be optimized for LSTM and transformers. Any comparison to Perceive would be difficult at this stage as AKIDA 2000 has yet to be launched.
 
  • Like
  • Fire
Reactions: 19 users

stuart888

Regular
10,000 fps equivalent. I take this to mean that Prophesee's DVS event camera can capture movement (a change in light impinging on a pixel) detected by individual pixels at 10,000 Hz, not a full screen of pixels in a frame. DVS cameras do not have a shutter, so the photoreceptor plate is continuously exposed to the field of view. The thing that would limit the speed at which a DVS could capture movement would be the response time of the pixels, unencumbered by any frame rate - without the inherent delay of the fixed frame configuration of normal video. The pixels fire asynchronously as the light impinging on the individual pixels changes.

Going back to Prophesee's comments about Akida, Akida can accept asynchronous input from individual pixels. It does not need to wait for a full frame of image data. It is able to receive individual "events" as they occur. So it seems from Prophesee's comments that Akida is capable of matching the performance of the Prophesee DVS, something that frame-based systems cannot do.

On the other hand, nViso has tested Akida with framed video to over 1.6 k fps.
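To make the frame-based versus event-based distinction concrete, here is a minimal, purely illustrative Python sketch (the Event fields and function names are my own, not Prophesee's or BrainChip's actual interfaces): a frame pipeline can do nothing until a complete image arrives, while an event pipeline handles each pixel change the moment it occurs.

Code:
from dataclasses import dataclass

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t_us: int       # timestamp in microseconds
    polarity: int   # +1 brightness increase, -1 decrease

def process_frame(frame):
    """Frame-based: latency is bounded below by the frame period,
    because nothing happens until the full 2-D array has been read out."""
    return sum(sum(row) for row in frame)  # placeholder for whole-frame inference

def process_event(event, state):
    """Event-based: each pixel change is handled as it arrives, so latency is
    limited by pixel response time rather than a fixed frame rate."""
    state[(event.x, event.y)] = event.t_us  # update only the pixel that changed
    return state

# Three asynchronous events, processed with no frame boundary in sight.
state = {}
for ev in [Event(10, 4, 120, 1), Event(11, 4, 135, 1), Event(10, 5, 410, -1)]:
    process_event(ev, state)
print(state)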
The 10,000 fps equivalent did baffle my mind, plus the DVS cameras without a shutter!

Lots of the answers to how this is done appear to be in BrainChip's fantastic SNN article "4 Bits Are Enough", but I also liked the Nature article on neuromorphic computing algorithms and applications, which screams BrainChip Akida. Cannot remember who posted it, but thanks, it really helps.

Highly parallel operation: neuromorphic operations are inherently parallel, where all of the neurons and synapses can potentially be operating simultaneously.

Simple operations: the computations performed by neurons and synapses are relatively simple when compared with the parallelized von Neumann systems.

Collocated processing and memory: there is no notion of a separation of processing and memory in neuromorphic hardware. Although neurons are sometimes thought of as processing units and synapses are sometimes thought of as memory, the neurons and synapses both perform processing and store values in many implementations. The collocation of processing and memory helps mitigate the von Neumann bottleneck regarding the processor/memory separation, which causes a slowdown in the maximum throughput that can be achieved.

https://www.nature.com/articles/s43588-021-00184-y
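To ground those three properties in something concrete, here is a toy leaky integrate-and-fire neuron in Python (my own illustrative sketch, not Akida's implementation): the per-step maths is just accumulate, leak and compare, every neuron can update independently of the others, and each neuron keeps its synaptic weights right next to its state rather than in a separate memory.

Code:
class LIFNeuron:
    """Toy leaky integrate-and-fire neuron: simple ops, parallelizable,
    with weights (memory) collocated with state (processing)."""
    def __init__(self, weights, threshold=1.0, leak=0.9):
        self.weights = weights      # synaptic "memory" lives with the neuron
        self.potential = 0.0        # membrane potential (state)
        self.threshold = threshold
        self.leak = leak

    def step(self, input_spikes):
        # Simple operations only: accumulate weights of active inputs, leak, compare.
        self.potential = self.potential * self.leak + sum(
            w for w, s in zip(self.weights, input_spikes) if s
        )
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1                # emit a spike
        return 0

neuron = LIFNeuron(weights=[0.4, 0.3, 0.6])
print([neuron.step(spikes) for spikes in ([1, 0, 1], [0, 1, 0], [1, 1, 1])])  # [1, 0, 1]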
1674259691012.png
 
  • Like
  • Fire
Reactions: 11 users

VictorG

Member
Agreed, this one is for @Diogenese
Not one for quoting ChatGPT but I use this instance as an exception.

Differentiate between Akida and Ergo 2.

Akida and Ergo 2 are both neuromorphic processors developed by different companies for a wide range of applications such as image and speech recognition, autonomous vehicles, and industrial automation. However, there are some key differences between the two processors.

Architecture: Akida is based on a spiking neural network architecture, which mimics the behavior of neurons in the brain. Ergo 2, on the other hand, is based on a more traditional artificial neural network architecture.

Power consumption: Akida is designed to be highly power-efficient, with low power consumption and high performance. Ergo 2 also claims to be low-power but the specific power consumption figures are not publicly available.

Programming: Akida provides a software development kit (SDK) and programming model that is designed to be easy to use, even for developers with limited experience in neuromorphic computing. Ergo 2 also has a development kit but the information about the programming model is not publicly available.

Pricing: Akida is available as a system-on-a-chip (SoC) and as a development board, with the SoC available for $1.65 in volume orders. Ergo 2 is available as a development kit that includes a development board and software tools, and is priced at $2,995.

Company: Akida is developed by Brainchip, a company that specializes in neuromorphic computing solutions. Ergo 2 is developed by Mythic, a company that specializes in low-power artificial intelligence processors.

In summary, while both Akida and Ergo 2 are neuromorphic processors designed for a wide range of applications, they differ in terms of architecture, power consumption, programming, pricing and the company that developed them.
 
  • Like
  • Fire
  • Love
Reactions: 59 users

Serengeti

Regular
Happy Saturday everyone!

Came across a number of AI podcasts and thought I'd share. Not directly linked to BRN, they discuss AI in general but thought I would post for those who may be interested.



 
  • Like
  • Fire
  • Love
Reactions: 18 users

misslou

Founding Member
Not one for quoting ChatGPT but I use this instance as an exception.

Differentiate between Akida and Ergo 2.

Akida and Ergo 2 are both neuromorphic processors developed by different companies for a wide range of applications such as image and speech recognition, autonomous vehicles, and industrial automation. However, there are some key differences between the two processors.

Architecture: Akida is based on a spiking neural network architecture, which mimics the behavior of neurons in the brain. Ergo 2, on the other hand, is based on a more traditional artificial neural network architecture.

Power consumption: Akida is designed to be highly power-efficient, with low power consumption and high performance. Ergo 2 also claims to be low-power but the specific power consumption figures are not publicly available.

Programming: Akida provides a software development kit (SDK) and programming model that is designed to be easy to use, even for developers with limited experience in neuromorphic computing. Ergo 2 also has a development kit but the information about the programming model is not publicly available.

Pricing: Akida is available as a system-on-a-chip (SoC) and as a development board, with the SoC available for $1.65 in volume orders. Ergo 2 is available as a development kit that includes a development board and software tools, and is priced at $2,995.

Company: Akida is developed by Brainchip, a company that specializes in neuromorphic computing solutions. Ergo 2 is developed by Mythic, a company that specializes in low-power artificial intelligence processors.

In summary, while both Akida and Ergo 2 are neuromorphic processors designed for a wide range of applications, they differ in terms of architecture, power consumption, programming, pricing and the company that developed them.
Can you imagine the customers' company meetings to decide which one to go with?

Hmmm, shall we choose the $1.65 or the $2,995.00?
 
  • Like
  • Haha
  • Love
Reactions: 40 users

VictorG

Member
Not one for quoting ChatGPT but I use this instance as an exception.

Differentiate between Akida and Ergo 2.

Akida and Ergo 2 are both neuromorphic processors developed by different companies for a wide range of applications such as image and speech recognition, autonomous vehicles, and industrial automation. However, there are some key differences between the two processors.

Architecture: Akida is based on a spiking neural network architecture, which mimics the behavior of neurons in the brain. Ergo 2, on the other hand, is based on a more traditional artificial neural network architecture.

Power consumption: Akida is designed to be highly power-efficient, with low power consumption and high performance. Ergo 2 also claims to be low-power but the specific power consumption figures are not publicly available.

Programming: Akida provides a software development kit (SDK) and programming model that is designed to be easy to use, even for developers with limited experience in neuromorphic computing. Ergo 2 also has a development kit but the information about the programming model is not publicly available.

Pricing: Akida is available as a system-on-a-chip (SoC) and as a development board, with the SoC available for $1.65 in volume orders. Ergo 2 is available as a development kit that includes a development board and software tools, and is priced at $2,995.

Company: Akida is developed by Brainchip, a company that specializes in neuromorphic computing solutions. Ergo 2 is developed by Mythic, a company that specializes in low-power artificial intelligence processors.

In summary, while both Akida and Ergo 2 are neuromorphic processors designed for a wide range of applications, they differ in terms of architecture, power consumption, programming, pricing and the company that developed them.
I think my key takeaway from this is that the world is moving towards SNN while Ergo 2 is developing ANN.
Sort of like Ergo 2 is the world's best gas engine technology while the world has embraced electric cars.
 
  • Like
  • Fire
  • Haha
Reactions: 18 users

HopalongPetrovski

I'm Spartacus!
I think my key takeaway from this is that the world is moving towards SNN while Ergo 2 is developing ANN.
Sort of like Ergo 2 is the world's best gas engine technology while the world has embraced electric cars.
They're making the best damn buggy whips money can buy. 🤣
 
  • Haha
  • Like
Reactions: 13 users

Dhm

Regular
Not sure what to expect with the next 4C. Many here are wary and with good reason. If the numbers are less than expected I am hoping for some strong spin with the commentary. Last time we got "most sales activity and engagement ever", or words to that effect. That is a very strong statement and we can take hope from it. But not necessarily expectation.

But the Qualcomm video could imply that they have already generated revenue for us. And if so, how many others may be doing the same in their markets? Up until a few weeks ago we hadn't heard of VVDN, then out of the blue there they are.

Interesting times ahead, especially with so many shorts that may need covering.
 
  • Like
  • Love
  • Fire
Reactions: 35 users

alwaysgreen

Top 20
Not sure what to expect with the next 4C. Many here are wary and with good reason. If the numbers are less than expected I am hoping for some strong spin with the commentary. Last time we got "most sales activity and engagement ever", or words to that effect. That is a very strong statement and we can take hope from it. But not necessarily expectation.

But the Qualcomm video could imply that they have already generated revenue for us. And if so, how many others may be doing the same in their markets? Up until a few weeks ago we hadn't heard of VVDN, then out of the blue there they are.

Interesting times ahead, especially with so many shorts that may need covering.
I wish we had a few additional license announcements over the course of the year, but are they required if the licenses are purchased through MegaChips?

I'm looking forward to the half-year report more than the 4C. The 4C just shows cash receipts (which we know will likely not be high), but the half-yearly will show any revenue that may have been generated by companies entering into contracts with MegaChips or Renesas.
 
  • Like
  • Fire
Reactions: 27 users

TheDon

Regular
Had a dream the other night that this coming 4C is so good that the SP went up to $30.
😁

TheDon
 
  • Like
  • Haha
  • Love
Reactions: 42 users

Earlyrelease

Regular
I think my key takeaway from this is that the world is moving towards SNN while Ergo 2 is developing ANN.
Sort of like Ergo 2 is the world's best gas engine technology while the world has embraced electric cars.
VG
No, it's all good. I believe Ergo 2 has also teamed up with Kodak to ensure the results are recorded on the best wet film available. :cool:
 
  • Haha
  • Like
Reactions: 17 users

Diogenese

Top 20
Agree it doesn't have on-chip learning, but they are claiming 30 fps at 17 mW, whereas Akida 1000 is 30 fps @ 157 mW in 28 nm.
Don't know what node Ergo is on, maybe Dio can help clarify?
Edit: built on GlobalFoundries' 22FDX process node

For comparison, Renesas DRP can do 30 frames per second at 3.1 Watts



Info on the process - same cost as 28 nm


By my reading, even after accounting for the process used to produce the chip, that still makes them about twice as power efficient as Akida. Need some expert help!
When comparing chalk and cheese, are we talking camembert or parmesan?

Perceive's figures are for the YOLO V5 database.

In 2019, the Akida simulator did 30 fps @ 157 mW on MobileNet V1.

1674264832523.png

I don't have the fps/W figures for the Akida 1 SoC performance, so if anyone has those to hand, it would be much appreciated.

The chip simulator seems to have topped out at 80 fps. We know Akida 1 SoC can top 1600 fps (don't recall what database), which is 20 times faster than the simulator, so it wouldn't even get into second gear doing 30 fps.

Back in 2018, Bob Beachler said they expected 1400 frames per second per Watt.
https://www.eejournal.com/article/brainchip-debuts-neuromorphic-chip/

So 30 fps would be 20 mW if there is a linear dependence. And, of course, we were told that the SoC performed better than expected.
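As a quick check of that linear extrapolation (just arithmetic, on the assumption that power scales linearly with frame rate, which real silicon only approximates):

Code:
fps_per_watt = 1400                 # Bob Beachler's 2018 expectation
target_fps = 30
power_mw = target_fps / fps_per_watt * 1000
print(f"{power_mw:.1f} mW")         # ~21.4 mW, i.e. roughly the 20 mW mentioned above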

But the main thing is that the comparison databases need to be the same, as performance varies depending on the database.

Perceive relies on compression to achieve extreme sparsity by ignoring the zeros in the multiplier.

US11003736B2 Reduced dot product computation circuit

1674271057333.png


[0003] Some embodiments provide an integrated circuit (IC) for implementing a machine-trained network (e.g., a neural network) that computes dot products of input values and corresponding weight values (among other operations). The IC of some embodiments includes a neural network computation fabric with numerous dot product computation circuits in order to process partial dot products in parallel (e.g., for computing the output of a node of the machine-trained network). In some embodiments, the weight values for each layer of the network are ternary values (e.g., each weight is either zero, a positive value, or the negation of the positive value), with at least a fixed percentage (e.g., 75%) of the weight values being zero. As such, some embodiments reduce the size of the dot product computation circuits by mapping each of a first number (e.g., 144) input values to a second number (e.g., 36) of dot product inputs, such that each dot product input only receives at most one input value with a non-zero corresponding weight value.

1. A method for implementing a machine-trained network that comprises a plurality of processing nodes, the method comprising:

at a particular dot product circuit performing a dot product computation for a particular node of the machine-trained network:

receiving (i) a first plurality of input values that are output values of a set of previous nodes of the machine-trained network and (ii) a set of machine-trained weight values associated with a set of the input values;

selecting, from the first plurality, a second plurality of input values that is a smaller subset of the first plurality of input values, said selecting comprising (i) selecting the input values from the first plurality of input values that are associated with non-zero weight values, and (ii) not selecting a group of input values from the first plurality of input values that are associated with weight values that are equal to zero;

computing a dot product based on (i) the second plurality of input values and (ii) weight values associated with the second plurality of input values.
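In plain terms (my paraphrase, illustrated with a hypothetical Python sketch rather than Perceive's actual circuit), the claim boils down to: select only the inputs whose weights are non-zero, then take the dot product over that smaller subset.

Code:
def sparse_dot(inputs, weights):
    """Dot product that only routes and multiplies inputs with non-zero weights,
    mirroring the selection step described in the claim above."""
    selected = [(x, w) for x, w in zip(inputs, weights) if w != 0]
    return sum(x * w for x, w in selected)

# Ternary weights (0, +v, -v) with 75% zeros, as in the cited embodiment.
inputs  = [3, 1, 4, 1, 5, 9, 2, 6]
weights = [0, 0, 0.5, 0, 0, -0.5, 0, 0]
print(sparse_dot(inputs, weights))  # only 2 of the 8 multiplies actually happen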

ASIDE: This isn't relevant to the discussion, but I stumbled across it just now and it shows that we developed a dataset for MagikEye in 2020, which I had not observed before.
1674269102740.png
 
  • Like
  • Love
  • Fire
Reactions: 49 users

Moonshot

Regular
When comparing chalk and cheese, are we talking camembert or parmesan?

Perceive's figures are for the YOLO V5 database.

In 2019, the Akida simulator did 30 fps @ 157 mW on MobileNet V1.

View attachment 27636
I don't have the fps/W figures for the Akida 1 SoC performance, so if anyone has those to hand, it would be much appreciated.

The chip simulator seems to have topped out at 80 fps. We know Akida 1 SoC can top 1600 fps (don't recall what database), which is 20 times faster than the simulator, so it wouldn't even get into second gear doing 30 fps.

Back in 2018, Bob Beachler said they expected 1400 frames per second per Watt.
https://www.eejournal.com/article/brainchip-debuts-neuromorphic-chip/

So 30 fps would be 20 mW if there is a linear dependence. And, of course, we were told that the SoC performed better than expected.

But the main thing is that the comparison databases need to be the same, as performance varies depending on the database.

Perceive relies on compression to achieve extreme sparsity by ignoring the zeros in the multiplier.

US11003736B2 Reduced dot product computation circuit

View attachment 27638

[0003] Some embodiments provide an integrated circuit (IC) for implementing a machine-trained network (e.g., a neural network) that computes dot products of input values and corresponding weight values (among other operations). The IC of some embodiments includes a neural network computation fabric with numerous dot product computation circuits in order to process partial dot products in parallel (e.g., for computing the output of a node of the machine-trained network). In some embodiments, the weight values for each layer of the network are ternary values (e.g., each weight is either zero, a positive value, or the negation of the positive value), with at least a fixed percentage (e.g., 75%) of the weight values being zero. As such, some embodiments reduce the size of the dot product computation circuits by mapping each of a first number (e.g., 144) input values to a second number (e.g., 36) of dot product inputs, such that each dot product input only receives at most one input value with a non-zero corresponding weight value.

1. A method for implementing a machine-trained network that comprises a plurality of processing nodes, the method comprising:

at a particular dot product circuit performing a dot product computation for a particular node of the machine-trained network:

receiving (i) a first plurality of input values that are output values of a set of previous nodes of the machine-trained network and (ii) a set of machine-trained weight values associated with a set of the input values;

selecting, from the first plurality, a second plurality of input values that is a smaller subset of the first plurality of input values, said selecting comprising (i) selecting the input values from the first plurality of input values that are associated with non-zero weight values, and (ii) not selecting a group of input values from the first plurality of input values that are associated with weight values that are equal to zero;

computing a dot product based on (i) the second plurality of input values and (ii) weight values associated with the second plurality of input values.

ASIDE: This isn't relevant to the discussion, but I stumbled across it just now and it shows that we developed a dataset for MagikEye in 2020, which I had not observed before.
View attachment 27637
You're a champion Dio
 
  • Like
  • Fire
Reactions: 13 users

Bravo

If ARM was an arm, BRN would be its biceps 💪!
Howdy Brain Fam,

Hope everyone is having a great weekend. Let's hope I can make it even better!

I just watched the Cerence 25th Annual Needham Growth Conference presentation, which was filmed on 10th Jan 2023. It's an approximately 40-minute video presentation that you have to sign up for to watch (full name and email address required for access). The link is here if you're interested in watching: https://wsw.com/webcast/needham

I'm itching to share a bit of information from the presentation because I believe there were numerous points raised throughout it that quite strongly indicate the possible use of our technology in Cerence's embedded voice solution, IMO.

For some background, Cerence is a global leader in conversational AI and they state that they are the only company in the world to offer the "complete stack", including conversational AI, audio, and speech-to-text AI. Cerence state that every second newly defined SOP (start of production) car uses their technology, and they're working with some very big names such as BYD, NIO, GM, Ford, Toyota, Volkswagen, Stellantis, Mercedes, BMW.

In the presentation they discussed how in November they held their second Analyst Day, at which they outlined their new strategy called "Destination Next". They said that from a technology perspective this strategy or transition means they are going to be evolving from a voice-only, driver-centric solution via their Cerence Assistant or Co-Pilot to a truly immersive in-cabin experience. Stefan Ortmanns (CEO, Cerence) said early in the presentation something like "which means we're bringing in more features and applications beyond conversational AI, for example wellness sensing, for example surrounding awareness, emotional AI or the interaction inside and outside the car with passengers, and we have all these capabilities for creating a really immersive companion". He also said something about the underlying strategy being based on 3 pillars, "scalable AI, teachable AI, and the immersive in-cabin experience", which has been brought about as a result of a "huge appetite for AI".

At about 6 mins Stefan Ortmanns says they have recently been shifting gear to bring in more proactive AI, and he said something along these lines: "What does it mean? So you bring everything you get together, so you have access to the sensors in the car, you have an embedded solution, you have a cloud solution, and you also have this proactive AI, for example the road conditions or the weather conditions. And if you can bring everything together you have a personalised solution for the driver and also for the passengers, and this combines with what we call the (??? mighty ?? intelligence). And looking forward, for the immersive experience you need to bring in more together; it's not just about speech, it's about AI in general, right, so, with what I said, wellness sensing, drunkenness detection, you know we're working on all this kind of cool stuff. We're working on emotional AI to have a better engagement with the passengers and also with the driver. And this is our future road map and we have vetted this with 50-60 OEMs across the globe and we did it together with a very well-known consultancy firm."

At about 13 mins they describe how there will be very significant growth in fiscal years 23/24 because of the bookings they have won over the last 18 to 24 months, which will go into production at the end of this year and very early in 2024, and a lot of them will have the higher tech stack that Stefan talked about.

At roughly 25 mins Stefan Ortmanns is asked how they compete with big tech like Alexa, Google and Apple, and how they are co-existing, because there are certain OEMs using Alexa and certain ones using Cerence as well. In terms of what applications Cerence is providing, Stefan replied stating something like "Alexa is a good example, so what you're seeing in the market is that OEMs are selecting us for their OEM-branded solution and we are providing the wake word for interacting with Alexa, that's based on our core technology".

Now here comes the really good bit. At 29 mins the conversation turns to partnership statements, and they touch on NVIDIA and whether Cerence views NVIDIA as a competitor or partner (sounds familiar). This question was asked in relation to NVIDIA having its own Chauffeur product, which enables some voice capabilities with its own hardware and software; however, Cerence has also been integrated into NVIDIA's DRIVE platform. In describing this relationship, Stefan Ortmanns says something like "So you're right. They have their own technology, but our technology stack is more advanced. And here we're talking about specifically Mercedes, where they're positioned with their hardware and with our solution. There's also another semiconductor big player, namely Qualcomm; now they are working with the Volkswagen Group and they're also using our technology. So we're very flexible and open with our partners".

Following on from that, they discuss how Cerence is also involved in the language processing for BMW, which has to be "seamlessly integrated" with "very low latency".

So, a couple of points I wanted to throw in to emphasise why I think all of this so strongly indicates that BrainChip's technology is part of Cerence's stack.
  • Cerence presented Mercedes as the premium example with which to demonstrate how advanced their voice technology is in comparison to NVIDIA's. Since this presentation is only a few days old, I don't think they'd be referring to Mercedes' old voice technology, but rather the new advanced technology developed for the Vision EQXX. And I don't think Cerence would be referring to Mercedes at all if they weren't still working with them.
  • This is after Mercedes worked with BrainChip on the "wake word" detection for the Vision EQXX, which made it 5-10 times more efficient. So it only seems logical, if Cerence's core technology is to provide the wake word, that they should incorporate BrainChip's technology to make the wake word detection 5-10 times faster.
  • In November 2022 Nils Schanz, who was responsible for user interaction and voice control at Mercedes and who also worked on the Vision EQXX voice control system, was appointed Chief Product Officer at Cerence.
  • Previous post in which Cerence describe their technology as "self-learning", etc. #6,807
  • Previous post in which Cerence technology is described as working without an internet connection #35,234 and #31,305
  • I'm no engineer, but I would have thought the new emotion detection AI and contextual awareness AI that are connected to the car's sensors must be embedded into Cerence's technology for it all to work seamlessly.
Anyway, naturally I would love to hear what @Fact Finder has to say about this. As we all know, he is the world's utmost guru at sorting the wheat from the chaff and always stands ready to pour cold water on any outlandish dot-joining attempts when the facts don't stack up.

Of course, these are only the conclusions I have arrived at after watching the presentation, and I would love to hear what everyone else's impressions are. Needless to say, I hope I'm right.

B 💋



Howdy Gang!

Following on from Cerence's 25th Annual Needham Growth Conference (as discussed above), I've just had the opportunity to take a peek at the Cerence Investor Day "Destination Next" PDF, which you can access here if you wish: https://cerence.gcs-web.com/static-files/4fd79f3e-2c0a-429a-841e-dcc68fc52e83

So the way I see it now is that AKIDA is 99.999999% likely to be incorporated in the "Cerence Immersive Companion", and I'll explain why.

Firstly, you'll see that one of the slides I've posted lists the "New Innovations", including enhanced in-cabin audio AI, surroundings awareness, teachable AI, and a multi-sensory experience platform, in addition to other features. These New Innovations will be rolled out in FY 23/24.

Secondly, there's a slide that shows Nils Schanz, and it highlights his work on the MBUX, hyperscreen and "Hey Mercedes" voice assistant.

Thirdly, remember at Cerence's 25th Annual Needham Growth Conference when Stefan Ortmanns said, in relation to NVIDIA, something like "So you're right. They have their own technology, but our technology stack is more advanced. And here we're talking about specifically Mercedes, where they're positioned with their hardware and with our solution." I believe that this comment was made in reference to the Vision EQXX voice control system, which is what I think Cerence is referring to as the new "enhanced in-cabin audio AI" that will roll out in FY23/24 as part of the Cerence Immersive Companion.

Fourthly, another slide shows "Multisensory Inputs", which includes a full HMI solution integrating multi-modal inputs (incl. conversational AI, DMS, facial, gaze, gesture, emotion, etc.) and multi-modal outputs (incl. voice, graphic, haptic, vibration, etc.). I bolded the etc. because I thought that could be hinting at smell and taste.

Fifthly, Stefan Ortmanns said that Cerence's technology is more advanced than NVIDIA's and that Qualcomm are using it too, so that rules out NVIDIA and Qualcomm from being the magic edge computing ingredient.

Sixthly, Arm and Cerence are partners.

Seventhly, everything adds up nicely in terms of functionality, time-lines and just the general vibe IMO.

Eighthly, DYOR because I often have no idea what I'm talking about.

B 💋

Screen Shot 2023-01-21 at 2.18.36 pm.png



Screen Shot 2023-01-21 at 2.18.22 pm.png


Screen Shot 2023-01-21 at 2.18.05 pm.png

Screen Shot 2023-01-21 at 2.19.09 pm.png

Screen Shot 2023-01-21 at 2.18.56 pm.png


Screen Shot 2023-01-21 at 2.16.40 pm.png
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 83 users

stuart888

Regular
Maybe it's the drag coefficient (64% of aerodynamics), and you add in 4 Bits Are Enough for the rest!

 
  • Like
  • Love
  • Fire
Reactions: 15 users