McHale
A lot to think about, McH.
Note to self:
A. models
B. Mercedes NAOMI4
C. s/w
A. Models are what the NN has to search through.
I'll confine my thoughts to images and speech, but other sensor inputs are treated on the same principles.
Images: Static (photos, drawings); Moving (Video)
Sound: Key word spotting, NLP; other sounds.
Each of these can be divided into several layers of sub-categories with increasing specificity. In a NN, the larger the model, the more power is consumed per inference/classification, because the processor has to work through more of the model to decide which category the sensor input most nearly resembles.
Thus it makes sense to have specific models for specific tasks. The narrower the task, the smaller the model can be.
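A rough back-of-envelope illustration of why a narrower task means a cheaper model: count the multiply-accumulate operations one inference costs in a small dense classifier. All layer sizes here are invented for the example, not real Akida figures.

```python
def dense_macs(layer_sizes):
    """Multiply-accumulate ops for one pass through fully-connected layers."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical broad 1000-class "everything" model vs. a narrow
# 10-class road-object model with the same input size.
broad = dense_macs([1024, 512, 256, 1000])   # 911,360 MACs
narrow = dense_macs([1024, 128, 64, 10])     # 139,904 MACs
print(round(broad / narrow, 1))              # roughly 6.5x the work per inference
```

Since energy per inference scales with operations performed, the narrow model does the same job on its restricted domain for a fraction of the power.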
For example, with image classification in an ADAS/AV, images of astronomy or scuba diving are irrelevant. So ADAS models are compiled from millions of images captured from vehicle-mounted cameras/videos.
Akida excels at classifying static images, and can do this at many frames per second. However, Akida 1 then relied on the associated CPU running software to process the classified images to determine an object's speed and direction. That's the genius of TENNS - it is capable of performing the speed analysis in silicon or in software far more efficiently than conventional software.
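To make the division of labour concrete: the post-classification step that Akida 1 handed to the host CPU is essentially tracking - turning per-frame detections into speed and direction. A simplified sketch (the pixel positions and frame rate are made up; real ADAS tracking is far more elaborate):

```python
import math

def track(centers, fps):
    """Estimate speed (pixels/s) and heading from successive box centres."""
    (x0, y0), (x1, y1) = centers[-2], centers[-1]
    dx, dy = x1 - x0, y1 - y0
    speed = math.hypot(dx, dy) * fps            # displacement per second
    heading = math.degrees(math.atan2(dy, dx))  # 0 deg = moving right in the frame
    return speed, heading

# Hypothetical object centres from two consecutive frames of a 30 fps camera.
speed, heading = track([(100, 200), (103, 204)], fps=30)
```

Doing this per-object, per-frame in software is exactly the kind of repetitive temporal work that a network handling time natively (as TENNS is described as doing) can absorb.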
I prefer to talk about images/video because natural language processing is something I struggle to comprehend, but apparently TENNS makes this a cakewalk too.
OpenAI tries to have everything in its model, but that burns a massive amount of energy for a single inquiry - a bit like biting off more than it can chew.
So now we have RAG (retrieval-augmented generation), where subject-specific information can be retrieved and fed in depending on what the NN processor is intended to do.
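The RAG idea in a nutshell: rather than baking everything into one giant model, retrieve the stored text most relevant to the query and hand only that to the model. A toy keyword-overlap retriever shows the shape of it (documents invented for the example; real systems use vector embeddings, not word overlap):

```python
def retrieve(query, docs, k=1):
    """Rank documents by shared words with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "lidar point cloud processing for vehicles",
    "sourdough bread baking temperatures",
    "pedestrian detection from vehicle cameras",
]
top = retrieve("detect pedestrians from cameras on a vehicle", docs)
```

The model then only has to reason over the one retrieved passage instead of everything it was ever trained on - which is the energy argument made above.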
B. NAOMI4 - Yes. This is a German government-funded research project and will not produce a commercial outcome any time soon.
C. H/W v S/W
Valeo does not have Akida silicon in its SCALA 3. It uses software to process the lidar sensor signals. Because we've been working with them for several years in a JD, I'm hopeful that the software will include Akida 2/TENNS simulation software. Sean did mention that we now have an algorithm product line.
The rationale for this was explained in the Derek de Bono/Valeo podcast posted yesterday - software allows for continual upgrading. He also mentioned that provision for some H/W upgrades could also be accommodated. Given TENNS' young age, it will have developed significantly in the last couple of years, so it could not be set in silicon at this early stage, although Anil did announce some now-deferred preparations for taping out some months ago.
Again, I am hopeful that Akida 2/TENNS will be included in the software of both Valeo and Mercedes SDVs (and in other EAP participants' products) because it produces real-time results at a much lower power consumption.
Then there's PICO ... the dormant watchdog ...
Hi Dio, thanks for your response to my post from last Thursday, but I must admit to not having put what I was trying to say in a clear or properly worded fashion.
When I was talking about models I really meant the different programming/coding languages and frameworks that can be (need to be) used to interface with the various versions/iterations of Akida - for instance Python, PyTorch, Keras and a number of others I have seen mentioned.
So in my post I used "models" in the wrong context. Although I don't understand a good deal of the pertinent technical niceties, I do believe I know that a model is like a library that can be used for certain applications/uses of Akida, which you also described.
Going back to the coding languages: why do the various iterations of Akida require the use of different coding languages - if that statement is in fact correct? And regardless, why are different coding languages needed at all? I do know that several different ones are used.