That fantastic article 4 Bits Are Enough got me digging a little deeper into some of PvDM's writings.
https://www.mdpi.com/1424-8220/19/22/4831
I was thinking about the input spike data. For simpler pattern detection, my assumption is that it's typically an array of numbers. I could be way off with voice or text, but video pixels would be numbers, sound samples would be numbers, all packed into an array, I believe. A big array, lots of numbers.
So I thought Akida version one was targeting use cases where events happen rarely. For example, a battery-powered wilderness video camera waiting for an endangered animal: only when it sees the very rare Florida panther (fewer than 300 alive) does it trigger an event.
So lots of zeros, until some sort of animal passes the camera and motion detection triggers it.
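Here's a toy sketch of that sparsity idea (my own illustration, not BrainChip's actual pipeline): a mostly-zero frame gets turned into a short list of events, so only the nonzero pixels need to be passed along.

```python
import numpy as np

# Hypothetical 8x8 grayscale frame, mostly zeros (nothing in front of the camera)
frame = np.zeros((8, 8), dtype=np.uint8)

# An "animal" shows up as a couple of nonzero pixels
frame[3, 4] = 200
frame[4, 4] = 180

# Event-style encoding: keep only (row, col, value) for nonzero pixels
rows, cols = np.nonzero(frame)
events = [(int(r), int(c), int(frame[r, c])) for r, c in zip(rows, cols)]

print(events)       # [(3, 4, 200), (4, 4, 180)]
print(len(events))  # 2 events instead of 64 pixel values
```

Two events instead of 64 values; that's the kind of saving sparse input gives you, as I understand it.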
Then there's the 1000 fps video camera. Akida doing inference and learning at 1000 fps is awfully special, right there on the edge, where only the real events get captured and passed to the cloud on some timetable.
From sparse to fastest of all time!
Perhaps only the BrainChip SNN team really knows, with their secret JAST sauce. Perhaps it's low event bits, and after the first 4 events, Akida demotes the rest to zero.
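Purely to make my own guess concrete (this is speculation, not BrainChip's documented scheme): "low event bits" might mean clipping values into the 4-bit range 0–15, and "after the first 4 events, demote all to zero" might mean keeping only the strongest few events. A toy sketch of both ideas:

```python
import numpy as np

def quantize_4bit(x):
    """Clip values into the 4-bit range 0..15 (toy illustration)."""
    return np.clip(x, 0, 15).astype(np.uint8)

def keep_top4(values):
    """Keep only the 4 largest values; demote the rest to zero (toy illustration)."""
    out = np.zeros_like(values)
    top = np.argsort(values)[-4:]  # indices of the 4 largest values
    out[top] = values[top]
    return out

acts = np.array([0, 3, 9, 15, 40, 200])
print(quantize_4bit(acts))  # [ 0  3  9 15 15 15]

spikes = np.array([5, 1, 9, 0, 7, 3, 8, 2])
print(keep_top4(spikes))    # [5 0 9 0 7 0 8 0]
```

Again, just my reading of the "4 bits" idea; the actual hardware mechanism is theirs to know.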
i==BigfanHere!
Just trying to always learn more.