Could someone who knows stuff please have a look at this and confirm ours is better?
"Better" is open to interpretation. I'd say different and limited. It seems to be a chip that can be trained to do a specific task in a way that is generally acceptable to call AI. Not in my opinion, but seemingly accepted by many others. But then I could do the same in a sequential programming language too. No AI need be involved.
for each pixel
    if pixel changed from previous state then
        do something
    end if
    store current pixel state for next iteration
end for
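The pseudocode above can be made concrete in a few lines of plain sequential Python, no AI involved (a minimal sketch; the function name and the "do something" placeholder are mine, just for illustration):

```python
# Sequential pixel-change detection: compare each pixel with its
# previous state and react only to the ones that changed.
def changed_pixels(frame, prev_frame):
    """Return coordinates of pixels that differ from the previous frame."""
    changes = []
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            if pixel != prev_frame[y][x]:   # pixel changed since last frame
                changes.append((y, x))      # "do something" would go here
    return changes

prev = [[0, 0], [0, 0]]            # previous pixel state
cur  = [[0, 1], [0, 0]]            # current frame: one pixel changed
print(changed_pixels(cur, prev))   # -> [(0, 1)]
prev = [row[:] for row in cur]     # store current state for next iteration
```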
A couple of things that stood out for me were:
1) "GrAI VIP can handle MobileNetv1-SSD running at 30fps for 184 mW, around 20× the inferences per second per Watt compared to a comparable GPU"
Comparing it to a power-hungry GPU is a bit naughty. Everyone knows they are power hungry, and anyway, GPUs don't do inferences per se, just sledgehammer, power-hungry, high-level maths. Well, considering multiplication to be high-level, that is.
Akida has helped achieve 1000fps while using only µW of power.
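Taking their quoted figures at face value, the efficiency claim is easy to sanity-check (a quick back-of-envelope sketch; the 20× GPU comparison is their claim, not something measured here):

```python
# Sanity-check the quoted GrAI VIP figures: 30 inferences/s at 184 mW.
fps = 30          # quoted inference rate (frames per second)
power_w = 0.184   # quoted power draw, 184 mW expressed in watts

inf_per_s_per_w = fps / power_w              # inferences per second per watt
gpu_inf_per_s_per_w = inf_per_s_per_w / 20   # implied "comparable GPU" figure

print(round(inf_per_s_per_w))      # ~163 inferences/s/W for GrAI VIP
print(round(gpu_inf_per_s_per_w))  # ~8 inferences/s/W implied for the GPU
```

So the quote implies the comparable GPU delivers only single-digit inferences per second per watt, which is what makes the comparison look so flattering.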
2) it uses 16-bit floating point in calculations. That would be compute intensive.
3) the system can be trained, but I saw nothing about it learning.
and
4) it seems very specific to processing images only. They do also mention audio, but their example is only for video.
IMHO it seems like they are closer to a normal, single-tasked CNN and are using the word neuromorphic in a very loose manner, pretty much just as a buzzword, probably to get search engines to find the article. Sure, they call things neurons, but so do many other implementations that call memory cells neurons and call what they have neuromorphic.
As
@jtardif999 stated, they don't mention synapses, and I don't accept that if you have neurons, then synapses naturally follow. They should, in a true neuromorphic implementation, but so many are using that term for things that are only loosely modelled on part of the brain.
As an example I refer to ReRAM implementations of "neuromorphic" systems. They store both state and weight in memory cells, and use the resistive state of the cells to perform analogue addition and multiplication. But I think all such "neuromorphic" implementations suffer the same limitation of not being able to learn; they can only be trained. And once trained for a task, that is the only task they do until re-trained. And if that is all you want, then that is your definition of "better".
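The analogue multiply-and-add trick in those ReRAM crossbars is just Ohm's and Kirchhoff's laws: each cell's conductance is a weight, the input voltage is the activation, and the currents summing on a shared bitline give you a dot product for free. A toy software model (the function name and the example numbers are mine, purely illustrative):

```python
# Toy model of one ReRAM crossbar column: conductances store the weights,
# input voltages carry the activations, and the currents I = G * V summing
# on the shared bitline (Kirchhoff's current law) form an analogue dot product.
def crossbar_column_current(conductances, voltages):
    """Total bitline current = sum of per-cell currents G_i * V_i."""
    return sum(g * v for g, v in zip(conductances, voltages))

G = [0.5, 1.0, 0.25]   # hypothetical cell conductances (trained weights)
V = [1.0, 0.2, 0.8]    # hypothetical input voltages (activations)
print(crossbar_column_current(G, V))  # -> 0.9, the weighted sum
```

Note the weights are fixed resistive states, which is exactly the point above: the array computes what it was trained for, and nothing updates on its own.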
This raises a VERY relevant question: is Akida too good? The world has time and time again gone with simple-to-understand, simple-to-use solutions over complex, multi-faceted ones. The world especially likes mass-produced widgets that do a required task well enough. Some of these other "neuromorphic" solutions may prove to be just that. People seem happy to throw money multiple times at an inferior product rather than pay extra for the product they really need.
There's enough room in the TAM for multiple players. I'm happy for Akida to occupy the top spot, solving the more difficult problems, and leave the more mundane to others.