Yeah, I liked the line about a funeral for CNNs too. So he, or BrainChip, is pumped. What does that "funeral for CNN" mean now?
@Diogenese
He specifically mentions "CNN models".
CNN models are used in Software NNs and in competing NN accelerator hardware.
CNN models are usually associated with MAC (multiply-accumulate) processing in fully digital processors using 16-bit or 32-bit values, although the pressure to reduce energy and latency has seen 8-bit values adopted as standard recently. While multiplications by zero can be skipped to reduce the number of multiplication operations, essentially all the input values are processed.
MAC operations are carried out in a matrix array (rows and columns) where all the multiplication results are added column by column.
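To make that concrete, here's a minimal sketch (my own illustration, not BrainChip's implementation) of a dense MAC array: every input is multiplied by a weight and the products are accumulated column by column, whether or not each input carries useful information. The sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.integers(0, 256, size=8)           # 8-bit activations, one per row
weights = rng.integers(-128, 128, size=(8, 4))  # 8 rows of weights, 4 output columns

# One MAC per (input, weight) pair: 8 x 4 = 32 multiply-accumulates,
# with each column's products summed into a single accumulator.
outputs = np.zeros(4, dtype=np.int64)
for col in range(4):
    for row in range(8):
        outputs[col] += inputs[row] * weights[row, col]

# Column-by-column accumulation is just a matrix multiply.
assert np.array_equal(outputs, inputs @ weights)
print(f"{inputs.size * weights.shape[1]} MAC operations for {outputs.size} outputs")
```

The point is the fixed cost: the MAC count depends only on the array dimensions, not on how informative the inputs are.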
Akida, on the other hand, only processes the 1-, 2-, or 4-bit events (spikes) which change.
In addition, Akida uses N-of-M coding, processing only the largest N of the M input events, as these N events carry the most relevant information; the remaining M − N events are dropped from the calculation.
Furthermore, the data in Akida models is stored as 1-, 2-, or 4-bit values and can also use N-of-M coding, greatly reducing the size of each model, so, in fact, you get a double whammy of compression.
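Here's a rough sketch of the N-of-M idea as I understand it (illustrative only; the names, sizes, and selection rule are my assumptions, not Akida's actual circuit): of M candidate inputs, only the N largest events are kept, and the rest are dropped before any further work is done on them.

```python
import numpy as np

M, N = 16, 4
rng = np.random.default_rng(1)
events = rng.integers(0, 16, size=M)  # low-precision (4-bit) activations

# Keep only the indices of the N largest events; drop (zero out) the rest.
keep = np.argsort(events)[-N:]
coded = np.zeros_like(events)
coded[keep] = events[keep]

# At most N of the M events survive to be processed downstream.
print(f"events processed: {np.count_nonzero(coded)} of {M}")
```

So instead of doing work proportional to M, downstream processing only sees N events, which is where a big part of the power saving comes from.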
Thus Akida 1 is much more power-efficient, as it performs far fewer operations, in less time, to process the same input information as conventional CNNs.
Now TENNs brings additional efficiencies, which I haven't got to the bottom of yet (it took me a long time to get some sort of grasp of N-of-M coding). This makes TENNs more power-efficient and further reduces latency.
So I think Tony is saying that CNNs are yesterday's tech, so make way for tomorrow.