This is way above my pay grade - I've been flying by the seat of my pants before this, so I think this is in the Kristofor Karlson, Simon Thorpe, PvdM bailiwick.
However, if I had to guess, I'd say that our on-chip, one-shot learning would make this irrelevant for Akida.
Weight generation and storage generally happen off-chip, creating a power and latency bottleneck as the data moves to and from external memory. The Hiddenite architecture instead generates weights on-chip, re-creating them with a random number generator and effectively eliminating the need to access external memory. Beyond this, Hiddenite offers "on-chip supermask expansion," a feature that reduces the number of supermasks the accelerator needs to load.
Fabricated on TSMC's 40nm technology, the chip measures 3 mm x 3 mm and is capable of performing 4,096 multiply-and-accumulate operations simultaneously. The researchers further claim it achieves a maximum of 34.8 TOPS per watt, all while reducing the amount of model transfer to half that of binarized networks.
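To make the supermask idea concrete, here's a minimal Python sketch of the "hidden networks" approach Hiddenite builds on: the dense weights are never stored, only a PRNG seed and a learned binary mask. The function names and the Gaussian initializer are my own illustrative assumptions, not details from the paper.

```python
import numpy as np

def regenerate_weights(seed: int, shape: tuple) -> np.ndarray:
    """Re-create frozen random weights on demand from a seed, so the dense
    weight matrix never has to be fetched from external memory -- only the
    seed is stored with the model."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape).astype(np.float32)

def masked_layer(x: np.ndarray, seed: int, supermask: np.ndarray) -> np.ndarray:
    """One 'hidden network' layer: fixed random weights gated by a binary
    supermask. Training learns the mask; the weights are never updated."""
    w = regenerate_weights(seed, supermask.shape)
    return x @ (w * supermask)

# Usage: a 4-in / 3-out layer whose only stored state is a seed plus a bitmask.
mask = np.array([[1, 0, 1],
                 [0, 1, 1],
                 [1, 1, 0],
                 [0, 0, 1]], dtype=np.float32)
x = np.ones((1, 4), dtype=np.float32)
print(masked_layer(x, seed=42, supermask=mask))  # shape (1, 3)
```

The point of the sketch is the storage trade-off: what travels with the model is a seed and a 1-bit mask per weight, which is why the article compares the transfer cost against binarized networks.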
Akida
US2020143229A1 SPIKING NEURAL NETWORK
[0123]
In some embodiments, when a logical AND operation is performed on a spike bit in the spike packet that is '1' and a synaptic weight that is '0', the result is zero. This can be referred to as an 'unused spike'. When a logical AND operation is performed on a spike bit in the spike packet that is '0' and a synaptic weight that is '1', the result is zero. This can be referred to as an 'unused synaptic weight'. The learning circuit (e.g., weight swapper 113) can swap randomly selected unused synaptic weights to positions where unused spikes occur.
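Reading that as code helps. Below is a toy Python sketch of the swap rule as I understand paragraph [0123], assuming binary weights on a single neuron's synapses; the function name and the random pairing are my own illustration, and the actual weight swapper (113) will be more involved.

```python
import random

def swap_unused_weights(spikes: list[int], weights: list[int]) -> list[int]:
    """Toy version of the learning rule in [0123] for one neuron with
    binary synaptic weights. spike AND weight is zero in two distinct ways:
      - spike=1, weight=0 -> 'unused spike' (input arrived, no weight to use it)
      - spike=0, weight=1 -> 'unused synaptic weight' (weight sits idle)
    Randomly chosen unused weights are moved onto unused-spike positions."""
    unused_spikes  = [i for i, (s, w) in enumerate(zip(spikes, weights)) if s == 1 and w == 0]
    unused_weights = [i for i, (s, w) in enumerate(zip(spikes, weights)) if s == 0 and w == 1]
    random.shuffle(unused_spikes)
    random.shuffle(unused_weights)
    new_w = weights[:]
    for src, dst in zip(unused_weights, unused_spikes):
        new_w[src], new_w[dst] = 0, 1  # weight migrates to where spikes actually occur
    return new_w

# Usage: here every weight is idle and every spike unused, so all weights move
# to the synapses that actually fired.
print(swap_unused_weights(spikes=[1, 0, 1, 0], weights=[0, 1, 0, 1]))
# -> [1, 0, 1, 0]
```

The effect is that over repeated spike packets, the fixed budget of '1' weights drifts toward the inputs that actually fire, which is the on-chip learning behaviour the patent describes.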
By the way, I just revisited Kristofor Karlson's TinyML talk from March last year:
Watch It’s an SNN future: Are you ready for it? Converting CNN’s to SNN’s - Talk Video by Kristofor Karlson | ConferenceCast.tv
BrainChip provides a number of proprietary models:
[attached image: BrainChip proprietary models]