Aha! The penny's dropped. Remember when 8-bit weights were announced? The claims set out the specifics of the invention. In this case the invention is directed to machine learning, which it achieves by adding NPUs to the final layer of a NN, the layer where the learning takes place. The "secondary training data set" spikes could be the activation event spikes from the sensor (camera/microphone/...), the primary training data set having been provided by the model library data used in the initial configuration.
Supplementary spiking neurons (NPUs) are added to the final layer (Fig 10, 1002, 1004) where Akida does its learning, presumably to incorporate newly learned features. The ALUs of Figure 1 would be involved in the step of "performing a learning function ... by performing a synaptic weight value variation ...", bearing in mind that this is for multi-bit weights and activations.
View attachment 39178
This change may be to accommodate 8-bit weights/activations.
The ALUs may be more efficient at handling the multi-bit "spikes" than the original Akida configuration.
I found this Sanskrit engraving on Eric von Dunnycan's tomb:
https://doc.brainchipinc.com/_modules/akida_models/imagenet/model_mobilenet.html
...
weight_quantization (int, optional): sets all weights in the model to have a particular quantization bitwidth except for the weights in the first layer.
Defaults to 0.
* '0' implements floating point 32-bit weights.
* '2' through '8' implements n-bit weights where n is from 2-8 bits.
activ_quantization (int, optional): sets all activations in the model to have a particular activation quantization bitwidth.
Defaults to 0.
...
input_scaling (tuple, optional): sets the scale factor and offset to apply to inputs. Defaults to (128, -1). Note that following Akida convention, the scale factor is an integer used as a divider.
...
© Copyright 2022, BrainChip Holdings Ltd. All Rights Reserved.
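For what it's worth, here is one reading of that input_scaling convention as a quick sketch (my own illustration, not BrainChip code): the integer scale factor (128) is used as a divider and the offset (-1) is then added, which maps an 8-bit pixel into roughly [-1, 1):

```python
# Sketch only: my interpretation of the quoted input_scaling = (128, -1)
# convention, where the scale factor is an integer used as a divider.

def scale_input(pixel, scale=128, offset=-1):
    """Map an 8-bit pixel value (0..255) into the model's input range."""
    return pixel / scale + offset

lo = scale_input(0)    # -> -1.0
hi = scale_input(255)  # -> ~0.992, so the range is roughly [-1, 1)
```

If that reading is right, it is just the usual MobileNet-style normalisation of uint8 images expressed in integer-divider form.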
If I recall correctly, it is only the weights that are 8-bit, and only for the purpose of compatibility with 3rd party model libraries.
If there are 8-bit weights and 4-bit activations, an 8-bit * 4-bit multiplier would presumably be needed for each synaptic operation.
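To make that concrete, here is a toy sketch (illustrative only, nothing to do with Akida's actual hardware) of the arithmetic an 8-bit-weight by 4-bit-activation multiply-accumulate involves; each product needs 8 + 4 = 12 bits before accumulation:

```python
# Toy sketch of a mixed-precision MAC: signed 8-bit weights times
# unsigned 4-bit activations. Ranges and widths are illustrative only.

def mac(weights, activations, w_bits=8, a_bits=4):
    """Accumulate weight*activation products, checking declared bit-widths."""
    assert all(-(1 << (w_bits - 1)) <= w < (1 << (w_bits - 1)) for w in weights)
    assert all(0 <= a < (1 << a_bits) for a in activations)
    # Each product fits in w_bits + a_bits = 12 bits (signed);
    # the accumulator must be wider still to sum many of them.
    return sum(w * a for w, a in zip(weights, activations))

total = mac([127, -128, 5], [15, 15, 1])  # 127*15 - 128*15 + 5 = -10
```

The point being that multi-bit "spikes" turn the old 1-bit AND-and-count operation into genuine multiplications, which is where the ALUs would earn their keep.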