Hi Wags,
It is true that Akida 1 does not use MACs (multiply-accumulate units). However, TeNNs in Akida 2 do use MACs.
The article below, which explains the principles of neural networks (NNs), describes MACs as they are used in "conventional" NNs.
https://embeddedcomputing.com/techn...l-neuron-for-advanced-artificial-intelligence
MACs operate on binary numbers. For example, binary 0001 = decimal 1; the full 4-bit range runs as follows:
0000 = 0,
0001 = 1,
0010 = 2,
0011 = 3,
0100 = 4,
0101 = 5,
0110 = 6,
0111 = 7,
1000 = 8,
...
1111 = 15.
Note that even binary numbers end in 0, and odd binary numbers end in 1.
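Purely as an illustration (nothing Akida-specific), a few lines of Python reproduce that table and the even/odd rule:

```python
# Print the 4-bit binary table above, noting the even/odd rule:
# the last bit is 0 for even numbers and 1 for odd numbers.
for n in range(16):
    bits = format(n, "04b")                       # e.g. 5 -> "0101"
    parity = "even" if bits[-1] == "0" else "odd"
    print(f"{bits} = {n:2d} ({parity})")
```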
Originally, the 1-bit version of Akida used "spikes", or, to be more precise, it used a single binary bit (1) to represent a spike. The activation spikes (e.g., the outputs from a camera's pixels) were multiplied by single-bit weight values, so the result was a single bit: 1 only if both the activation and the weight were 1, otherwise 0.
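Here is a minimal sketch of that 1-bit case, with made-up spike and weight values (not Akida internals): the "multiply" collapses into a logical AND, and the "accumulate" into counting the surviving spikes.

```python
# 1-bit activations (spikes) and 1-bit weights: multiplication is AND.
activations = [1, 0, 1, 1, 0, 1]   # e.g. spikes from camera pixels (made up)
weights     = [1, 1, 0, 1, 0, 1]   # 1-bit weights (made up)

products = [a & w for a, w in zip(activations, weights)]  # 1 only if both are 1
total = sum(products)              # "accumulate" is just a count
print(products, total)             # [1, 0, 0, 1, 0, 1] 3
```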
Basically, a NN works by comparing an input image with a dataset (model) of images, where the pixels making up the images are represented by binary values (old school) or spikes (SNN). Convolution involves taking a block of input spikes and scanning it against the stored weights, which encode a dataset of potentially millions of images, to find the closest match. Using the old-school MAC process, this involves trillions of transistor switching operations, each one drawing current, which is why the old-school process burns so much power.
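To see where all that switching comes from, here is a toy version of the old-school multiply-accumulate loop at the heart of convolution; the image and kernel values are invented, and a real network runs the innermost MAC line billions of times.

```python
# Toy 2D convolution: slide a 2x2 kernel over a 4x4 image, and at each
# position multiply every weight by the input under it and sum (MAC).
image = [
    [0, 1, 2, 3],
    [4, 5, 6, 7],
    [8, 9, 10, 11],
    [12, 13, 14, 15],
]
kernel = [
    [1, 0],
    [0, 1],
]

def conv2d(img, ker):
    kh, kw = len(ker), len(ker[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += img[i + di][j + dj] * ker[di][dj]  # one MAC
            row.append(acc)
        out.append(row)
    return out

print(conv2d(image, kernel))  # [[5, 7, 9], [13, 15, 17], [21, 23, 25]]
```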
Now comes the tricky bit. Customer feedback demanded higher accuracy, so the production version switched to 4-bit weights and activations. This meant that the pixel output activations and the weights could each represent 16 values: zero plus 15 shades of grey.
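As a generic illustration of what 4-bit activations mean (a common quantization trick, not a claim about Akida's mechanism), an 8-bit greyscale pixel can be mapped onto those 16 levels by keeping only its top 4 bits:

```python
# Map an 8-bit greyscale value (0..255) onto 16 levels (0..15)
# by discarding the low 4 bits. Generic technique, not Akida-specific.
def quantize_4bit(pixel):
    return pixel >> 4              # 0..255 -> 0..15

for p in (0, 16, 128, 255):
    print(p, "->", quantize_4bit(p))   # 0->0, 16->1, 128->8, 255->15
```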
At this stage I asked PvdM if that meant that Akida would be using MACs to handle the 4-bit numbers, and his response was "No". Obviously, if he had wanted to, he could have been more forthcoming, so I understood that the actual mechanism was still in stealth mode.
So now my guess is that, for each spike in the sample, there are 4 parallel circuits, each performing the same function on one of the bit positions of the activation and weight; a sketch of that idea follows. At that stage, vanilla CNNs were using 16- or 32-bit numbers.
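Here is that guess sketched in Python: split the 4-bit activation into its four bit positions, run each bit plane through the 1-bit machinery, then shift each partial result back to its place value and sum. To be clear, this is my speculation only; BrainChip has not published the actual mechanism.

```python
# Speculative sketch: handle a 4-bit activation as 4 parallel 1-bit lanes.
def bit_planes(x):                 # x in 0..15
    return [(x >> b) & 1 for b in range(4)]    # LSB first, e.g. 11 -> [1,1,0,1]

def times_weight(activation, weight):
    # Each 1-bit plane gates the weight (an AND-like step), then the
    # partial result is shifted back to its place value (1, 2, 4, 8).
    return sum((plane * weight) << b
               for b, plane in enumerate(bit_planes(activation)))

print(times_weight(11, 5))         # 55, the same as 11 * 5
```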
Which brings us to TeNNs.
WO2023250093A1 (2022-06-22): "Method and System for Implementing Temporal Convolution in Spatiotemporal Neural Network". Inventors: Olivier Jean-Marie Dominique Coenen [US], Yan Ru Pei [US].
Yesterday's article compares TENN's use of MACs with its competitors':
Introducing TENN: Revolutionizing Computing with an Energy Efficient Transformer Replacement, by Dr. Tony Lewis (CTO), Olivier Coenen (Senior Research Scientist) and Yan Ru Pei (Research Scientist), BrainChip (brainchip.com)
On speech enhancement through denoising, TENN was compared to the state of the art (SoTA) networks. While providing almost the same performance with this particular implementation, the number of parameters and number of operations (MACs) with TENN are nearly 3 times and 12 times less.
So a reference to MACs is not an automatic disqualification of Akida 2.