Fact Finder
“I’m trying to get my head around the ‘Frontiers in Neuroscience’ article posted by someone last night - the updated article by Russian authors. My primitive read of the comparison of Akida with other neuromorphic chips was that we weren’t hugely different (like I said though, I don’t really understand the nitty gritty). Anyone else have any insights?”
This is what the Russian researchers say:
“3.8. Akida
Akida (Vanarse et al., 2019) is the first commercial neuromorphic processor, commercially available since August 2021. It has been developed by the Australian company BrainChip since 2013. Fifteen organizations, including NASA, joined the early access program. In addition to the Akida System on Chip (SoC), BrainChip also offers licensing of its technology, giving chip manufacturers a license to build custom solutions.
The chip is marketed as a power-efficient, event-based processor for edge computing that does not require an external CPU. Power consumption for various tasks may range from 100 μW to 300 mW. For example, Akida is capable of processing at 1,000 frames/Watt (compared with TrueNorth at 6,000 frames/Watt). The first-generation chip supports operations with convolutional and fully connected networks, with the prospect of adding support for LSTMs, transformers, capsule networks, and recurrent and cortical neural networks. An ANN can be transformed into an SNN and executed on the chip.
One Akida chip in a mesh network incorporates 80 Neural Processing Units, which enables the modeling of 1.2 million neurons and 10 billion synapses. The chip is fabricated on TSMC’s 28 nm process. In 2022, BrainChip announced a second-generation chip at 16 nm.
Akida’s ecosystem provides a free chip emulator, the TensorFlow-compatible framework MetaTF for transforming convolutional and fully connected neural networks into SNNs, and a set of pre-trained models. When designing a neural network architecture for execution on Akida, one should take into account a number of additional limitations concerning the layer parameters (e.g., the maximum convolution kernel size is 7, and stride 2 is supported only for 3×3 convolutions) and their ordering.
The major distinctive feature is that incremental, one-shot and continuous learning are supported directly on the chip. At the AI Hardware Summit 2021, BrainChip showed a solution capable of identifying a person in other contexts after having seen him or her only once. Another BrainChip product is a smart speaker that, on hearing a new voice, asks the speaker to identify themselves and thereafter addresses that person by name. These results are achieved with the help of a proprietary local training algorithm based on homeostatic STDP. Only the last fully connected layer supports synaptic plasticity and is involved in learning.
Another instructive case from the AI Hardware Summit 2021 was the classification of fast-moving objects (for example, a race car). Usually, such objects are off the frame center and significantly blurred, but they can be detected using an event-based approach”
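To make the MetaTF workflow they describe a bit more concrete, here is a minimal sketch (mine, not from the paper) of building a small Keras CNN that respects the quoted layer limits - kernel size no bigger than 7, stride 2 only on a 3×3 convolution - and then converting it into an SNN. The cnn2snn package and its quantize/convert calls are my assumption of how BrainChip’s public MetaTF tooling is laid out, so check the current docs for exact names and signatures.

```python
# Minimal sketch only - the cnn2snn package and the quantize/convert calls are
# assumptions based on BrainChip's public MetaTF tooling; verify against the
# version you install.
import tensorflow as tf
from tensorflow.keras import layers
from cnn2snn import quantize, convert  # MetaTF's Keras-to-Akida conversion helpers

# A small CNN that respects the quoted constraints:
# stride 2 only on a 3x3 convolution, and no kernel larger than 7.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(16, kernel_size=3, strides=2, padding="same"),
    layers.ReLU(),
    layers.Conv2D(32, kernel_size=7, strides=1, padding="same"),
    layers.ReLU(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10),
])

# Quantize the floating-point Keras model to low-bit weights/activations,
# then convert it into a spiking model that runs on the free software
# emulator (or on Akida hardware if present).
quantized_model = quantize(model, weight_quantization=4, activ_quantization=4)
akida_model = convert(quantized_model)
akida_model.summary()
```

The point is simply that you build and train in ordinary TensorFlow/Keras and only convert at the end, which is why the authors call MetaTF “TensorFlow compatible”.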
So, just straight off the top, the distinctive features:
1. It is a commercial, in-market chip - the first of its kind - and it has commercial partners.
2. One-shot learning on chip (see the sketch after this list).
3. Incremental and continuous learning on chip.
4. 1.2 million neurons and 10 billion synapses.
5. Compatible with TensorFlow via the MetaTF framework.
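For point 2, here is a toy sketch of what “one-shot learning confined to the last fully connected layer” can look like in principle. To be clear, this is not BrainChip’s proprietary homeostatic-STDP algorithm (that is not published) - it is just a plain NumPy illustration of the idea that a frozen feature extractor plus a plastic final layer lets you bind a new class from a single example, the way the face and voice demos in the quote do.

```python
# Toy illustration only - NOT BrainChip's proprietary algorithm. A frozen
# feature extractor produces an embedding; "learning" a new class is simply
# storing one embedding as the weight row of a new output neuron.
import numpy as np

class LastLayerOneShot:
    """Frozen backbone + plastic final fully connected layer."""

    def __init__(self, embedding_dim):
        self.weights = np.empty((0, embedding_dim))  # one row per learned class
        self.labels = []

    def learn(self, embedding, label):
        # One-shot: bind a single normalised embedding as a new class prototype.
        w = embedding / (np.linalg.norm(embedding) + 1e-9)
        self.weights = np.vstack([self.weights, w])
        self.labels.append(label)

    def predict(self, embedding):
        # Classify by the most strongly activated output neuron.
        x = embedding / (np.linalg.norm(embedding) + 1e-9)
        return self.labels[int(np.argmax(self.weights @ x))]

# Placeholder embeddings standing in for the frozen network's output;
# a real system would use the features produced by the SNN backbone.
rng = np.random.default_rng(0)
alice, bob = rng.standard_normal(64), rng.standard_normal(64)

clf = LastLayerOneShot(embedding_dim=64)
clf.learn(alice, "Alice")  # seen once
clf.learn(bob, "Bob")      # seen once
print(clf.predict(alice + 0.1 * rng.standard_normal(64)))  # prints "Alice"
```

With the backbone frozen, adding a new person is just writing one new row of weights, which is why it can happen instantly on the device without retraining the whole network.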
There are other things not disclosed in this paper, such as scalability, much higher fps than the article refers to, and the ability to process all five senses as well as radar, LiDAR and ultrasonics on the one chip. @Diogenese can add the other things I have forgotten at the moment. Yes, it is also sensor and processor agnostic. That will have to do for now.
The article also does not mention that Loihi is not a single chip but a board combining multiple chips, and that Loihi could only process two senses as of the last report a month or so ago.
My opinion only DYOR
FF
AKIDA BALLISTA