Hi Sera,
I flagged GrAI Matter a couple of years ago as one to watch.
This article discusses the different tech approaches of GrAI Matter and Akida:
Spiking Neural Networks: Research Projects Or Commercial Products?
Opinions differ widely, but in this space that isn’t unusual.
MAY 18TH, 2020 - BY: BRYON MOYER
https://semiengineering.com/spiking-neural-networks-research-projects-or-commercial-products/
Temporal coding is said by some to be closer to what happens in the brain, although there are differing opinions on that, with some saying that that’s the case only for a small set of examples: “It’s actually not that common in the brain,” said Jonathan Tapson, GrAI Matter’s chief scientific officer. An example where it is used is in owls’ ears. “They use their hearing to hunt at night, so their directional sensitivity has to be very high.” Instead of representing a value by a frequency of spikes [# rate coding #], the value is encoded as the delay between spikes [# temporal coding #]. Spikes then represent events, and the goal is to identify meaningful patterns in a stream of spikes.
…
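To make the distinction in that excerpt concrete, here is a rough Python sketch of the two encodings. All function names and parameters here are my own, purely for illustration; neither scheme is tied to any vendor's implementation:

```python
# Illustrative sketch (mine, not from the article): encoding a scalar
# value v in [0, 1] under the two schemes the article contrasts.

def rate_code(v, window_ms=100.0, max_rate_hz=100.0):
    """Rate coding: the value sets the spike *frequency* within a window.
    Returns evenly spaced spike times in milliseconds."""
    n_spikes = round(v * max_rate_hz * window_ms / 1000.0)
    if n_spikes == 0:
        return []
    step = window_ms / n_spikes
    return [i * step for i in range(n_spikes)]

def temporal_code(v, max_delay_ms=100.0):
    """Temporal coding: the value is the *delay* between two spikes.
    Larger values -> shorter delay (a stronger stimulus fires sooner)."""
    delay = (1.0 - v) * max_delay_ms
    return [0.0, delay]

print(len(rate_code(0.5)))   # 5 spikes in the 100 ms window
print(temporal_code(0.5))    # [0.0, 50.0] -- value carried by the gap
```

Note how rate coding needs many spikes per value while temporal coding needs only two, which is part of the efficiency argument for temporal schemes.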
Temporally coded SNNs can be most effective when driven by sensors that generate temporal-coded data – that is, event-based sensors. Dynamic vision sensors (DVS) are examples. They don’t generate full frames of data on a frames-per-second basis. Instead, each pixel reports when its illumination changes by more than some threshold amount. This generates a “change” event, which then propagates through the network. Valentian said these also can be particularly useful in AR/VR applications for “visual odometry,” where inertial measurement units are too slow.
…
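For anyone unfamiliar with event cameras, here is a toy Python model of what a DVS pixel does, per the excerpt above. The function name and threshold value are hypothetical, just to show the per-pixel change-event idea:

```python
# Hypothetical DVS-style sketch (names are mine, not a real camera API):
# each pixel emits an event only when its log-intensity changes by more
# than a threshold, rather than reporting full frames at a fixed rate.
import math

def dvs_events(intensity_frames, threshold=0.15):
    """Compare each frame to a per-pixel reference and emit
    (t, x, y, polarity) events where log-intensity changed enough."""
    events = []
    reference = [[math.log(p) for p in row] for row in intensity_frames[0]]
    for t, frame in enumerate(intensity_frames[1:], start=1):
        for y, row in enumerate(frame):
            for x, p in enumerate(row):
                delta = math.log(p) - reference[y][x]
                if abs(delta) > threshold:
                    events.append((t, x, y, +1 if delta > 0 else -1))
                    reference[y][x] = math.log(p)  # reset reference at event
    return events

# A static pixel produces no events; only the changed pixel reports.
frames = [[[1.0, 1.0]], [[1.0, 2.0]]]   # pixel (1, 0) doubles in brightness
print(dvs_events(frames))               # [(1, 1, 0, 1)]
```

The sparsity is the point: an unchanging scene generates no data at all, which is why such sensors pair naturally with event-driven networks.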
Meanwhile, BrainChip started with rate coding, but decided that wasn’t commercially viable. Instead, it uses rank coding (or rank-order coding), which uses the order of arrival of spikes (as opposed to literal timing) to a neuron as a code. This is a pattern-oriented approach, with arrivals in the prescribed order (along with synaptic weighting) stimulating the greatest response and arrivals in other orders providing less stimulation.
…
All of these coding approaches aside, GrAI Matter uses a more direct approach. “We encode values directly as numbers – 8- or 16-bit integers in GrAI One or Bfloat16 in our upcoming chip. This is a key departure from other neuromorphic architectures, which have to use rate or population or time or ensemble codes. We can use those, too, but they are not efficient,” said Tapson.
…
The [# BrainChip #] neural fabric is fully configurable for different applications. Each node in the array contains four neural processing units (NPUs), and each NPU can be configured for event-based convolution (supporting standard or depthwise convolution) or for other configurations, including fully connected. Events are carried as packets on the network.
While NPU details or images are not available [# WO2020092691, published 20202507 #], BrainChip did further explain that each NPU has digital logic and SRAM, providing something of a processing-in-memory capability, but not using an analog-memory approach. An NPU contains eight neural processing engines that implement the neurons and synapses. Each event is multiplied by a synaptic weight upon entering a neuron.
According to this article, GrAI Matter is not using SNNs. From their choice of 8- or 16-bit integers/FP, I assume they need a MAC matrix circuit to process weights and activations, as in a CNN. This is not a sparse process, since every bit must be processed. Hence GrAI Matter would use more power and would be slower than Akida.
GrAI Matter's assertion that "other neuromorphic architectures … have to use rate or population or time or ensemble codes" does not apply to Akida, which uses rank coding, from which Simon Thorpe's N-of-M code is derived. This is based on the discovery that the strongest signals trigger retinal receptors and pixels earlier than weaker signals. Most of the information is carried in the earliest-arriving spikes, and the later-arriving spikes can be discarded. When you think about it, this is quite like how the DVS/event camera works. N-of-M coding uses the order of arrival and does not need to track the time of arrival; it just counts the first N spikes and closes the gate.
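A quick Python sketch of how I understand N-of-M gating and rank-order scoring, based on the description above (my own toy code, not BrainChip's):

```python
# Illustrative only: keep the first N arrivals out of M inputs, discard
# later spikes, and score a pattern by arrival order.

def n_of_m(spike_times, n):
    """spike_times: {input_id: arrival_time}. Return the first n input ids
    in order of arrival; later spikes are discarded (the 'gate closes')."""
    ordered = sorted(spike_times, key=spike_times.get)
    return ordered[:n]

def rank_order_score(arrivals, preferred_order, decay=0.5):
    """Earlier ranks carry exponentially more weight, so the strongest
    (earliest) spikes dominate the response, as in Thorpe-style coding.
    Only inputs arriving at their preferred rank contribute here."""
    score = 0.0
    for rank, inp in enumerate(arrivals):
        if inp in preferred_order and preferred_order.index(inp) == rank:
            score += decay ** rank
    return score

arrivals = n_of_m({"a": 3.2, "b": 1.1, "c": 9.0, "d": 2.5}, n=3)
print(arrivals)                                     # ['b', 'd', 'a']
print(rank_order_score(arrivals, ["b", "d", "a"]))  # 1.75 (perfect order)
```

Note that only the *order* of arrival matters; the actual timestamps are thrown away as soon as the sort is done, and the slowest input ("c") never gets through the gate at all.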
GrAI Matter uses 8- or 16-bit precision mathematics, whereas Akida uses inference based on probability. You may recall that some demonstrations of Akida show a bar chart with the probabilities of the subject item being one of a number of different articles, e.g., dog, cat, parrot, elephant. Akida does the comparison and selects the one which is the best fit. Of course, the model libraries are much larger than that.
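And a toy Python version of that bar-chart comparison, assuming a simple softmax over per-class scores (my illustration only, not Akida's actual mechanism):

```python
# Toy classifier readout: normalize raw class scores to probabilities
# and pick the best-fitting label, as in the demo bar charts.
import math

def classify(scores):
    """scores: {label: raw_score}. Returns (best_label, probabilities)."""
    m = max(scores.values())
    exps = {k: math.exp(v - m) for k, v in scores.items()}  # stable softmax
    total = sum(exps.values())
    probs = {k: e / total for k, e in exps.items()}
    best = max(probs, key=probs.get)
    return best, probs

best, probs = classify({"dog": 4.1, "cat": 2.0, "parrot": 0.5, "elephant": 1.2})
print(best)   # 'dog' -- the best fit among the candidate labels
```

The bars in the demo are just these probabilities; the "decision" is nothing more than picking the tallest bar.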
I find this an amazing leap of imagination, to conceive that such a process could be implemented in silicon, and N-of-M is pretty clever too.