Hi @jtardif999,
that is simply not true.
On a side note: Wouldn’t you expect neuromorphic researchers outside of Intel Labs to have called out Mike Davies and his team over the past 6+ years if your claim were accurate?
Also, has any of our BrainChip staff ever contested the claim that Loihi has on-chip learning capabilities? (https://redwood.berkeley.edu/wp-content/uploads/2021/08/Davies2018.pdf)
No? I wonder why.
But let the facts speak for themselves…
Here is a video by fortiss, the Research Institute of the Free State of Bavaria for Software-Intensive Systems, whose researchers have been working with Loihi and SpiNNaker for years and recently also partnered with both BrainChip and Innatera (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-454015).
The gesture recognition demo shown is run on Loihi 1.
“What we do here is on-the-fly learning from few examples - so, in the gesture case just a few seconds of presenting time - and as we want to implement this on neuromorphic hardware, we make use of the neuromorphic hardware’s plasticity processor, which enables learning with local signals, so implementation of local learning rules. In order to prevent catastrophic forgetting in the network, we only update the weights of the last layer of the network.
(…) So using this learning rule, we can implement on-chip adaptation to new classes. And as this happens directly on the neuromorphic processor, it can be implemented with only a very low latency and energy overhead. And this means that we can learn new classes in real-time during inference without interrupting the inference for specific learning procedure [sic].
(…) The third stage is the rapid on-chip learning layer, which is neuromorphic processor’s plasticity layer, and it enables learning only from short demonstrations, which can enhance accuracy and also enable personalisation of gestures.”
[Attachment 85137]
This claim of yours is equally false.
While on-chip learning capabilities differ from chip to chip (some appear to be very basic, though the details are far too technical for me to judge), and none of them work exactly like Akida’s (which certainly makes Akida unique in that sense), there are plenty of neuromorphic chips around that are claimed to learn on the fly during inference, without having to be retrained in the cloud.
A Google search will bring up numerous examples, such as…
[Attachments 85141, 85142, 85143, 85151, 85145, 85146, 85149, 85147, 85150]
Most of the examples here are old and research-based. Research may well vindicate the capability of on-chip learning, but it still has to be commercialised. Why hasn’t Intel commercialised Loihi, having had it available for longer than Akida? SpiNNaker, AFAIK, is not competition for Akida at the edge and still hasn’t fully commercialised an offering. The Darwin chip is Chinese and therefore not a direct competitor either, and again not commercial yet. Getting back to the ChatGPT list, none of the commercial offerings in that list have on-chip learning. BrainChip have always touted this commercial capability as unique to them — why would they state that if it were a false statement?