Some will recall an interview with Rob Telson in which he was asked whether competition from Nvidia was a problem. He answered with words to the effect of "We see Nvidia more as a partner in the future." He said this with great confidence, and some may have thought it was born of conceit or hubris, given that Nvidia has been, and still is, regarded as the leader in the Artificial Intelligence space.

In the past, when asked about competition, Brainchip representatives would talk about the technical advantages of SNNs and the Von Neumann bottleneck, but on this occasion, when confronted with Nvidia, Rob Telson went straight to "they will be a partner."

While many of us, myself included, struggle with the science and engineering, and thank our lucky stars that @Diogenese graces this forum and is prepared to share his engineering knowledge, I found the following explanation of why Nvidia needs SNN very accessible and worth sharing here.

The quote is from page 83 of a very long thesis, and you are welcome to read the whole paper. There is no mention of Brainchip or AKIDA that I saw, and having read other papers by Gregor Lenz I know him to be a fan of Intel and Loihi, though AKIDA 2nd Generation may well change his entire perspective:
Neuromorphic algorithms and hardware for event-based processing, by Gregor Lenz
"On our prototype device, we also made use of the Tensorflow Lite backend, which in turn uses neural network accelerator hardware for efficient inference. The issue is, however, that when we convert events into frame representations, we lose a lot of the advantages of event cameras.
As mentioned in the beginning of this conclusion, the hardware lottery gave us GPUs to work with, but they are designed for a different kind of data. GPUs are a great workhorse when it comes to parallelising compute-intense tasks, but they fail to exploit high sparsity in signals.
That is why sparse computation on GPUs is something that the research community is actively looking into at the moment [286, 287].
NVIDIA announced in 2020 that their latest generation of tensor cores would be able to transform dense matrices into sparse matrices using a transformation called 4:2 sparsity, where the cost of computation is reduced by half. This accelerates inference up to a factor of 2 for a minor hit of accuracy [288], but such a feature requires supplementary hardware to check for zeros in the data.
On neuromorphic hardware, the sparse input directly drives asynchronous transistor switching activity, without the need for additional checks. The difference is essentially the lack of input in comparison to lots of zeros of input in the case of GPUs.
Even though GPUs and TensorFlow Lite are making amends to reduce the need to process unnecessary zeros, neuromorphic computing tackles different application scenarios.
Sparsity in signals from an event-based sensor reaches levels of 99% depending on scene activity and is therefore much higher than what a 4:2 sparsity could achieve to shrink.
In the end neuromorphic computing that can exploit the absence of new input information will have a head start in certain applications for power-critical systems that track spurious events at high speeds.
This is where GPUs that apply sparsity checks to avoid computation in a later step will not be able to compete."
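To make the arithmetic in that quote concrete, here is a back-of-envelope sketch of my own (not from the thesis) comparing the multiply-accumulate work for a single fully connected layer under three regimes: dense GPU inference, NVIDIA's structured weight sparsity (two non-zero values kept in each block of four, the scheme the thesis calls 4:2 sparsity), and event-driven processing at the 99% input sparsity the quote mentions. The layer sizes are made-up numbers purely for illustration.

```python
# Back-of-envelope comparison of multiply-accumulate (MAC) counts for one
# fully connected layer. Layer sizes are illustrative assumptions; the 99%
# input sparsity figure comes from the quoted thesis passage.

INPUTS = 4096          # neurons in the input layer (assumed)
OUTPUTS = 1024         # neurons in the output layer (assumed)
INPUT_SPARSITY = 0.99  # fraction of inputs that are silent (from the quote)

# Dense GPU inference: every input contributes a MAC to every output.
dense_macs = INPUTS * OUTPUTS

# Structured weight sparsity (2 non-zeros kept per block of 4 weights)
# halves the arithmetic, regardless of how quiet the input actually is.
structured_sparse_macs = dense_macs // 2

# Event-driven processing: only active inputs trigger any work at all,
# so the cost scales with the number of events, not the layer size.
active_inputs = int(INPUTS * (1.0 - INPUT_SPARSITY))
event_driven_macs = active_inputs * OUTPUTS

print(f"dense:             {dense_macs:>10,} MACs")
print(f"2:4 weight sparse: {structured_sparse_macs:>10,} MACs")
print(f"event-driven:      {event_driven_macs:>10,} MACs "
      f"({100 * event_driven_macs / dense_macs:.0f}% of dense)")
```

At 99% input sparsity the event-driven path does roughly 1% of the dense work, which is why a fixed factor-of-two saving from structured sparsity cannot close the gap the thesis describes.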
My opinion only DYOR
FF
AKIDA BALLISTA