Whilst the April 2022 independent testing article in PLOS ONE below uses Loihi, I'd like to believe we all have an understanding of how we stack up against Loihi.
The main takeaways for me, and the supporting results, only add weight to the potential and capabilities of Akida vs CPU & GPU, and to how it provides an exceptional complementary accelerator / processor imo.
Full article here for those more tech savvy:
journals.plos.org
Abstract
Neuromorphic computing mimics the neural activity of the brain through emulating spiking neural networks. In numerous machine learning tasks, neuromorphic chips are expected to provide superior solutions in terms of cost and power efficiency. Here, we explore the application of Loihi, a neuromorphic computing chip developed by Intel, for the computer vision task of image retrieval. We evaluated the functionalities and the performance metrics that are critical in content-based visual search and recommender systems using deep-learning embeddings.
Our results show that the neuromorphic solution is about 2.5 times more energy-efficient compared with an ARM Cortex-A72 CPU and 12.5 times more energy-efficient compared with an NVIDIA T4 GPU for inference by a lightweight convolutional neural network when batch size is 1, while maintaining the same level of matching accuracy. The study validates the potential of neuromorphic computing in low-power image retrieval, as a complementary paradigm to the existing von Neumann architectures.
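For anyone who wants a concrete picture of the task itself: content-based image retrieval with deep-learning embeddings is essentially nearest-neighbour search over CNN feature vectors. A minimal NumPy sketch of that idea (hypothetical function names, not the authors' code; the lightweight CNN that produces the embeddings is assumed to exist elsewhere):

```python
import numpy as np

def normalise(emb):
    """L2-normalise embeddings so a plain dot product equals cosine similarity."""
    return emb / np.linalg.norm(emb, axis=-1, keepdims=True)

def retrieve(query_emb, gallery_emb, top_k=5):
    """Return indices of the top_k gallery images closest to the query.

    query_emb:   (dim,) embedding of the query image from a lightweight CNN
    gallery_emb: (n, dim) embeddings of the indexed images
    """
    scores = normalise(gallery_emb) @ normalise(query_emb)   # cosine similarities
    return np.argsort(scores)[::-1][:top_k]                  # highest scores first
```

The paper's question is then simply where that embedding forward pass runs most efficiently: CPU, GPU, or a spiking network on Loihi.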
The comparison of the average power consumption between the SNNs and the ANNs is shown in Table 4. With the batch size set to one, the SNN with 16 time steps uses 217.0x/24.0x less power than the ANNs on Xeon/i7 CPUs, 9.3x less than the ANN on the ARM CPU, and 40.8x/31.3x less than the ANNs on the V100/T4 GPUs. This is where neuromorphic hardware starts to shine, as it consumes far less power than conventional hardware. Utilizing the temporal sparsity of SNNs appropriately, we believe the neuromorphic hardware can further reduce its power consumption. Another observation from Table 4 is that the static (idle) power dominates the power consumption of the Loihi chip.
[Attachment 22541: Table 4, average power consumption comparison]
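On the static-power point: a chip's total draw can be thought of as an idle (static) component plus an activity-dependent (dynamic) component, and the paper's observation is that the idle part dominates on Loihi. A toy illustration with made-up placeholder numbers, not values from Table 4:

```python
# Toy decomposition of chip power; the numbers are hypothetical placeholders,
# not measurements from the paper.
static_w  = 1.00   # idle power drawn even with no spikes in flight (W)
dynamic_w = 0.10   # extra power from spiking activity during inference (W)

total_w = static_w + dynamic_w
print(f"static share of total power: {static_w / total_w:.0%}")  # ~91% here

# When the static share dominates like this, exploiting temporal sparsity
# (fewer spikes -> lower dynamic_w) only trims the smaller component, so
# reducing idle power is where the next big gains would come from.
```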
We measured the total energy used per inference (forward pass), reported in Table 5. These results can also be estimated by combining the results of Tables 3 and 4. As summarized in Table 5, with the batch size set to one, the energy consumption of the SNN with 16 time steps is 15.6x/3.2x less than the ANNs on Xeon/i7 CPUs, 2.5x less than the ANN on the ARM CPU, and 17.5x/12.5x less than the ANNs on the V100/T4 GPUs per inference. This demonstrates the benefits of neuromorphic hardware in low energy-budget machine learning applications, particularly lightweight image search engines and visual recommender systems. It is apparent that when large batch sizes are used, CPUs and GPUs consume less energy per example. However, there are many use cases where inference is executed in small batches, and these are the targets for neuromorphic hardware at the current stage.
[Attachment 22542: Table 5, energy per inference comparison]
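A quick way to read Tables 3-5 together: energy per inference is just average power multiplied by inference latency, which is why the power ratios in Table 4 shrink to the smaller energy ratios in Table 5 - the SNN on Loihi draws far less power but takes longer per forward pass. A rough back-of-the-envelope check using only the ratios quoted in the text (not the absolute table values, which aren't reproduced here):

```python
# E = P_avg * t_inference, so energy_ratio = power_ratio / latency_ratio.
# Using the Xeon comparison at batch size 1 quoted above:
power_ratio  = 217.0   # ANN-on-Xeon power  / SNN-on-Loihi power
energy_ratio = 15.6    # ANN-on-Xeon energy / SNN-on-Loihi energy per inference

# Rearranging gives the latency penalty implied by these two ratios alone:
latency_ratio = power_ratio / energy_ratio
print(f"implied SNN latency ~{latency_ratio:.1f}x the ANN's per inference")  # ~13.9x
```

The same arithmetic explains the batch-size caveat: large batches amortise CPU/GPU power over many examples, while small-batch, latency-tolerant workloads are where Loihi's low draw wins on energy.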
Conclusion
We studied the application of the Loihi chip, neuromorphic computing hardware developed by Intel, to image retrieval. Our results show that generating deep-learning embeddings with spiking neural networks based on lightweight convolutional neural networks is about 2.5 times more energy-efficient than a CPU and 12.5 times more energy-efficient than a GPU. We confirm the long-term potential of neuromorphic computing in machine learning, not as a replacement for the predominant von Neumann architecture, but as an accelerated coprocessor.