Fact Finder
Hi JB

https://arxiv.org/html/2401.06911v1
"Furthermore, due to the runtime limitation of 20 minutes for jobs on Intel’s cloud, on-chip training was not possible, restricting the exploration of this important capability of Loihi".
Akida can solve that.
This is an important find. I have taken the following extract to cover the facts you have revealed:
- Superiority of SNN and Loihi 2: The results highlight that, in all cases, Intel’s Loihi 2 performs better than the CNN implemented in Xilinx’s VCK5000. However, it is worth noting that, as the batch size increases, the advantage of Loihi becomes less pronounced as the performance points move closer to the EDP line, although it still outperforms the Xilinx VCK5000 implementation.
- Interference detection as a promising use case: Among the considered scenarios, interference detection and classification benefited the most from implementation on Intel’s Loihi chipset. Even though the time ratio remained generally higher than one, the energy savings achieved with Loihi were significant. In some cases, the energy ratio reached values as high as 10⁵.
- SNN encoding: Fig. 3 compares the impact of FFT Rate and FFT TEM encoding for the ID scenario. Interestingly, the type of coding used did not have a significant impact on this comparison.
- RRM’s performance: While RRM did not achieve energy savings as pronounced as in the ID scenario, it consistently achieved an energy ratio exceeding 10².
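To make the metrics in the extract above concrete, the paper compares the two chips using an energy ratio, a time ratio, and the energy-delay product (EDP, energy × latency). A minimal sketch of how those figures relate, using made-up placeholder numbers (these are NOT the paper's measurements):

```python
# Sketch of the comparison metrics quoted in the paper.
# All energy/latency figures are illustrative placeholders,
# not measurements from the paper.

def edp(energy_j: float, delay_s: float) -> float:
    """Energy-delay product: lower is better."""
    return energy_j * delay_s

# Hypothetical per-inference figures for the two platforms.
vck5000 = {"energy_j": 1.0, "delay_s": 0.005}   # CNN on Xilinx VCK5000
loihi2 = {"energy_j": 1e-5, "delay_s": 0.050}   # SNN on Intel Loihi 2

# Ratios as used in the paper: reference platform relative to Loihi 2.
energy_ratio = vck5000["energy_j"] / loihi2["energy_j"]   # ~1e5, i.e. "energy ratio of 10^5"
time_ratio = loihi2["delay_s"] / vck5000["delay_s"]       # >1: Loihi is slower per sample

edp_ratio = (edp(vck5000["energy_j"], vck5000["delay_s"])
             / edp(loihi2["energy_j"], loihi2["delay_s"]))

print(f"energy ratio: {energy_ratio:.0e}")
print(f"time ratio:   {time_ratio:.1f}")
print(f"EDP ratio:    {edp_ratio:.0e}")
```

The point of the EDP is that it rewards Loihi's huge energy savings even when its per-sample latency (time ratio above one) is worse.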
V-A Remarks about the results obtained with Intel’s Loihi 2
The results presented in this article were obtained with Loihi 2 via remote access to Intel’s research cloud. Although Intel offers the possibility of shipping physical chips to INRC partners’ premises, at the time these results were developed the primary access to Loihi 2 was through Intel’s neuromorphic research cloud. This remote access introduced additional limitations, as it was not possible to control for concurrent usage by other users, which could lead to delays and increased power consumption. Additionally, the specific interface used by Intel on their cloud was not disclosed, potentially resulting in differences when conducting measurements with future releases of Loihi 2. Furthermore, due to the runtime limitation of 20 minutes for jobs on Intel’s cloud, on-chip training was not possible, restricting the exploration of this important capability of Loihi. The cloud interface plays a key role, as it impacts the transfer of spiking signals to the chipset, and a large input size can lead to long per-step processing times. For example, in the flexible payload use case, the execution time per example increased from approximately 5 ms to 100 ms when the input size grew from 252 neurons to 299 neurons.
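The scale of that I/O bottleneck is worth spelling out. A quick back-of-the-envelope calculation using only the two data points the paper quotes (the flexible payload use case); relating them this directly is my own reading, not the authors':

```python
# Back-of-the-envelope check of the cloud I/O bottleneck described above,
# using the two data points quoted from the paper.

small = {"neurons": 252, "latency_s": 0.005}   # ~5 ms per example
large = {"neurons": 299, "latency_s": 0.100}   # ~100 ms per example

input_growth = large["neurons"] / small["neurons"] - 1     # ~18.7% more input neurons
slowdown = large["latency_s"] / small["latency_s"]         # ~20x longer per example

throughput_small = 1 / small["latency_s"]   # ~200 examples/s
throughput_large = 1 / large["latency_s"]   # ~10 examples/s

print(f"input grew by {input_growth:.1%}, per-example latency grew {slowdown:.0f}x")
print(f"throughput dropped from {throughput_small:.0f} to {throughput_large:.0f} examples/s")
```

In other words, adding fewer than a fifth more input neurons cost a twentyfold slowdown per example, which supports the paper's point that the cloud interface, not the chip itself, dominates the timing.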
VI Conclusion
While we enter the era of AI, it becomes evident that energy consumption is a limiting factor when training and implementing neural networks with a significant number of neurons. This issue is particularly relevant for non-terrestrial communication devices, such as satellite payloads, which need more efficient hardware components in order to benefit from the potential of AI techniques. Neuromorphic processors, such as Intel’s Loihi 2, have shown themselves to be more efficient when processing individual data samples and are, therefore, a better fit for use cases where real-world data arrives at the chip and needs to be processed right away. In this article, we verified this hypothesis using real processor solutions, namely Xilinx’s VCK5000 and Intel’s Loihi 2 chipset.
Acknowledgments
This work has been supported by the European Space Agency (ESA) funded under Contract No. 4000137378/22/UK/ND - The Application of Neuromorphic Processors to Satcom Applications. Please note that the views of the authors of this paper do not necessarily reflect the views of ESA. Furthermore, this work was partially supported by the Luxembourg National Research Fund (FNR) under the project SmartSpace (C21/IS/16193290).

The first Fact is that it confirms that Loihi 1 & 2 are most definitely still only research platforms.
The second Fact is that Intel’s unwillingness to disclose the cloud interface it uses would, as the paper suggests, be a significant limitation for commercial adoption.
The third Fact disclosed is that inference time on Loihi 2 increased as the number of neurons in use grew. This is a huge issue for Intel, as from everything I have read about AKIDA, the more nodes, hence neurons, in play, the faster the inference times. (Perhaps I miss the point here, so this needs a Diogenese ruler run over my conclusion.)
Fact four: even Loihi 2, despite its limitations, is far more efficient than Xilinx’s VCK5000 running CNNs.
Finally, Fact five: using neuromorphic computing for cognitive communication is clearly the way forward.
My opinion only DYOR
Fact Finder