Fact Finder
The following research article is quite interesting because its conclusion points out why the type of LIF neurons forming the basis of Intel's (and others') approach to SNNs is fraught with complexity and high error rates. Noting that Intel and DARPA are listed among the sponsors lends this opinion some additional credence:
“5 Conclusion
SNNs have been considered as a potential solution for the low-power machine intelligence due to their event-driven nature of computation and the inherent recurrence that helps retain information over time. However, practical applications of SNNs have not been well demonstrated due to an improper task selection and the vanishing gradient problem. In this work, we proposed SNNs with improved inherent recurrence dynamics that are able to effectively learn long sequences. The benefit of the proposed architectures is 2× reduction in number of the trainable parameters compared to the LSTMs. Our training scheme to train the proposed architectures allows SNNs to produce multiple-bit outputs (as opposed to simple binary spikes) and help with gradient mismatch issue that occurs due to the use of surrogate function to overcome spiking neurons’ non-differentiability. We showed that SNNs with improved inherent recurrence dynamics reduce the gap in speech recognition performance from LSTMs and GRUs to 1.10% and 0.36% on TIMIT and LibriSpeech 100h dataset. We also demonstrated that improved SNNs lead to 10.13-11.14× savings in multiplication operations over standard GRUs on TIMIT and LibriSpeech 100h speech recognition problem. This work serves as an example of how the inherent recurrence of SNNs can be used to effectively learn long temporal sequences for applications on edge computing platforms.
[Figure: (a) prediction accuracy (%) vs. output precision (1-bit, 3-bit, 5-bit, 32-bit), showing a large difference at low precision; (b) prediction accuracy (%) vs. number of neurons per layer (1x, 1.5x, 2x, 3x).]
Acknowledgements
This work was supported in part by the Center for Brain Inspired Computing (C-BRIC)—one of the six centers in Joint University Microelectronics Program (JUMP), in part by the Semiconductor Research Corporation (SRC) Program sponsored by Defense Advanced Research Projects Agency (DARPA), in part by Semiconductor Research Corporation, in part by the National Science Foundation, in part by Intel Corporation, in part by the Department of Defense (DoD) Vannevar Bush Fellowship, in part by the U.S. Army Research Laboratory, and in part by the U.K. Ministry of Defence under Agreement W911NF-16-3-0001”
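For anyone curious about the mechanics the authors describe, here is a minimal sketch (my own illustration, not the authors' code) of the surrogate-gradient trick applied to a leaky integrate-and-fire (LIF) neuron in PyTorch. The decay and threshold constants, the fast-sigmoid surrogate, and all function names are hypothetical choices for illustration:

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Hard Heaviside spike in the forward pass; smooth surrogate gradient
    in the backward pass, since the spike itself is non-differentiable."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # binary spike when potential crosses threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Derivative of the fast sigmoid x / (1 + |x|), a common surrogate
        # for the Heaviside step.
        return grad_output / (1.0 + v.abs()) ** 2

def lif_step(x, mem, decay=0.9, threshold=1.0):
    """One time step of a leaky integrate-and-fire neuron. The carried
    membrane potential `mem` is the 'inherent recurrence' the paper refers to."""
    mem = decay * mem + x                          # leaky integration of input
    spike = SurrogateSpike.apply(mem - threshold)  # fire on threshold crossing
    mem = mem - spike * threshold                  # soft reset after a spike
    return spike, mem

# Unroll over a short random input sequence.
mem = torch.zeros(2)
for t in range(5):
    spike, mem = lif_step(torch.randn(2), mem)
```

The "gradient mismatch" the conclusion mentions comes from exactly this setup: the forward pass emits hard binary spikes while the backward pass uses the smooth surrogate, and the paper's multi-bit outputs are aimed at narrowing that gap.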
My opinion only. DYOR.
FF
AKIDA BALLISTA