I couldn't find this 2024 conference paper posted yet, but maybe my search didn't capture it. Anyway, it was a positive presentation at an IEEE conference, and below are a couple of snips. As they advise: "This is an accepted manuscript version of a paper before final publisher editing and formatting. Archived with thanks to IEEE."
As the authors acknowledge:
"This work has been partially supported by King Abdullah University of Science and Technology CRG program under grant number: URF/1/4704-01-01. We would like also to thank Edge Impulse and Brainchip companies for providing us with the software tools and hardware platform used during this work."
One of the authors, M. E. Fouda, caught my eye given his affiliation "3" below, maybe his employer...hmmmm.
D. A. Silva1, A. Shymyrbay1, K. Smagulova1, A. Elsheikh2, M. E. Fouda3,† and A. M. Eltawil1
1 Department of ECE, CEMSE Division, King Abdullah University of Science and Technology, Thuwal 23955, Saudi Arabia
2 Department of Mathematics and Engineering Physics, Faculty of Engineering, Cairo University, Giza 12613, Egypt
3 Rain Neuromorphics, Inc., San Francisco, CA, 94110, USA
†Email: foudam@uci.edu
End-to-End Edge Neuromorphic Object Detection System
Abstract—Neuromorphic accelerators are emerging as a potential solution to the growing power demands of Artificial Intelligence (AI) applications. Spiking Neural Networks (SNNs), which are bio-inspired architectures, are being considered as a way to address this issue. Neuromorphic cameras, which operate on a similar principle, have also been developed, offering low power consumption, microsecond latency, and robustness in various lighting conditions. This work presents a full neuromorphic system for Computer Vision, from the camera to the processing hardware, with a focus on object detection. The system was evaluated on a compiled real-world dataset and a new synthetic dataset generated from existing videos, and it demonstrated good performance in both cases. The system was able to make accurate predictions while consuming 66 mW, with a sparsity of 83%, and a time response of 138 ms.
VI. CONCLUSION AND FUTURE WORK
This work presented a low-power, real-time, fully spiking neuromorphic system for object detection based on iniVation's DVXplorer Lite event-based camera and Brainchip's Akida AKD1000 spiking platform. The system was evaluated on three different datasets, comprising real-world and synthetic samples. The final mapped model achieved an mAP of 28.58 on the GEN1 dataset, equivalent to 54% of a more complex state-of-the-art model, and 89% of the best-reported detection performance on the single-class dataset PEDRo, with 17x fewer parameters. A power consumption of 66 mW and a latency of 138.88 ms were reported, making the system suitable for real-time edge applications.
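As a rough sanity check (my own back-of-the-envelope arithmetic, not from the paper), the reported figures imply a baseline mAP for the more complex state-of-the-art model, plus an energy-per-inference and throughput figure:

```python
# Back-of-the-envelope arithmetic from the figures reported in the conclusion.
# Inputs are the paper's numbers; the derived values are my own calculation.

map_gen1 = 28.58        # reported mAP on the GEN1 dataset
ratio_vs_sota = 0.54    # "54% of a more complex state-of-the-art model"
power_w = 0.066         # 66 mW
latency_s = 0.13888     # 138.88 ms

implied_sota_map = map_gen1 / ratio_vs_sota           # implied baseline mAP
energy_per_inference_mj = power_w * latency_s * 1e3   # energy per inference, mJ
throughput_fps = 1.0 / latency_s                      # inferences per second

print(f"implied SOTA mAP ~ {implied_sota_map:.1f}")       # ~52.9
print(f"energy/inference ~ {energy_per_inference_mj:.1f} mJ")  # ~9.2 mJ
print(f"throughput ~ {throughput_fps:.1f} fps")           # ~7.2 fps
```

So each detection costs on the order of 9 mJ, which is the kind of budget that makes battery-powered edge deployment plausible.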
For future work, the authors expect to adapt different models to the Akida platform, including more recent releases of the YOLO family. They also plan to evaluate those models in real-world scenarios rather than recordings, and to acquire more data to test this setup under different challenging conditions.