Frangipani
University researchers in the United Arab Emirates (from New York University Abu Dhabi and Khalifa University, Abu Dhabi) have experimented with Akida for neuromorphic AI-based robotics, the field our CTO Dr. Tony Lewis is an expert in:
[Three attachments in the original post]
Rachmad Vidya Wicaksana Putra and Muhammad Shafique, two of the New York University (NYU) Abu Dhabi eBrain Lab researchers whose above paper on experimenting with Akida for neuromorphic AI-based robotics was published almost exactly a year ago, appear to be extremely enamoured with our neuromorphic processor! They have co-authored another paper on AKD1000 with their colleague Pasindu Wickramasinghe, which was published yesterday and has also been accepted at the International Joint Conference on Neural Networks (IJCNN), to be held June 30 to July 5, 2025 in Rome, Italy.
“Neuromorphic Processors
1) Overview: The energy efficiency potentials offered by SNNs can be maximized by employing neuromorphic hardware processors [22]. In the literature, several processors have been proposed, and they can be categorized as research and commodity processors. Research processors refer to neuromorphic chips that are designed only for research and not commercially available, hence access to these processors is limited. Several examples in this category are SpiNNaker, NeuroGrid, IBM’s TrueNorth, and Intel’s Loihi [4]. Meanwhile, commodity processors refer to neuromorphic chips that are available commercially, such as BrainChip’s Akida [7] and SynSense’s DYNAP-CNN [8]. In this work, we consider the Akida processor as it supports on-chip learning for SNN fine-tuning, which is beneficial for adaptive edge AI systems [7].
(…)
E. Further Discussion
It is important to compare neuromorphic-based solutions against the state-of-the-art ANN-based solutions, which typically employ conventional hardware platforms, such as CPUs, GPUs, and specialized accelerators (e.g., FPGA or ASIC). To ensure a fair comparison, we select object recognition as the application and YOLOv2 as the network, while considering performance efficiency (FPS/W) as the comparison metric. Summary of the comparison is provided in Table I, and it clearly shows that our Akida-based neuromorphic solution achieves the highest performance efficiency. This is due to the sparse spike-driven computation that is fully exploited by neuromorphic processor, thus delivering highly power/energy-efficient SNN processing. Moreover, our Akida-based neuromorphic solution also offers an on-chip learning capability, which gives it further advantages over the other solutions. This comparison highlights the immense potentials of neuromorphic computing for enabling efficient edge AI systems.
VI. CONCLUSION
We propose a novel design methodology to enable efficient SNN processing on commodity neuromorphic processors. It is evaluated using a real-world edge AI system implementation with the Akida processor. The experimental results demonstrate that our methodology leads the system to achieve high performance and high energy efficiency across different applications. It achieves low latency of inference (i.e., less than 50ms for image classification, less than 200ms for real-time object detection in video streaming, and less than 1ms for keyword recognition) and low latency of on-chip learning (i.e., less than 2ms for keyword recognition), while consuming less than 250mW of power. In this manner, our design methodology potentially enables ultra-low power/energy design of edge AI systems for diverse application use-cases.”
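To put those conclusion figures into perspective, here is a quick back-of-the-envelope Python sketch of the two quantities the authors work with: the FPS/W performance-efficiency metric from their Table I, and the energy per inference implied by the quoted latency and power bounds. The FPS value in the last line is a placeholder of mine, not a figure from the paper.

```python
def performance_efficiency(fps: float, power_w: float) -> float:
    """Performance efficiency as used in the paper's Table I: frames per second per watt."""
    return fps / power_w


def energy_per_inference_mj(latency_ms: float, power_mw: float) -> float:
    """Energy per inference in millijoules, assuming the quoted power is drawn for the whole latency window."""
    return (latency_ms / 1000.0) * (power_mw / 1000.0) * 1000.0


# Worst-case bounds quoted in the conclusion (250 mW power budget):
print(energy_per_inference_mj(50, 250))   # image classification: <= 12.5 mJ per inference
print(energy_per_inference_mj(200, 250))  # object detection:     <= 50 mJ per frame
print(energy_per_inference_mj(1, 250))    # keyword recognition:  <= 0.25 mJ per inference

# Hypothetical Table I-style metric (placeholder FPS, not a number from the paper):
print(performance_efficiency(fps=20, power_w=0.25))  # 80.0 FPS/W
```

So even at the stated worst case, every inference lands in the low-millijoule range, which is exactly the kind of budget you want for battery-powered edge devices.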
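And for anyone wondering what the "sparse spike-driven computation" credited above actually buys: a neuromorphic core only does work when a spike (a non-zero event) arrives, whereas a dense ANN-style matrix multiply touches every weight regardless of the input. Here is a toy NumPy illustration of the difference in operation count; nothing Akida-specific, just the principle, with made-up layer sizes and activity level.

```python
import numpy as np

rng = np.random.default_rng(0)
num_inputs, num_neurons = 1024, 256
weights = rng.standard_normal((num_inputs, num_neurons))

# Binary spike vector with roughly 5% activity, i.e. a sparse SNN layer input.
spikes = (rng.random(num_inputs) < 0.05).astype(np.float64)

# Dense (ANN-style) evaluation: every weight is read and multiplied.
dense_out = spikes @ weights
dense_ops = weights.size

# Event-driven (SNN-style) evaluation: only rows hit by a spike are accumulated.
active = np.flatnonzero(spikes)
event_out = weights[active].sum(axis=0)
event_ops = active.size * num_neurons

assert np.allclose(dense_out, event_out)
print(f"dense ops: {dense_ops}, event-driven ops: {event_ops}, "
      f"reduction: {dense_ops / event_ops:.1f}x")
```

At around 5% input activity the event-driven path does roughly 20x fewer accumulate operations for the same result, which is essentially the effect the authors credit for the FPS/W advantage in their comparison.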