Our IBM friend Kevin Johansen may have drawn inspiration from the Swedish university white paper below, titled:
Comparison of Akida Neuromorphic Processor and NVIDIA Graphics Processor Unit for Spiking Neural Networks
CARL CHEMNITZ
MALIK ERMIS
Degree Project in Computer Science and Engineering, First Cycle, 15 credits
Date: June 9, 2025
Supervisor: Jörg Conradt
Examiner: Pawel Herman
Swedish title: Jämförelse av neuromorfisk processor Akida och NVIDIA grafikkort för Spiking Neural Networks
School of Electrical Engineering and Computer Science
Some snippets:
Key Observations
• The neuromorphic Akida demonstrates 99.520 % (MNIST) and 95.956-99.699 % (YOLO) energy reduction compared to the GPU for the same or similar networks.
• For simpler networks, Akida processes 76.733 % faster (1.622 ms vs 7.014 ms), proving its suitability for latency-critical real-time processing tasks (see the arithmetic sketch after this list). However, for more complex models, the AKD1000 (160.872 ms) is outperformed by the GTX 1080 (73.769 ms).
• Akida’s adaptive clocking (35-179 MHz) reduces clock speeds by an average of 86.610 % versus the GPU’s relatively fixed 1746 MHz operation, reflecting dynamic power management driven by sparse computing.
• Sparse input patterns, exploiting Akida’s neuromorphic architecture, achieve up to 58 times fewer clock cycles through spike-based, asynchronous processing, demonstrating significant improvements in computational efficiency and suitability for edge AI systems.
• For the MNIST model on Akida, the correlation between clock cycles and both energy and time was essentially zero (0.0153, with a p-value of 0.497). By contrast, the YOLOv2 model shows a small but definite correlation between clock cycles and both energy consumption (0.2119) and inference time (0.2022), with p-values below 0.05, making the correlation plausible (see the correlation sketch after this list).
• Quantization has a major effect on all key metrics except the GPU’s clock speed, reducing energy consumption and latency, and improving throughput, by 85.9-99.9 %. This demonstrates the suitability and importance of quantization for deploying neural networks on resource-limited hardware (see the quantization sketch after this list). However, for more complex neural networks, this comes at the cost of reduced accuracy, highlighting a critical trade-off between computational efficiency and predictive performance.
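The percentages in the snippets are never defined explicitly, but they look like plain relative reductions. A minimal arithmetic sketch, assuming that formula (the helper function is mine, not the thesis's):

```python
def percent_reduction(baseline: float, value: float) -> float:
    """Relative reduction of `value` versus `baseline`, in percent."""
    return (baseline - value) / baseline * 100.0

# Simple-network latency from the snippet: 1.622 ms (Akida) vs 7.014 ms (GPU).
print(f"{percent_reduction(7.014, 1.622):.3f} %")
# -> 76.875 %; the thesis reports 76.733 %, so the published figure was
# presumably computed from unrounded measurements.
```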
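The MNIST/YOLOv2 bullet reports correlation coefficients with p-values, which reads like Pearson's r. A minimal sketch of how such a statistic is computed with scipy; the data below is synthetic stand-in data, not the thesis's measurements:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Hypothetical per-inference cycle counts and weakly coupled energy readings.
clock_cycles = rng.normal(1e6, 1e4, size=500)
energy = 0.2 * clock_cycles + rng.normal(0, 5e4, size=500)

r, p = pearsonr(clock_cycles, energy)
print(f"r = {r:.4f}, p = {p:.4g}")
# A p-value below 0.05 (as for YOLOv2) suggests the correlation is unlikely
# to be chance; p = 0.497 (as for MNIST) means no detectable relationship.
```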
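The quantization bullet does not say how the models were quantized; the thesis presumably used BrainChip's and NVIDIA's own toolchains. Purely as a generic illustration of the technique, a post-training dynamic quantization sketch in PyTorch (the model and layer choices are hypothetical):

```python
import torch
import torch.nn as nn

# A toy MNIST-sized classifier standing in for the thesis's models.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

# Replace float32 Linear layers with int8 equivalents: weights shrink ~4x
# and inference gets cheaper, at a possible cost in accuracy.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 784)
print(model(x).shape, quantized(x).shape)  # same interface, cheaper arithmetic
```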
Whole white paper: