Hi Larry,
I don't think the times really tell us much.
The test was done using the Akida ADE simulation software:
"3.3. Intel Loihi and BrainChip Akida
The 10-layer deep learning network was converted into formats compatible with Intel and BrainChip. In both cases, precision of the weights was varied to see how it affected model size and accuracy. Timing results were also produced.
For Intel, we accessed Loihi (Nahuku32) chips via the cloud and NxSDK 1.0;
for BrainChip, we simulated the chips on CPU via their hardware abstraction layer. Timing results are not comparable because CPU or GPU simulations are much faster than running on neuromorphic hardware."
"Due to this constraint, our experiments were conducted using Intel’s cloud environment and the stand-alone simulation environment for Akida. There are drastic differences between software simulation and hardware and we intend to study those further."
However, this does explain something else. I saw a quote a couple of weeks ago which referred to 8-bit Akida, and, as we all know, Akida silicon only supports 4-bit weights. Apparently, though, when using the Akida simulator you can choose 8-bit precision.
"For BrainChip, accuracy also increased with precision; maximum accuracy was 82.5% at 8 bits and 66.8% at 4 bits. The 8-bit model produced results in 1.23 seconds, the 4-bit model in 1.20 seconds. Time was significantly higher than the full precision model but that is to be expected as neuromorphic hardware is being simulated whereas the full precision model does not have this constraint. Going forward, we hope to tune the BrainChip model to increase its accuracy and time its operation in hardware, not just CPU simulation."
Loihi was run on actual cloud-hosted hardware (Nahuku32), while Akida was run in the MetaTF Akida simulation software, so the Loihi time is distorted by internet latency, while the Akida time derives from a software implementation on a CPU.