I'm on the fence here. There is a technology gaining some notoriety called in-memory compute. I'm not sure how much this differs from Akida, which also has memory distributed with the compute elements.

"This is also the fastest, most energy-efficient embodiment of Akida. Akida's 2-bit and 4-bit modes are more accurate, but are slower and use more power."
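To get a feel for that bit-width trade-off, here is a minimal sketch. The quantizer below (sign-based binarization for 1 bit, a uniform grid for more bits) is my own generic illustration, not Akida's actual scheme; it just shows why fewer bits mean a coarser approximation of the original weights.

```python
import numpy as np

def quantize(w, bits):
    """Generic illustrative quantizer -- NOT Akida's actual scheme."""
    if bits == 1:
        # Binarize: keep only the sign, scaled by the mean magnitude.
        return np.sign(w) * np.mean(np.abs(w))
    a = np.max(np.abs(w))
    step = 2 * a / (2 ** bits)               # grid step for this bit-width
    q = (np.floor(w / step) + 0.5) * step    # snap to nearest grid level
    return np.clip(q, -a, a)

rng = np.random.default_rng(0)
w = rng.normal(size=1000)                    # stand-in for a layer's weights

errs = {}
for bits in (1, 2, 4, 8):
    errs[bits] = float(np.mean((w - quantize(w, bits)) ** 2))
    print(f"{bits}-bit weights, mean squared error: {errs[bits]:.4f}")
```

The error shrinks as the bit-width grows, which is the accuracy side of the accuracy/speed/power trade-off the quote describes.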
While I am not technical at all, my read of @Diogenese's excellent post is that Akida and Plumerai are a great match for each other.
What concerns me is that Plumerai are offering IP licences for FPGAs. Now, I suppose that, if they were using Akida IP, they could have a right to sub-licence it.
https://plumerai.com/#bnns
IP-core for BNN inference on FPGAs
For customers that use FPGAs and require the most energy-efficient solution, we provide a custom IP-core that is highly optimized for our BNN models and software.

This is Simon Thorpe's presentation, "The precision tradeoff", which explains temporal coding. It's the best explanation of "spikes good - floating point bad" that you could hope to find.

The reason that 1-bit binarization does not lose too much accuracy compared with 8, 16, or 32 bits is explained by Simon Thorpe's discussion of the JAST rules.
Basically, the speed at which the optic nerve responds to a light stimulus is proportional to the strength of that stimulus. This means that, for a camera light sensor, the first pixel to respond is the most important, hence the winner-take-all rule.
https://videos.insa-rennes.fr/video...6ac53873ac3153d6eae4a8812d22c9007c4f1bbcb1ed/
This slide illustrates the neuron receiving the stronger light signal (high intensity) firing before the neuron receiving the low intensity signal.
Rate code BAD!
This slide shows the intensity/delay function of the retina/pixel demonstrating that the stronger the light signal, the faster the response.
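The point of that intensity/delay function, and the contrast with rate coding, can be sketched like this. Again the 1/intensity latency is my illustrative stand-in for the retina's actual response curve:

```python
# Rank-order readout: sorting neurons by first-spike time recovers the
# intensity ranking from ONE spike per neuron -- no firing rates needed.
# The 1/intensity latency model is an illustrative simplification.

intensities = [0.1, 0.8, 0.4, 0.95, 0.25]     # one value per pixel/neuron
latencies = [1.0 / i for i in intensities]    # stronger -> shorter delay

# Order in which the neurons fire (earliest spike first):
firing_order = sorted(range(len(latencies)), key=lambda n: latencies[n])
print(firing_order)
```

The firing order alone reproduces the intensity ranking from brightest to dimmest, which is why a temporal code can be so much cheaper than a rate code.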
If you want to save your soul from hell,
a'ridin' on our range,
then cowboy change your ways today
and get this presentation between your ears.