Because it is event-driven, I think one of the metrics likely to be used to stress not just Akida but other SNNs is the timing between spikes (the inter-spike interval).
However, I would think that processing events more frequently also consumes more power. I don't know the minimum inter-spike interval that Akida can process comfortably; it may already be capable of keeping up with the sensors that exist.
While spikes with a short time between pulses might be most beneficial for detection or inference on a video stream, training the network itself may not require such rapid input. However, it may require more training passes to reach better accuracy.
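To make the timing metric concrete, here is a minimal Python sketch (the timestamps and units are made up for illustration) that computes inter-spike intervals and the peak instantaneous event rate a processor would have to keep up with:

```python
# Hypothetical spike timestamps in microseconds; real values would come
# from an event sensor or a recorded spike train.
spike_times_us = [0, 950, 1900, 2100, 4000, 4050, 9000]

# Inter-spike intervals: the gap between consecutive spikes.
isis_us = [t2 - t1 for t1, t2 in zip(spike_times_us, spike_times_us[1:])]

min_isi_us = min(isis_us)        # shortest gap the hardware must handle
peak_rate_hz = 1e6 / min_isi_us  # peak instantaneous event rate

print(f"min ISI: {min_isi_us} us, peak rate: {peak_rate_hz:.0f} events/s")
```

The point is that average frame rate is not the whole story: a burst with a 50 µs gap demands a momentary throughput of 20,000 events/s even if the long-run average is far lower.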
Memory, model parameter count, power consumption, and cost will all be factors, but Akida will also require different benchmarking criteria from the existing AI accelerators that crunch matrices.
Hi @FrederikSchack , JD,
We know from nViso that Akida 1 can run at better than 1000 fps equivalent; if memory serves, it tops out at about 1600. Akida 1 can process 30 fps with 2 nodes (8 NPUs). Akida 1 can also run several independent parallel threads, and those threads can interpret different types of data; that all comes down to the configuration and the different model libraries and weights.
I think Akida 2 has 64 nodes maximum, but can be connected to a lot more Akida 2s.
CORRECTION: 128 nodes:
https://www.hackster.io/news/brainc...-vision-transformer-acceleration-5fc2d2db9d65
One limiting factor on event rate, apart from the actual rate at which events occur, is the sensor's response/recovery time. A DVS like Prophesee's has to compare each pixel's photodiode output with a threshold voltage to determine whether an event has been detected; if the diode output falls below the threshold, it is ignored.
The signals from each pixel of Prophesee's DVS (event camera) undergo a lot of processing.
This is the circuitry connected to each individual pixel of Prophesee's collision anticipation DVS:
US2021056323A1 FAST DETECTION OF SECONDARY OBJECTS THAT MAY INTERSECT THE TRAJECTORY OF A MOVING PRIMARY OBJECT
A system (1) for detecting dynamic secondary objects (55) that have a potential to intersect the trajectory (51) of a moving primary object (50), comprising a vision sensor (2) with a light-sensitive area (20) that comprises event-based pixels (21), so that a relative change in the light intensity impinging onto an event-based pixel (21) of the vision sensor (2) by at least a predetermined percentage causes the vision sensor (2) to emit an event (21a) associated with this event-based pixel (21), wherein the system (1) further comprises a discriminator module (3) that gets both the stream of events (21a) from the vision sensor (2) and information (52) about the heading and/or speed of the motion of the primary object (50) as inputs, and is configured to identify, from said stream of events (21a), based at least in part on said information (52), events (21b) that are likely to be caused by the motion of a secondary object (55), rather than by the motion of the primary object (50).
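The event-generation rule in that claim, an event fires when a pixel's intensity changes by at least a predetermined percentage, can be sketched in a few lines of Python. Everything here (the intensity values, the 15% threshold) is a made-up illustration, not Prophesee's actual circuit behaviour:

```python
# Toy model of an event-based (DVS-style) pixel: an event fires when the
# intensity changes by at least `threshold_pct` relative to the intensity
# at which the last event was emitted. Polarity is +1 (brighter) or -1.
def dvs_events(intensities, threshold_pct=0.15):
    events = []
    ref = intensities[0]                # intensity at the last emitted event
    for t, val in enumerate(intensities[1:], start=1):
        change = (val - ref) / ref      # relative change since last event
        if abs(change) >= threshold_pct:
            events.append((t, 1 if change > 0 else -1))
            ref = val                   # reset the reference level
    return events

# Hypothetical per-sample intensities for one pixel: small drifts are
# ignored, only the big jumps produce events.
print(dvs_events([100, 104, 120, 121, 90, 89]))
```

Note that a static scene (constant intensity) produces no events at all, which is exactly why the spike rate downstream is dictated by real-world activity rather than by a fixed frame clock.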
The spike rate is in the lap of the gods: it is determined by real-world events and the ability of the sensor to respond. Each Akida NPU does packetize input events, but the input spike rate limits the response time:
WO2020092691A1 AN IMPROVED SPIKING NEURAL NETWORK