"You can go to the DeGirum website ... and try Akida with your model or our model"

Hi fmf,
The authors are from a couple of Italian Unis and ESA. Alexander Hadjiivanov studied at UNSW.
The Abstract provides some informative background which illustrates the context:
"Spiking Neural Networks (SNN) are highly attractive due to their theoretically superior energy efficiency due to their inherently sparse activity induced by neurons communicating by means of binary spikes. Nevertheless, the ability of SNN to reach such efficiency on real world tasks is still to be demonstrated in practice. To evaluate the feasibility of utilizing SNN onboard spacecraft, this work presents a numerical analysis and comparison of different SNN techniques applied to scene classification for the EuroSAT dataset. Such tasks are of primary importance for space applications and constitute a valuable test case given the abundance of competitive methods available to establish a benchmark. Particular emphasis is placed on models based on temporal coding, where crucial information is encoded in the timing of neuron spikes. These models promise even greater efficiency of resulting networks, as they maximize the sparsity properties inherent in SNN. A reliable metric capable of comparing different architectures in a hardware-agnostic way is developed to establish a clear theoretical dependence between architecture parameters and the energy consumption that can be expected onboard the spacecraft. The potential of this novel method and its flexibility to describe specific hardware platforms is demonstrated by its application to predicting the energy consumption of a BrainChip Akida AKD1000 neuromorphic processor."
The tests were carried out on a single Akida node (4 NPUs). While this serves to demonstrate the versatility of Akida, I guess this would have come at a latency penalty if the various "layers" needed to be run sequentially through the node. Still, as they were looking at energy efficiency, this is a secondary consideration.
I wonder whether recycling the layers through a single node had anything to do with the premature-spiking problem, since it increases the latency of processing:
3.4 Summary
"SNN still struggle to scale to deeper architectures: when the number of layers N≥5, higher layers start to fire while only partial information is available from lower layers. Possible causes for this include the large memory consumption at training and error accumulation, both due to the unrolling in time adopted by SG, which can limit the effectiveness of regularization methods such as BNTT."
4 Hardware testing
In order to evaluate the energy consumption of spiking networks on actual hardware, a series of benchmark models were implemented on the BrainChip Akida AKD1000 [78] device as it has built-in power consumption reporting capabilities. The AKD1000 system-on-chip (SoC) is the first generation of digital neuromorphic accelerator by BrainChip, designed to facilitate the evaluation of models for different types of tasks, such as image classification and online on-chip learning. Three convolutional models compatible with the AKD1000 hardware were trained on the EuroSAT dataset. They consist of convolutional blocks made up of a Conv2D → BatchNorm → MaxPool → ReLU stack of layers. The last convolutional block lacks the MaxPool layer and is followed by a flattening and a dense linear layer at the end. The three models differ in the number of convolutional blocks, and in the number of filters in each Conv2D layer: the architectures are detailed in Fig. 10. The selection of rather small models was determined by the hardware capabilities, as only a single AKD1000 node was available during this work. The Akida architecture can be arranged in multiple parallel nodes to host larger models.
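To get a feel for the scale of such models, here is a rough sketch of how one could tally output shapes and multiply-accumulate (MAC) counts for a small CNN of that block structure on 64×64×3 EuroSAT images. The filter counts below are illustrative guesses, not the paper's actual Fig. 10 architectures:

```python
# Sketch: output shapes and MAC counts for a small CNN built from
# Conv2D -> BatchNorm -> MaxPool -> ReLU blocks on EuroSAT-sized input.
# Filter counts are illustrative, not the paper's Fig. 10 values.

def conv_block(h, w, c_in, c_out, k=3, pool=True):
    """One 'same'-padded k x k Conv2D, then optional 2x2 MaxPool.
    Returns the output shape and the MAC count of the convolution."""
    macs = h * w * c_out * (k * k * c_in)  # one MAC per kernel tap per output
    if pool:
        h, w = h // 2, w // 2
    return (h, w, c_out), macs

h, w, c = 64, 64, 3          # EuroSAT RGB input
total_macs = 0
for filters, pool in [(16, True), (32, True), (64, False)]:  # last block: no pool
    (h, w, c), macs = conv_block(h, w, c, filters, pool=pool)
    total_macs += macs

flat = h * w * c             # flatten before the final dense layer
total_macs += flat * 10      # dense classifier: 10 EuroSAT classes

print(flat, total_macs)      # 16384 11370496
```

For a spiking version, the paper's point is that only a sparse subset of these MACs actually fires per inference, which is where the −50 % to −80 % EMAC figures below come from.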
Jonathan Tapson said that Akida 2 is 8 times as efficient as Akida 1000. That gives an even greater power/energy advantage. It can also handle 8-bit activations.
As to the models:
"The EuroSAT RGB dataset, a classification task representative of a class of tasks of potential interest in the field of Earth Observation, was selected as case study. SNN models based on both temporal and rate coding, and their ANN counterparts, were compared in a hardware-agnostic way in terms of accuracy and complexity by means of a novel metric."
"Benchmark SNN models, both latency and rate based, exhibited a minimal loss in accuracy, compared with their equivalent ANN, with significantly lower (from −50 % to −80 %) EMAC per inference."
As we know, Edge Impulse can rapidly develop models for Akida, and then there is on-chip learning. And we're mates with DeGirum.
https://au.video.search.yahoo.com/s...04a57e78d47e4795939bc4ed54b9d967&action=click
Akida uses rank (temporal) coding rather than rate coding, and I think rank coding is faster than rate coding.
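A toy sketch of the difference (my own illustration, not BrainChip's actual encoding scheme): rate coding conveys an intensity as a spike count over many time steps, while rank/latency coding conveys it in the timing of a single spike, so far fewer spikes are emitted and the strongest input can be read off the first spike:

```python
import numpy as np

# Toy comparison of rate vs rank (latency) coding for one analog value
# per neuron. Illustrative sketch only, not BrainChip's actual scheme.

rng = np.random.default_rng(0)
x = np.array([0.9, 0.2, 0.6])   # normalized input intensities
T = 100                          # simulation window in time steps

# Rate coding: spike count over T steps is proportional to intensity.
rate_spikes = (rng.random((T, x.size)) < x).astype(int)

# Rank/latency coding: one spike per neuron; stronger input -> earlier spike.
latency = np.round((1.0 - x) * (T - 1)).astype(int)

print("rate-coded spikes:", rate_spikes.sum())    # roughly T * x.sum() spikes
print("latency-coded spikes:", x.size)            # exactly one per neuron
print("first to fire:", int(np.argmin(latency)))  # index of strongest input
```

The rate code here needs on the order of a hundred spikes to say what the latency code says with three, which is the sparsity argument behind the energy figures above.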