We have software for the Akida Development Environment (ADE) and MetaTF. We have established BrainChip Systems India to support the requirement for robust system software and firmware. As Akida is implemented commercially, it is important that our software is mature and provides a positive user experience.
We also need software/firmware that enables our ARM Cortex microprocessor to configure the NPUs and manage their weights.
There is also software used in the CNN2SNN conversion.
https://doc.brainchipinc.com/user_guide/cnn2snn.html
By paying careful attention to the architecture and training of the CNN, an overly complex conversion step from CNN to SNN can be avoided. The CNN2SNN toolkit comprises a set of functions designed for the popular TensorFlow Keras framework, making it easy to train an SNN-compatible network.
Typical training scenario
The first step in the conversion workflow is to train a standard Keras model. This trained model is the starting point for the quantization stage. Once it is established that the overall model configuration prior to quantization yields satisfactory performance on the task, we can proceed with quantization.

The CNN2SNN toolkit offers a turnkey solution to quantize a model: the quantize function. It replaces the neural Keras layers (Conv2D, SeparableConv2D and Dense) and the ReLU layers with custom CNN2SNN layers, which are quantization-aware derived versions of the base Keras layer types. The resulting quantized model is still a Keras model, with a mix of CNN2SNN quantized layers (QuantizedReLU, QuantizedDense, etc.) and standard Keras layers (BatchNormalization, MaxPool2D, etc.).
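As a concrete illustration, here is a minimal sketch of that workflow. The quantize function and its weight_quantization/activ_quantization parameters are taken from the linked documentation as I understand it, and the small model and dataset placeholders are purely illustrative, not a definitive recipe.

from tensorflow import keras
from cnn2snn import quantize

# 1. Build and train a standard float Keras model.
model = keras.Sequential([
    keras.layers.Conv2D(32, 3, strides=2, input_shape=(28, 28, 1)),
    keras.layers.BatchNormalization(),
    keras.layers.ReLU(),
    keras.layers.Flatten(),
    keras.layers.Dense(10),
])
model.compile(optimizer="adam",
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5)  # train on your own dataset

# 2. Swap the float layers for quantization-aware CNN2SNN layers
#    (4-bit weights and activations here, chosen only as an example).
model_quantized = quantize(model,
                           weight_quantization=4,
                           activ_quantization=4)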
Direct quantization of a standard Keras model (also called post-training quantization) generally introduces a drop in performance. This drop is usually small for 8-bit or even 4-bit quantization of simple models, but it can be very significant for low quantization bitwidths and complex models.
If the quantized model offers acceptable performance, it can be directly converted into an Akida model, ready to be loaded on the Akida NSoC (see the convert function).
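Continuing the sketch above, the direct path looks roughly like this, assuming the convert function named in the docs; the evaluation step uses the standard Keras API and the dataset names are placeholders.

from cnn2snn import convert

# Check that post-training quantization accuracy is acceptable...
# _, acc = model_quantized.evaluate(x_test, y_test)

# ...then convert the quantized Keras model into an Akida model,
# ready to be loaded on the Akida NSoC or run in the simulator.
model_akida = convert(model_quantized)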
However, if the performance drop is too high, quantization-aware training is required to recover the performance achieved prior to quantization. Since the quantized model is a Keras model, it can be trained using the standard Keras API.
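Because the quantized model is still a Keras model, quantization-aware retraining is just a normal compile/fit cycle. The lower learning rate below is a common fine-tuning choice, not a toolkit requirement.

# Fine-tune the quantized model to recover accuracy lost to quantization.
model_quantized.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
# model_quantized.fit(x_train, y_train, epochs=5,
#                     validation_data=(x_test, y_test))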
Note that quantizing directly to the target bitwidth is not mandatory: it is possible to proceed with quantization in a series of smaller steps. For example, it may be beneficial to keep float weights and quantize only the activations, retrain, and then quantize the weights, as sketched below.
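A sketch of that staged approach, with two caveats: it assumes a bitwidth of 0 tells quantize to leave the weights in float, and that quantize can be re-applied to an already quantized model; check the toolkit documentation for the exact convention.

# Step 1: quantize activations only, keep float weights, then retrain.
#         (weight_quantization=0 assumed to mean "no weight quantization")
model_act_only = quantize(model, weight_quantization=0, activ_quantization=4)
# ... compile and fit model_act_only as in the retraining sketch above ...

# Step 2: quantize the weights of the retrained model as well.
model_fully_quantized = quantize(model_act_only,
                                 weight_quantization=4,
                                 activ_quantization=4)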
When Akida is processing input data, the software is dormant.
But when we get to Akida 3000's neural cortex, who knows?