Some recent updates in April. A couple of interesting ones, I thought.
Update Akida and CNN2SNN to version 2.7.2
New features
- [akida] Full support of standard and depthwise transposed convolution in V2 hardware
- [akida] Initial support of Stem in V2 hardware (not all use cases are covered for now)
- [akida] Introduced the VitEncoderBlock layer: it corresponds to the group of layers that will be mapped to a hardware block
- [akida] Limited multi-sequence mapping on V2 (see the mapping sketch after this list)
- [akida] Engine fully supports 8-bit outputs for Akida 2.0 (with/without activation)
- [akida] Added a workaround to avoid a race condition when NPs try to reach the AeDMA
- [akida] Added support for stride 2 1x1 Conv2D (pointwise) layers for ResNet50
- [akida] Added support for a 7x7, stride 2 InputConv2D layer with max pooling (pool_size=3, pool_stride=2) for ResNet50
- [akida] Added an 'activation' parameter to the Add layer for ResNet50
- [akida] Improved NP count in 'Model.summary()' for 2.0 components
- [akida] Improved layers docstrings
- [cnn2snn] A set of layers compatible with a VitEncoderBlock will now be converted to a single block instead of independent layers (see the conversion sketch after this list)
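To illustrate the new block conversion, here is a minimal sketch of the CNN2SNN flow. `quantized_vit` is an assumed name for a ViT-style Keras model that has already been quantized (e.g. with QuantizeML); the quantization step itself is not shown.

```python
from cnn2snn import convert

# 'quantized_vit' is an assumed, already-quantized ViT-style Keras model.
# convert() produces an Akida model; with CNN2SNN 2.7.2, groups of layers
# that match a VitEncoderBlock pattern are folded into that single layer
# rather than being converted independently.
akida_model = convert(quantized_vit)

# The summary now also reports improved NP counts for 2.0 components.
akida_model.summary()
```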
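And a hedged sketch of mapping a model onto a device, where the (still limited) V2 multi-sequence mapping may come into play; the model path is a placeholder and V2 hardware is assumed to be attached.

```python
import akida

# Placeholder path to a pre-trained Akida model file.
model = akida.Model("model.fbz")

# Pick the first available Akida device.
device = akida.devices()[0]

# Mapping may split the model into several hardware sequences on V2,
# within the limits noted above.
model.map(device)
model.summary()
```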
Update Akida models to 1.5.0
New features
- Aligned with CNN2SNN 2.7.2
- Reworked the detection tools: dataset management is now based on TensorFlow Datasets, the other tools were updated accordingly, and the COCO dataset was added (see the dataset sketch after this list)
- Updated the AkidaNet YOLO/VOC model: now trained on COCO and transferred to all 20 VOC classes
- Introduced the AudioViT/Urbansound model, leveraging a ViT backbone for audio classification
- Added a 'summary' CLI to akida_models that handles Keras, ONNX and Akida models (see the summary sketch after this list)
- Rebased the DVSSamsung model architecture onto its DVSGesture sibling for hardware efficiency
- Pruned an unnecessary reshape from the DS-CNN/KWS model
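As a point of reference for the reworked detection data pipeline, here is a minimal sketch of loading COCO through TensorFlow Datasets directly; it deliberately does not use the akida_models detection helpers, whose exact entry points are not spelled out in these notes.

```python
import tensorflow_datasets as tfds

# Load the COCO 2017 validation split via TensorFlow Datasets; the reworked
# akida_models detection tools build their dataset management on tfds.
ds, info = tfds.load("coco/2017", split="validation", with_info=True)

for example in ds.take(1):
    # Each example carries the image and its object annotations
    # (bounding boxes and labels).
    print(example["image"].shape, example["objects"]["label"])
```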
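The exact flags of the new summary CLI are not given in the notes, so rather than guess them, here is a hypothetical Python dispatcher sketching what "handles Keras, ONNX and Akida models" could look like; the helper name and the extension mapping are assumptions.

```python
import os

def print_model_summary(path):
    """Hypothetical helper: dispatch on file extension and print a summary,
    mirroring the three formats the new CLI is said to handle."""
    ext = os.path.splitext(path)[1].lower()
    if ext in (".h5", ".keras"):
        from tensorflow.keras.models import load_model
        load_model(path).summary()
    elif ext == ".onnx":
        import onnx
        model = onnx.load(path)
        print(onnx.helper.printable_graph(model.graph))
    elif ext == ".fbz":
        from akida import Model
        Model(path).summary()
    else:
        raise ValueError(f"Unsupported model format: {ext}")
```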
Documentation update
- Performances: added AudioViT, updated the detection and audio sections
- Updated "Advanced ONNX model quantization" removing parts for patterns that are now supported