Fullmoonfever
Top 20
New update to BRN Git.
Not that I understand most of it, but @Diogenese may see something new or different?
I did like seeing the TENNs eye tracking mention & support for 4-bit in Akida 2.0.
Presuming the dynamic shapes update adds flexibility for both Keras and ONNX-based models?
ktsiknos-brainchip released this 3 days ago
2.14.0-doc-1
09e60f4
Upgrade to Quantizeml 0.17.1, Akida/CNN2SNN 2.14.0 and Akida models 1.8.0
Update QuantizeML to version 0.17.1
New features
- Now handling models with dynamic shapes in both Keras and ONNX. Shape is deduced from calibration samples or from the input_shape parameter (rough usage sketched after this list).
- Added a Keras and ONNX common reset_buffers entry point for spatiotemporal models
- GlobalAveragePooling output will now be quantized to QuantizationParams.activation_bits instead of QuantizationParams.output_bits when preceded by an activation
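Rough sketch of how I'd expect the dynamic-shape handling and 4-bit quantization to look from Python. The toy Keras model, the exact quantize()/QuantizationParams keyword usage and the reset_buffers import path are my assumptions, not something I've checked against the docs:

import numpy as np
import tensorflow as tf
from quantizeml.layers import QuantizationParams
from quantizeml.models import quantize

# Toy float model standing in for a real one; note the dynamic (None) spatial dims.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(None, None, 3)),
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
    # GAP sits after an activation, so per the note above its output should be
    # quantized to activation_bits rather than output_bits.
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Small representative calibration batch; per the notes, the concrete input shape
# is deduced from these samples (or supplied via the input_shape parameter).
calibration_samples = np.random.rand(32, 224, 224, 3).astype("float32")

qparams = QuantizationParams(weight_bits=4, activation_bits=4)  # 4-bit, per Akida 2.0

quantized = quantize(model, qparams=qparams,
                     samples=calibration_samples, num_samples=32)

# For spatiotemporal (TENN) models there is now a common Keras/ONNX reset_buffers
# entry point; the import path below is a guess on my part.
# from quantizeml.models import reset_buffers
# reset_buffers(quantized)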
Bug fixes
- Applied reset_buffers to variables recording to prevent a shape issue when converting a model to Akida
- Handle ONNX models with shared inputs that would not quantize or convert properly
- Handle unsupported strides when converting an even kernel to odd
- Fixed analysis module issue when applied to TENNs models
- Fixed analysis module weight quantization error on Keras models
- Keras set_model_shape will now handle tf.dataset samples
- It is now possible to quantize a model with a split layer as input
Update Akida and CNN2SNN to version 2.14.0
Aligned with FPGA-1692(2-nodes)/1691(6-nodes)
New features and updates:
- [cnn2snn] Updated requirement to QuantizeML 0.17.0
- [akida] Added support for 4-bit in 2.0. Features are aligned with 1.0: InputConv2D, Conv2D, DepthwiseConv2D and Dense layers support 4-bit activations and, except for InputConv2D, 4-bit weights.
- [akida] Extended TNP_B support to 2048 channels and filters
- [akida] HRC is now optional in a virtual device
- [akida] For real devices, input and weight SRAM values are now read from the mesh
- [akida] Introduced an akida.NP.SramSize object to manage default memories
- [akida] Extended the python Layer API with "is_target_component(NP.type)" and "macs" (rough usage sketched after this list)
- [akida] Added "akida.compute_minimal_memory" helper
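For the new Layer API additions, something like this is what I'd expect; convert() is standard CNN2SNN usage, but the macs / is_target_component / compute_minimal_memory calls are taken straight from the bullet names above and their exact signatures are assumptions:

import akida
from cnn2snn import convert

# `quantized` is the QuantizeML-quantized model from the sketch further up.
akida_model = convert(quantized)

for layer in akida_model.layers:
    # New in 2.14.0 per the notes: per-layer MACs exposed on the python Layer API.
    print(layer.name, layer.macs)
    # Also new: query whether a layer targets a given NP type
    # (the enum spelling below is a guess).
    # print(layer.is_target_component(akida.NP.Type.CNP))

# New helper to work out the minimal memory a model needs
# (argument list assumed to be just the model).
# print(akida.compute_minimal_memory(akida_model))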
Bug fixes
- [akida] Fixed several issues when computing input or weight memory sizes for layers
Update Akida models to 1.8.0
- Updated QuantizeML dependency to 0.17.0 and CNN2SNN to 2.14.0
- Updated 4-bit models for 2.0 and added a bitwidth parameter to the pretrained helper (see the sketch after this list)
- TENNs EyeTracking is now evaluated on the labeled test set
- Dropped the MACs computation helper and CLI: MACs are now natively available on Akida layers and models.
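On the model zoo side, I'd guess the pretrained helpers now look something like this; only the existence of a bitwidth parameter comes from the notes, the helper choice and keyword usage are illustrative:

from akida_models import akidanet_imagenet_pretrained

# Existing-style pretrained helper from akida_models.
model = akidanet_imagenet_pretrained(quantized=True)

# Per the 1.8.0 notes the pretrained helpers gained a bitwidth parameter, so the
# 4-bit 2.0 weights can be requested explicitly (keyword usage below is my guess).
# model_4bit = akidanet_imagenet_pretrained(bitwidth=4)

# The separate MACs helper/CLI is gone; MACs now come straight off Akida layers
# and models after conversion (see the layer.macs loop in the sketch above).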
Documentation update
- Updated 2.0 4-bit accuracies in the model zoo page
- Updated the advanced ONNX quantization tutorial with MobileNetV4