It appears some within the SENTRY project have been creating ResNet model variants and deploying them to Akida, with some positive results.
From memory, I think @Frangipani may have posted previously about Dr Eappen?
Apologies if all this has been posted already, but I haven't done a search or caught up with all the posts. I've been trying to ignore the remuneration / AGM debates and opinions, as I will form my own and don't personally feel the need to engage.
From a couple of days ago.
GLUSE: Enhanced Channel-Wise Adaptive Gated Linear Units SE for Onboard Satellite Earth Observation Image Classification
Thanh-Dung Le, Vu Nguyen Ha, Ti Ti Nguyen, Geoffrey Eappen, Prabhu Thiruvasagam, Hong-fu Chou, Duc-Dung Tran, Hung Nguyen-Kha, Luis M. Garces-Socarras, Jorge L. Gonzalez-Rios, Juan Carlos Merlano-Duncan, Symeon Chatzinotas
The authors are with the Interdisciplinary Centre for Security, Reliability, and Trust (SnT), University of Luxembourg, Luxembourg (corresponding author email: thanh-dung.le@uni.lu). This work was funded by the Luxembourg National Research Fund (FNR) under the SENTRY project, grant reference C23/IS/18073708/SENTRY. This paper is a revised and expanded version of a paper entitled “Semantic Knowledge Distillation for Onboard Satellite Earth Observation Image Classification”, which was accepted for presentation at IEEE ICMLCN 2025, Barcelona, Spain, 26–29 May 2025.
Abstract
This study introduces ResNet-GLUSE, a lightweight ResNet variant enhanced with Gated Linear Unit-enhanced Squeeze-and-Excitation (GLUSE), an adaptive channel-wise attention mechanism. By integrating dynamic gating into the traditional SE framework, GLUSE improves feature recalibration while maintaining computational efficiency. Experiments on the EuroSAT and PatternNet datasets confirm its effectiveness, with accuracy exceeding 94% and 98%, respectively. While MobileViT achieves 99% accuracy, ResNet-GLUSE offers 33× fewer parameters, 27× fewer FLOPs, 33× smaller model size (MB), ≈6× lower power consumption (W), and ≈3× faster inference time (s), making it significantly more efficient for onboard satellite deployment.
Furthermore, due to its simplicity, ResNet-GLUSE can be easily mimicked for neuromorphic computing, enabling ultra-low power inference at just 852.30 mW on Akida Brainchip. This balance between high accuracy and ultra-low resource consumption establishes ResNet-GLUSE as a practical solution for real-time Earth Observation (EO) tasks. Reproducible codes are available in our shared repository
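For anyone curious what "GLU-gated SE" means mechanically, here is a rough NumPy sketch of the idea: a standard squeeze-and-excitation channel recalibration, with the excitation path modulated by a GLU-style sigmoid gate. This is my own illustration of the concept as described in the abstract; the function and weight names are assumptions, not the authors' code (their actual implementation is in the repository they link).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gluse_block(x, w1, w2_lin, w2_gate):
    """Channel-wise recalibration: squeeze -> gated excitation -> rescale.

    x: feature map of shape (C, H, W)
    w1: bottleneck projection, shape (C//r, C)
    w2_lin, w2_gate: expansion weights, shape (C, C//r)
    """
    s = x.mean(axis=(1, 2))             # squeeze: global average pool -> (C,)
    h = np.maximum(0.0, w1 @ s)         # bottleneck projection with ReLU
    # GLU-style gating: a linear path modulated by a learned sigmoid gate,
    # instead of plain SE's single linear expansion
    e = (w2_lin @ h) * sigmoid(w2_gate @ h)
    scale = sigmoid(e)                  # per-channel weights in (0, 1)
    return x * scale[:, None, None]     # recalibrate each channel
```

Because each channel weight lies in (0, 1), the block can only attenuate channels, never amplify them, which is the same behaviour as vanilla SE; the gate just makes the attenuation pattern input-adaptive in a second way.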
.......
Figure 5 Performance of the ResNet-GLUSE during the inference on Akida neuromorphic computing at the TelecomAI-lab, SnT
More importantly, due to its architectural simplicity and compactness, the ResNet-GLUSE model can easily be mimicked, adapted, and deployed on the Akida Brainchip neuromorphic computing platform [42]. Experimental deployment on Akida hardware further highlights its exceptional efficiency: average inference power consumption is an extremely low 877 mW, inference energy is just 182.42 mJ/frame, and the frame rate remains a practical 4.81 fps. These results highlight the potential of the ResNet-GLUSE model to operate efficiently on neuromorphic hardware, enabling energy-efficient onboard image analysis in resource-constrained environments. From the boxplot accuracy distribution in Fig. 5, the model's accuracy ranges from about 93% to nearly 96%, with a median of approximately 94.7%, demonstrating stable, high performance across multiple runs and confirming robust inference under varied operational conditions.
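As a quick sanity check, the reported energy-per-frame figure follows directly from power divided by frame rate (mW over frames-per-second gives mJ per frame):

```python
power_mw = 877.0  # reported average inference power on Akida (mW)
fps = 4.81        # reported frame rate (frames/s)

# mW / (frames/s) = mJ/frame
energy_mj_per_frame = power_mw / fps
print(round(energy_mj_per_frame, 2))  # ~182.33, consistent with the reported 182.42 mJ/frame
```

The tiny gap from 182.42 is just rounding in the reported power and frame-rate figures.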
.......
ResNet-GLUSE optimizes accuracy, efficiency, and resource consumption. It reduces energy consumption by up to 6× compared to MobileViT on GPUs
and enables ultra-low power inference (852.30 mW) on Akida Brainchip. Despite a slight complexity increase over SE, its adaptive gating mechanism delivers substantial performance gains with minimal overhead.
......
Acknowledgment
We acknowledge Dr. Geoffrey Eappen for his valuable efforts in mimicking and implementing the model on Akida. This work was funded by the Luxembourg National Research Fund (FNR) under the SENTRY project, grant reference C23/IS/18073708/SENTRY. Part of this work was supported by the SnT TelecomAI Lab, the Marie Speyer Excellence Grant (BrainSat), and the FNR BrainSatCom project (BRIDGES/2024/IS/19003118).