Hi All
It has often been said that MegaChips is very quiet, which is true, but it is also MegaChips' modus operandi to be very, very discreet. Try as they might, however, if you look hard enough they slip up in the end.
It does take a little work, but I found the following paper some time ago:
N Yoshida, H Miura, T Matsutani… - 2022 IEEE 8th World …, 2022 - ieeexplore.ieee.org
“This study applied data augmentation to improve the inference accuracy of edge-artificial intelligence on-chip learning, which uses the fine-tuning technique with limited knowledge and …”
The first thing to note is that these researchers all work for MegaChips.
The second thing is that the abstract states:
“This study applied data augmentation to improve the inference accuracy of edge-artificial intelligence on-chip learning, which uses the fine-tuning technique with limited knowledge and without a cloud server. Subsequently, keyword spotting was adopted as an example of the edge-AI application to evaluate inference accuracy. Investigations revealed that all four data augmentation types contributed to inference accuracy improvements, boosting data augmentation by 5.7 times rather than the one-shot boost without data augmentation recorded previously.”
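The abstract doesn't name the four augmentation types, so for anyone wondering what this looks like in practice, here is a minimal sketch of common waveform-level augmentations used when training keyword spotting models. To be clear, these particular transforms are my illustration only, not necessarily the ones MegaChips used:

import numpy as np

def time_shift(wav, max_frac=0.1):
    """Randomly shift the waveform left/right by up to max_frac of its length."""
    limit = int(len(wav) * max_frac)
    return np.roll(wav, np.random.randint(-limit, limit + 1))

def add_noise(wav, snr_db=20.0):
    """Mix in white noise at a target signal-to-noise ratio (dB)."""
    signal_power = np.mean(wav ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return wav + np.random.randn(len(wav)) * np.sqrt(noise_power)

def change_gain(wav, low=0.8, high=1.2):
    """Scale the amplitude by a random gain factor."""
    return wav * np.random.uniform(low, high)

def time_stretch(wav, low=0.9, high=1.1):
    """Naive resample-based speed perturbation (changes pitch as well)."""
    rate = np.random.uniform(low, high)
    idx = np.arange(0, len(wav), rate)
    return np.interp(idx, np.arange(len(wav)), wav)

# Example: generate 4 augmented copies of a 1-second, 16 kHz keyword clip
wav = np.random.randn(16000).astype(np.float32)  # stand-in for a real recording
augmented = [f(wav) for f in (time_shift, add_noise, change_gain, time_stretch)]

Each transform yields a slightly different copy of the same keyword, which is how a handful of recorded samples can be stretched into many training examples for on-chip fine-tuning.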
The following link takes you to a sign-in page to which I do not have access. However, whenever this occurs, opening the references can sometimes add insight into whether the paper is of interest. In this case it did, and I found:
References & Cited By

1. Akida enablement platforms, [online]. Available: https://brainchip.com/akida-enablement-platforms/
Context: “For example, the BrainChip was released as an evaluation chip AKD1000 [1] to execute finetuning with just one-shot input.”

2. M. Horowitz, “Computing's energy problem (and what we can do about it)”, IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), pp. 10-14, 2014.

3. S. I. Ikegawa, R. Saiin, Y. Sawada and N. Natori, “Rethinking the role of normalization and residual blocks for spiking neural networks”, Sensors, vol. 22, 2022.
Context for [2], [3]: “Subsequent investigations revealed that it realized a neuromorphic neural network, thereby reducing computational costs [2], [3].”

4. Overview of MetaTF, [online]. Available: https://doc.brainchipinc.com/index.html
Context: “The Akida neuromorphic ML Framework (MetaTF) [4] was adopted to establish the edge-AI computing environment.”

5. P. Warden, “Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition”, arXiv, 2018.

6. Model zoo performances, [online]. Available: https://doc.brainchipinc.com/zoo_performances.html

7. DS-CNN/KWS inference, [online]. Available: https://doc.brainchipinc.com/examples/general/plot_2_ds_cnn_kws.html

8. S. B. Davis and P. Mermelstein, “Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences”, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 28, pp. 357-366, 1980.

9. F. Chollet, “Xception: Deep learning with depthwise separable convolutions”, 2016, [online]. Available: http://arxiv.org/abs/1610.02357
Context for [5]–[9]: “Subsequently, the google speech command dataset was applied to train KWS [5]–[7], after which audio files were transformed to a Mel-frequency power spectrogram [8], thereby aiding supply to DS-CNN [9].”
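The context sentence for [5]–[9] describes a standard KWS front end: Speech Commands audio is converted to a Mel-frequency power spectrogram [8] and fed to a DS-CNN [9]. A minimal sketch of that feature-extraction step follows; the sample rate matches the Speech Commands dataset, but the window, hop, and Mel-band values are illustrative assumptions, not the paper's published settings:

import numpy as np
import librosa

SR = 16000          # Speech Commands clips are 1 s at 16 kHz
N_FFT = 640         # 40 ms analysis window (assumed)
HOP = 320           # 20 ms hop (assumed)
N_MELS = 40         # number of Mel bands (assumed)

def to_log_mel(wav: np.ndarray) -> np.ndarray:
    """Convert a waveform to a log-scaled Mel-frequency power spectrogram."""
    mel = librosa.feature.melspectrogram(
        y=wav, sr=SR, n_fft=N_FFT, hop_length=HOP, n_mels=N_MELS, power=2.0)
    return librosa.power_to_db(mel, ref=np.max)

wav = np.random.randn(SR).astype(np.float32)   # stand-in for a keyword clip
features = to_log_mel(wav)                     # 2-D time-frequency image
print(features.shape)                          # e.g. (40, 51)

The point of this step is that the DS-CNN then treats the spectrogram like a small image, which is what makes a lightweight depthwise-separable CNN [9] practical for keyword spotting at the edge.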
Based upon the above it is clear that MegaChips was using AKIDA as the edge environment and were able to show a marked improvement in inference accuracy using four different types of data augmentation for training purposes.
My opinion only DYOR
Fact Finder