Thanks
@thelittleshort
In desperation, I resorted to following the science, which has served me well across my investments.
As a result I found this very recently published paper out of China.
While I have included the link, I don’t think one needs to read more than the enclosed abstract to understand why AKIDA 2nd generation with vision transformers has excited so many. This is particularly clear when you appreciate that, until AKIDA 2nd gen, even Brainchip said the reason they included CNNs in their SNN design was that CNNs had a clear advantage in vision processing.
According to this research paper, bringing SNNs together with vision transformers means this is no longer the case:
Deep Spiking Neural Networks with High Representation Similarity Model Visual Pathways of Macaque and Mouse
Liwei Huang, Zhengyu Ma, Liutao Yu, Huihui Zhou, Yonghong Tian
arXiv preprint arXiv:2303.06060, 2023
Deep artificial neural networks (ANNs) play a major role in modeling the visual pathways of primate and rodent. However, they highly simplify the computational properties of neurons compared to their biological counterparts. Instead, Spiking Neural Networks (SNNs) are more biologically plausible models since spiking neurons encode information with time sequences of spikes, just like biological neurons do. However, there is a lack of studies on visual pathways with deep SNNs models. In this study, we model the visual cortex with deep SNNs for the first time, and also with a wide range of state-of-the-art deep CNNs and ViTs for comparison. Using three similarity metrics, we conduct neural representation similarity experiments on three neural datasets collected from two species under three types of stimuli. Based on extensive similarity analyses, we further investigate the functional hierarchy and mechanisms across species. Almost all similarity scores of SNNs are higher than their counterparts of CNNs with an average of 6.6%. Depths of the layers with the highest similarity scores exhibit little differences across mouse cortical regions, but vary significantly across macaque regions, suggesting that the visual processing structure of mice is more regionally homogeneous than that of macaques. Besides, the multi-branch structures observed in some top mouse brain-like neural networks provide computational evidence of parallel processing streams in mice, and the different performance in fitting macaque neural representations under different stimuli exhibits the functional specialization of information processing in macaques. Taken together, our study demonstrates that SNNs could serve as promising candidates to better model and explain the functional hierarchy and mechanisms of the visual system.
View at arxiv.org
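For readers unfamiliar with what the abstract means by spiking neurons that "encode information with time sequences of spikes", here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, a common SNN building block. This is an illustration only: the parameter values are arbitrary assumptions, and it is not a description of AKIDA's actual neuron model or of the models used in the paper.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Illustrative assumptions only: threshold, leak, and input values
# are arbitrary, not taken from the paper or from AKIDA.

def lif_spike_train(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Return a binary spike train: 1 whenever the membrane potential crosses threshold."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current  # leaky integration of input
        if potential >= threshold:
            spikes.append(1)       # neuron fires a spike...
            potential = reset      # ...and its potential resets
        else:
            spikes.append(0)
    return spikes

# A stronger constant input produces spikes more often,
# so information is carried in the timing and rate of spikes.
weak = lif_spike_train([0.3] * 10)
strong = lif_spike_train([0.6] * 10)
print(weak)    # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
print(strong)  # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```

The point of the sketch is that the output is a sequence of discrete spike events over time rather than a continuous activation value, which is the biological plausibility the abstract refers to.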
This reveal, of course, does more than confirm the reason for such excitement; it also gives weight to Peter van der Made’s statement that, with the release of AKIDA 2nd gen, the roughly 3-year lead would extend out to about 5 YEARS.
Once Edge Impulse starts to publicly demonstrate the vision leap made possible by AKIDA 2nd gen, things will get very exciting, I suspect.
My opinion only DYOR
FF
AKIDA BALLISTA