Saw that guy live about a month ago. Laughed the whole night through.

D.I.L.L.I.G.A.F ...................... up to speed on this one @Rocket577
AKIDA BALLISTA
Come on @thelittleshort, tell us what Michael commented???

Perpetuation in action
You really are a top bloke.

Perpetuation in action
Like cricket umpires, they are doing a good job if no one notices them.

Hello everyone, sorry to interrupt the conversation. I was just wondering how things have been going since Dreddb0t was introduced. I've noticed that it has dealt with all reported posts since inception.
Also, please keep in mind that if you come across any content that you think violates the rules, you can click on the report button and Dreddb0t will make a decision within a minute, regardless of the time or day.
Come on Littleshort, tell us what Michael said???
Tell us please, the suspense is killing me.
All good Z.
Thanks @thelittleshort

Hi @Fact Finder - his comment was simply tagging in Blake Eastman from Nonverbal Group, which in itself is interesting in terms of the founder of this particular company being tagged into a BrainChip
More interesting is the inclusion of Kyongsik Yun on the Nonverbal Group website.
Kyongsik specialises in machine learning at NASA Jet Propulsion Laboratory.
Kyongsik Yun is a technologist at the Jet Propulsion Laboratory, California Institute of Technology, and a senior member of the American Institute of Aeronautics and Astronautics (AIAA).
His research focuses on building brain-inspired technologies and systems, including deep learning computer vision, natural language processing, brain-computer interfaces, and noninvasive remote neuromodulation.
What does it all mean? What is the link? I have no idea. The website is not phone friendly; maybe it is clearer viewed on a PC?
Rule 1 of flat-pack assembly:

Thanks @thelittleshort
In desperation I resorted to following the science which has served me well across my investments.
As a result I found this very recently published paper out of China.
While I have included the link, I don't think one needs to read more than the enclosed abstract to understand why AKIDA second generation with vision transformers has excited so many, particularly when you appreciate that, until AKIDA 2nd gen, even BrainChip said the reason they included CNN in their SNN design was that CNN had a clear advantage in vision processing.
According to this research paper, bringing SNNs together with vision transformers means this is no longer the case:
Deep Spiking Neural Networks with High Representation Similarity Model Visual Pathways of Macaque and Mouse
Liwei Huang, Zhengyu Ma, Liutao Yu, Huihui Zhou, Yonghong Tian
arXiv preprint arXiv:2303.06060, 2023
Deep artificial neural networks (ANNs) play a major role in modeling the visual pathways of primate and rodent. However, they highly simplify the computational properties of neurons compared to their biological counterparts. Instead, Spiking Neural Networks (SNNs) are more biologically plausible models since spiking neurons encode information with time sequences of spikes, just like biological neurons do. However, there is a lack of studies on visual pathways with deep SNNs models. In this study, we model the visual cortex with deep SNNs for the first time, and also with a wide range of state-of-the-art deep CNNs and ViTs for comparison. Using three similarity metrics, we conduct neural representation similarity experiments on three neural datasets collected from two species under three types of stimuli. Based on extensive similarity analyses, we further investigate the functional hierarchy and mechanisms across species. Almost all similarity scores of SNNs are higher than their counterparts of CNNs with an average of 6.6%. Depths of the layers with the highest similarity scores exhibit little differences across mouse cortical regions, but vary significantly across macaque regions, suggesting that the visual processing structure of mice is more regionally homogeneous than that of macaques. Besides, the multi-branch structures observed in some top mouse brain-like neural networks provide computational evidence of parallel processing streams in mice, and the different performance in fitting macaque neural representations under different stimuli exhibits the functional specialization of information processing in macaques. Taken together, our study demonstrates that SNNs could serve as promising candidates to better model and explain the functional hierarchy and mechanisms of the visual system.
View at arxiv.org: https://arxiv.org/abs/2303.06060
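For anyone wondering what the "neural representation similarity experiments" in that abstract look like in practice, below is a minimal sketch of one common representation-similarity metric, linear CKA (centred kernel alignment), applied to made-up model activations and neural recordings. The paper uses three similarity metrics; this is a generic illustration of the idea, not necessarily one of their exact metrics, and all the data here is random placeholder data.

```python
# Minimal sketch of a representation-similarity comparison (linear CKA).
# Illustrative only: random placeholder data stands in for SNN layer
# activations and recorded neural responses to the same stimuli.
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices.

    X: (n_stimuli, n_model_units)  e.g. flattened model layer activations
    Y: (n_stimuli, n_neurons)      e.g. recorded neural responses
    Returns a similarity score between 0 and 1.
    """
    # Centre each unit/neuron across stimuli.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)

# Hypothetical usage: compare one model layer against one cortical region.
rng = np.random.default_rng(0)
model_acts = rng.standard_normal((200, 512))   # 200 stimuli x 512 model units
neural_resp = rng.standard_normal((200, 80))   # 200 stimuli x 80 neurons
print(f"CKA similarity: {linear_cka(model_acts, neural_resp):.3f}")
```

In the paper's setting, a higher score for an SNN layer than for the matching CNN layer is what "almost all similarity scores of SNNs are higher than their counterparts of CNNs" refers to.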
This reveal of course does more than just confirm the reason for such excitement; it also gives weight to Peter van der Made's statement that with the release of AKIDA 2nd gen the roughly 3 year lead would extend out to about 5 YEARS.
Once Edge Impulse starts to publicly demonstrate the vision leap made possible by AKIDA 2nd gen, things will get very exciting, I suspect.
My opinion only DYOR
FF
AKIDA BALLISTA
At least up to mid-2021, with one exception, TI's patents indicate a conviction that ML was a software thing.

So much great information and links being posted here lately that I'm sure this has already been posted and I just can't keep up, but just in case it hasn't been:
Newsroom | news.ti.com
DALLAS, March 15, 2023 /PRNewswire/ -- To build on innovations that advance intelligence at the edge, Texas Instruments (TI) (Nasdaq: TXN) today introduced a new family of six Arm® Cortex®-based vision processors that allow designers to add more vision and artificial intelligence (AI) processing at a lower cost, and with better energy efficiency, in applications such as video doorbells, machine vision and autonomous mobile robots.
This new family, which includes the AM62A, AM68A and AM69A processors, is supported by open-source evaluation and model development tools, and common software that is programmable through industry-standard application programming interfaces (APIs), frameworks and models. This platform of vision processors, software and tools helps designers easily develop and scale edge AI designs across multiple systems while accelerating time to market. For more information, see www.ti.com/edgeai-pr.
"In order to achieve real-time responsiveness in the electronics that keep our world moving, decision-making needs to happen locally and with better power efficiency," said Sameer Wasson, vice president, Processors, Texas Instruments. "This new processor family of affordable, highly integrated SoCs will enable the future of embedded AI by allowing for more cameras and vision processing in edge applications."