I recall a poster not long ago saying one of the things mentioned by a Co rep was that companies understood why SNN was the next step but were still getting their heads around the processing.
Was watching a video over the weekend featuring Rodolphe Sepulchre and it was interesting to get an actual researcher / academic's insight into basic neuromorphic thinking.
Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control – positive and negative – as an underlying principle of the mixed digital and analog brain signals, the role of neuromodulation as a controller, applying these principles to Eve Marder’s lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how “If you wish to contribute original work, be prepared to face loneliness,” among other topics.
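Side note for anyone wondering what that "mixed feedback" idea actually looks like: below is a rough Python sketch of my own, nothing from the interview itself, using the textbook FitzHugh-Nagumo neuron model, where a fast positive-feedback term and a slow negative-feedback term together produce spiking. All names and parameter values are just illustrative.

```python
import numpy as np

# Illustrative sketch only (not from the interview): the FitzHugh-Nagumo model,
# a textbook example of "mixed feedback" -- a fast positive feedback (the cubic
# v term) balanced against a slow negative feedback (the recovery variable w).

def fitzhugh_nagumo(duration=200.0, dt=0.01, I_ext=0.5, a=0.7, b=0.8, eps=0.08):
    steps = int(duration / dt)
    v, w = -1.0, -0.5                  # membrane-like variable and recovery variable
    trace = np.empty(steps)
    for i in range(steps):
        dv = v - v**3 / 3 - w + I_ext  # fast positive feedback: drives the spike upstroke
        dw = eps * (v + a - b * w)     # slow negative feedback: pulls the neuron back down
        v += dt * dv
        w += dt * dw
        trace[i] = v
    return trace

trace = fitzhugh_nagumo()
print(f"Peak of the membrane-like variable over the run: {trace.max():.2f}")
```

The point is just that the spiking behaviour comes out of the interplay between the two feedback loops, which is the mixed positive/negative feedback theme he keeps returning to.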
Video at end of post (go to around the 53 min mark) and the transcript can be found
HERE, though it is a bit disjointed as Rodolphe has an accent so the words don't always match perfectly.
The crux was in his section about neuromorphics / barriers, where he states industry is very excited about neuromorphics; however, whilst the technology is well ahead of us (researchers), the theory behind it lags and is not understood that well yet.
This to me lends support to my first sentence, in that industry wants & needs SNN / neuromorphic but is struggling a little to get its head around it entirely. This also dovetails with BRN's recent efforts with academia, empowering and accelerating the next gen of engineers & developers who
WILL understand it and drive the uptake forward.
Adds weight to how ahead of the curve BRN, PVDM, Anil et al are.
Just need someone in industry to take the leap and say right...we get it...here's how we can use Akida and commit publicly.
A comment that Rodolphe also made was:
"And that’s why there is such a, an interest in the industry. And, and so I’m,
I’m not worried about, you know, the potential of, um, neuromorphic. Um, I’m more worried about, um, the pace of developments of the theory. It’s very slow <laugh>, um, partly because, you know, most of people work. I mean, this is a bandwagon effect.
So nowadays it’s, you know, far easier to get a job and just to develop another deep neural network."
He also states:
"And at least my understanding of neuromorphic is that it’s precise leader, the mixture of the tool that’s that is fundamental to neuromorphic. And the truth is that we don’t have a theory for that.
We, we don’t, we don’t know how to handle spikes. So we, some people handle spikes in a statistical way. Some people say that spikes are irrelevant. Some people say that each single spikes is hugely important, but I mean,
this diversity of, uh, almost opinions, I would say <laugh> is just telling us that we don’t have a good theory to handle, um, spiking information technology at the moment."
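To unpack the "statistical way vs every single spike matters" split in that quote, here's another rough Python sketch of my own (again, not from the talk): the same spike train read two ways, once boiled down to an average firing rate and once keeping the exact timing of every spike. Everything in it is made up for illustration.

```python
import numpy as np

# Illustrative only: two ways of reading the same spike train, echoing the
# "statistical vs. every-spike-matters" split Sepulchre describes.

rng = np.random.default_rng(0)
duration_s = 1.0
spike_times = np.sort(rng.uniform(0.0, duration_s, size=42))  # made-up spike train

# 1) "Statistical" (rate-coding) view: only the average rate matters.
firing_rate_hz = len(spike_times) / duration_s

# 2) "Every spike matters" (temporal-coding) view: the precise timing carries
#    the information, e.g. via the inter-spike intervals themselves.
inter_spike_intervals = np.diff(spike_times)

print(f"Rate-code summary: {firing_rate_hz:.0f} Hz")
print(f"First few inter-spike intervals (s): {np.round(inter_spike_intervals[:3], 4)}")
```

The rate view throws the timing away, the timing view keeps it, and as he says there's no agreed theory yet on which reading is the right one for spiking / neuromorphic hardware.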
He speaks of the Intel chip (obviously this is out there in the research community as we know) and event-based cameras, and speaks highly of these:
"That's a tough question <laugh>. Talking about the future, it's always difficult.
My experience is that technology, very often, is way ahead of us. And when I say us, I mean researchers. So I think that the theory of neuromorphic design is lagging behind the technology of neuromorphic design by a very significant margin. Nowadays, there is a huge interest from the industry for neuromorphic. Intel is building neuromorphic chips, and
the event-based camera was commercialized just a few years ago, but I think it's a complete revolution in the computer vision industry and community. But the theory lags behind. Why? Because what we have on the table, what we learn as students, is a sort of double set of tools. And you pick your digital tools or your analog tools from two different bags. And you do that at every level in every discipline."