Ralph Etienne-Cummings directs the Computational Sensory-Motor Systems Laboratory at Johns Hopkins University. His research spans a range of electrical and computer engineering topics, including, but not limited to, mixed-signal VLSI systems, computational sensors, computer vision, neuromorphic engineering, smart structures, mobile robotics, and neuroprosthetic devices.
His research has convinced him of the need for neuromorphic computing.
(Bio via research-repository.uwa.edu.au)
Not sure if anyone else has had a chance to watch this webinar from this morning, but it was actually really interesting and entertaining in some parts. I’d recommend having a listen if you’ve got time.
All of the panel speakers have known each other for decades, so it was all done in good spirits.
We had Kwabena Boahen (Stanford Uni) and Ralph Etienne-Cummings (Johns Hopkins Uni), who were very pro-neuromorphic computing, our nemesis Yann LeCun (Facebook/Meta and New York Uni), who seemed to be a bit of a mixed bag in this session, and then Bill Dally (Nvidia), who is obviously anti-spiking neural networks and pro-CPU, GPU, etc.
Kwabena definitely turned out to be the best value during the panel discussion at the end. My favourite quote came after Bill failed miserably to answer a question about why the brain is so efficient, claiming it’s because it’s slow, and implying that spiking neural networks would need to slow things down to approach the brain’s efficiency. Kwabena’s reply:
“I would say Bill – you’re stuck in the cloud. If I’m on the edge on my phone, I don’t need to be fast… slow is good enough for me. And if that is going to allow me to detach from the cloud then I’m very happy with that.”
Going back to a couple of things mentioned in the presentations: Bill showed a comparison benchmarking different approaches using MLPerf and stated that spiking neural networks simply don’t compete on the standard benchmarks, and that if they were as good as claimed they should be “sweeping” the MLPerf numbers.
My question for the technically minded posters here – can MLPerf be used to benchmark Akida’s performance, or is it more suited to von Neumann architectures? I’d like to see how we stack up.
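For what it’s worth, the official route would be MLCommons’ LoadGen harness, but the core of MLPerf’s SingleStream scenario is just per-sample latency with the 90th percentile reported. Here’s a rough Python sketch of that measurement – the predict function here is a dummy stand-in I made up, not the actual Akida API:

```python
import time
import numpy as np

def singlestream_benchmark(predict_fn, samples, warmup=10):
    """Rough MLPerf SingleStream-style measurement: feed one sample
    at a time, report 90th-percentile latency and throughput."""
    for s in samples[:warmup]:           # warm-up runs, excluded from stats
        predict_fn(s)
    latencies = []
    for s in samples:
        t0 = time.perf_counter()
        predict_fn(s)                    # the call under test
        latencies.append(time.perf_counter() - t0)
    lat = np.asarray(latencies)
    return {"p90_latency_ms": float(np.percentile(lat, 90) * 1e3),
            "throughput_sps": float(len(lat) / lat.sum())}

# Demo with a dummy predict function; on real hardware you'd pass in
# whatever the SDK's inference call is (hypothetical, not the Akida API).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((64, 10))
    dummy_predict = lambda x: np.tanh(x @ w)
    data = [rng.standard_normal((1, 64)) for _ in range(200)]
    print(singlestream_benchmark(dummy_predict, data))
```

Nothing in the measurement itself assumes a von Neumann machine, so in principle any inference device can be pointed at it – the harder part is running the reference models faithfully.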
From Yann LeCun’s preso – the only thing worth noting is that he doesn’t believe STDP does anything useful; he says it’s a side effect of something complicated we don’t understand… some underlying rule. I’m hoping this is something that isn’t complicated for Peter van der Made and that he fully understands the value of STDP.
Anyone have any thoughts on this one?
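For anyone who hasn’t dug into STDP, the textbook pair-based rule is actually dead simple: a presynaptic spike just before a postsynaptic one strengthens the synapse, the reverse timing weakens it, and both effects decay exponentially with the spike-time gap. A minimal Python sketch (parameter values are illustrative only – I have no idea what Akida uses internally):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change.

    dt = t_post - t_pre in ms. Pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses; both decay exponentially
    with the spike-time difference.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

# A pre spike 5 ms before a post spike strengthens the synapse...
print(stdp_dw(5.0))    # ~ +0.0078
# ...and the reverse order weakens it.
print(stdp_dw(-5.0))   # ~ -0.0093
```

So the argument isn’t about whether the rule is complicated – it’s whether this local update on its own learns anything useful, or whether, as LeCun suggests, it’s just a visible side effect of some deeper objective being optimised.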
The only positive thing I took from Yann LeCun was that he did acknowledge that implementing learning on chip “isn’t a bad idea”, although he believes backpropagation is probably the way we should do it.
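To make his contrast with STDP concrete: backpropagation needs a global error signal routed back through the network, whereas the STDP rule above only needs the timing of the two spikes either side of a synapse. A toy single-unit gradient step, just to show the minimum that “learning via backpropagation” involves (illustrative only, not anyone’s chip implementation):

```python
import numpy as np

def backprop_step(w, x, y_true, lr=0.1):
    """One gradient-descent step on a single sigmoid unit.

    Unlike STDP, the update needs a global error term (y_pred - y_true)
    delivered to every weight; in a multi-layer net that error has to be
    propagated backwards through the whole network.
    """
    y_pred = 1.0 / (1.0 + np.exp(-w @ x))   # forward pass
    grad = (y_pred - y_true) * x            # dLoss/dw for binary cross-entropy
    return w - lr * grad                    # weight update

w = np.zeros(3)
w = backprop_step(w, np.array([1.0, 0.5, -0.2]), y_true=1.0)
print(w)  # weights nudged toward classifying this sample as 1
```

That global error routing is exactly what’s expensive to do in silicon, which is why purely local rules like STDP are attractive for on-chip learning in the first place.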
The summation at the very end of the webinar was quite good…
Ralph –
“I think we’ve convinced Yann. I think Yann is on our side now”.
Kwabena –
“I think Bill is the last man standing here”.
Yann –
“I’m with you for the next revolution. You know…maybe you guys will be there”.
Well…imo Akida already is there!...
And overall there was a lot of content in this webinar that validates the BrainChip approach and how far ahead of the pack we are with Akida. Lots of positives to take out of it.