While most of my reading today was about Rain Neuromorphics (a company building an analog neuromorphic chip, backed by Sam Altman, OpenAI and others, and previously by a Saudi Aramco-related venture fund until the US government stepped in), I'd also like to summarize my highlights from another article/interview by Sally Ward-Foxton with Mike Davies (director of Intel's neuromorphic computing lab) about Intel's Loihi 2:
"What Is Holding Back Neuromorphic Computing?"
- Mike Davies [...] told EE Times that the technology shows immense promise for reducing power consumption and latency versus current deep-learning–based neural networks
- this requires dedicated hardware accelerators
- there are still challenges with training regimes and software maturity
- Loihi 2 is intended for larger-scale, data-center-class systems
- "It started to blur the boundary between the pure neuromorphic approach and the traditional AI accelerator architecture"
- The best approach isn’t necessarily the one that most closely matches biology
"[...] by augmenting the pure biological approach and then applying backpropagation, I think this is going to lead to really exciting networks and capabilities."- “Our challenge is how to provide the right documentation and understanding all the caveats on how to get good results”
- some of the most popular commercial use cases have shifted towards transformer networks.
- "We do feel like we're fighting against the tide on recurrent neural networks," Davies said.
- However, he admits Loihi will need a solution for feedforward networks. Current work focuses on converting feedforward networks into recurrent networks [...]
- SpikeGPT's inventors said the model is competitive with deep learning networks but with 20× fewer operations and a corresponding reduction in power consumption
- Does this mean large language models (LLMs) have a future in the spiking domain?
- "The challenge is the attention stage is not compatible with most neuromorphic architectures... there's a matrix-matrix multiplication at the heart of transformers that is difficult to implement in neuromorphic architectures," Davies said. (SpikeGPT gets around this by feeding data points in sequentially; see the first sketch after this list.)
- "[...] while we're not there yet—we will need a silicon iteration to support it [...]"
- Davies highlights recent work on meta learning (learning how to learn, or changing the training algorithm based on the data) as an example of new approaches to training that could be beneficial
- Intel recently released a software package for doing prototype-based fast learning on chip: a module that can be combined with deep learning networks to do efficient last-layer learning (extracting features from a dataset and then using local neuroscience-inspired learning rules to learn those features online in a semi-supervised way, while, crucially, not forgetting what has previously been learned); see the second sketch after this list.
- “[...] So our pace of progress has been a little slower than I was hoping two years ago, and that has unfortunately become a limitation—software readiness has been holding back the results.”
- “We want to make sure that what people develop this time around is not disposable, it’s not just written up in a paper and then forgotten about, but it contributes to a body of code that people can use, adapt and carry forward,” he said.
- “There’s no question in my mind that we are on a commercialization path, and there are results—we are talking about three orders of magnitude improvement in energy delay product [a figure of merit combining energy and latency]—there should be a way to turn this into value for end applications,” Davies said.
- Most interest so far is coming from the space and aerospace industries, which are size-, weight- and power-sensitive
- Intel’s Loihi demonstration at Intel Innovation used a recurrent neural network to solve satellite scheduling—coordinating constellations of satellites to schedule cameras over points of interest—in a non-standard way using Loihi 2 chips.
- Davies points out that Loihi 2 is still at the research phase, but adds that next-gen silicon (in, say, the next five years) could be designed with more specific commercial applications in mind
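To make Davies' point about attention more concrete, here is a minimal NumPy sketch (my own illustration, not SpikeGPT's actual formulation, which uses an RWKV-style recurrence): self-attention needs a T×T matrix-matrix product over the whole sequence, while a recurrent model that is fed tokens one at a time only needs matrix-vector work per step, which maps more naturally onto event-driven hardware. All names and shapes below are made up for the example.

```python
# Illustration only: why attention implies a matrix-matrix multiply, while a
# recurrent model fed tokens sequentially only needs matrix-vector work per step.
import numpy as np

T, d = 8, 16                      # sequence length, embedding width
X = np.random.randn(T, d)         # token embeddings (toy data)
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))

# Transformer self-attention: a (T, T) matrix-matrix product over the whole
# sequence at once, the part Davies says maps poorly onto neuromorphic cores.
Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)                  # (T, T) matrix-matrix multiply
scores -= scores.max(axis=1, keepdims=True)    # numerically stable softmax
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
out_attention = attn @ V                       # another (T, T) @ (T, d) product

# Recurrent alternative: carry state and process one token per step, so each
# step is only matrix-vector work (the general idea behind "feeding data points
# in sequentially"; this toy tanh RNN is not SpikeGPT's actual recurrence).
Wh, Wx = np.random.randn(d, d), np.random.randn(d, d)
h = np.zeros(d)
outs = []
for t in range(T):
    h = np.tanh(Wh @ h + Wx @ X[t])            # matrix-vector only, per time step
    outs.append(h.copy())
out_recurrent = np.stack(outs)

print(out_attention.shape, out_recurrent.shape)   # (8, 16) (8, 16)
```

The point of the contrast is simply that the attention path cannot avoid the dense (T, T) product, whereas the sequential path never materializes anything larger than a state vector.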
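And for the prototype-based fast-learning idea, here is a minimal sketch of the general concept only (not Intel's actual package or its API): a frozen network supplies feature vectors, and a last layer keeps one running prototype per class, updated with a purely local rule, so adding a new class never overwrites what earlier classes have learned. The class and variable names are my own for illustration.

```python
# Minimal sketch of prototype-based last-layer learning on frozen features:
# a nearest-prototype classifier updated online with a local rule.
import numpy as np

class PrototypeLayer:
    def __init__(self, feat_dim, lr=0.1):
        self.protos = {}          # label -> running prototype vector
        self.lr = lr
        self.feat_dim = feat_dim

    def predict(self, feat):
        # Classify by nearest prototype (cosine similarity).
        if not self.protos:
            return None
        sims = {lbl: feat @ p / (np.linalg.norm(feat) * np.linalg.norm(p) + 1e-9)
                for lbl, p in self.protos.items()}
        return max(sims, key=sims.get)

    def update(self, feat, label):
        # Local, per-class update: a new class just adds a prototype, so the
        # prototypes of old classes are untouched (the "not forgetting" property).
        if label not in self.protos:
            self.protos[label] = feat.copy()
        else:
            self.protos[label] += self.lr * (feat - self.protos[label])

# Usage: features would come from a frozen deep network; random vectors stand in here.
layer = PrototypeLayer(feat_dim=64)
for _ in range(100):
    lbl = np.random.choice(["cat", "dog"])
    feat = np.random.randn(64) + (2.0 if lbl == "cat" else -2.0)
    layer.update(feat, lbl)
print(layer.predict(np.random.randn(64) + 2.0))   # likely "cat"
```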
Rain Neuromorphics - interesting links:
Nice post CMF,
A number of Mike Davies' replies/comments just highlight how far WE ARE ahead of Intel (years)... yet many articles that name both of us
would suggest we are competing neck and neck. That is so far from the truth: we've been commercial for some time now. Yes, we have a
research architecture being worked through as we speak, but there are already two generations out there and even more advanced technologies in the
pipeline. Going by what Mike has stated in that interview, I personally don't see Intel as a threat at all, and quite frankly, if their stock price
was even half the current price of Nvidia's, I believe they would have already made a play for us.
An important thing for many to remember is that, despite us spearheading this SNN technology and facing all the headwinds associated
with disruptive technologies, the time it takes to complete development cycles and actually get to market is 100% going to directly affect every other
player's ambitions as well. Things may speed up over time, but as it currently sits, I see our lead not necessarily extending but rather
maintaining the status quo, meaning no one company will leapfrog us; they could only consider taking us over in the near future.
Great to see Peter comment on LinkedIn... he rarely comments on that platform.
Tech.