Brain-inspired Computing Is Ready for the Big Time
Neuromorphic pioneer Steve Furber says it's just awaiting a killer app
Edd Gent
Edd Gent is a Contributing Editor for IEEE Spectrum.
Steve Temple (left) holding a SpiNNaker chip with Steve Furber (right) in front of a labelled plot of the chip.
Steve Furber
Efforts to build brain-inspired computer hardware have been underway for decades, but the field has yet to have its breakout moment. Now, leading researchers say the time is ripe to start building the first large-scale neuromorphic devices that can solve practical problems.
The neural networks that have powered recent progress in artificial intelligence are loosely inspired by the brain, demonstrating the potential of technology that takes its cues from biology. But the similarities are only skin deep, and the algorithms and hardware behind today’s AI operate in fundamentally different ways from biological neurons.
Neuromorphic engineers hope that by designing technology that more faithfully replicates the way the brain works, we will be able to mimic both its incredible computing power and its energy efficiency. Central to this approach is the use of spiking neural networks, in which computational neurons mimic their biological cousins by communicating using spikes of activity, rather than the numerical values used in conventional neural networks. But despite decades of research and increasing interest from the private sector, most demonstrations remain small scale and the technology has yet to have a commercial breakout.
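To make that distinction concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of simple spiking model most neuromorphic platforms support. The code and its parameters are purely illustrative and are not drawn from any particular chip or framework.

import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return a 0/1 spike train for a trace of input current (illustrative parameters)."""
    v = v_rest
    spikes = np.zeros_like(input_current)
    for t, i_in in enumerate(input_current):
        # The membrane potential leaks toward rest while integrating input.
        v += dt / tau * (v_rest - v) + i_in * dt
        if v >= v_thresh:      # threshold crossed: emit a spike...
            spikes[t] = 1.0
            v = v_reset        # ...and reset the membrane potential
    return spikes

# A constant input drives periodic firing; information is carried by the
# timing of discrete spikes rather than by continuous activation values.
current = np.full(200, 0.08)
print(int(simulate_lif(current).sum()), "spikes in 200 steps")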
In a paper published in Nature in January, some of the field’s leading researchers argue this could soon change. Neuromorphic computing, they write, has matured from academic prototypes to production-ready devices capable of tackling real-world challenges, and is now ready to make the leap to large-scale systems.
IEEE Spectrum spoke to one of the paper’s authors, Steve Furber, the principal designer of the ARM microprocessor—the technology that now powers most cellphones—and the creator of the SpiNNaker neuromorphic computer architecture.
In the paper you say that neuromorphic computing is at a critical juncture. What do you mean by that?
Steve Furber: We’ve demonstrated that the technology is there to support spiking neural networks at pretty much arbitrary scale, and there are useful things that can be done with them. The criticality of the current moment is that we really need some demonstration of a killer app.
The SpiNNaker project started 20 years ago with a focus on contributing to brain science, and neuromorphics is an obvious technology if you want to build models of brain cell function. But over the last 20 years, the focus has moved to engineering applications. And to really take off in the engineering space, we need some demonstrations of neuromorphic advantage.
In parallel over those 20 years, there’s been an explosion in mainstream AI based on a rather different sort of neural network. And that’s been very impressive and obviously had huge impacts, but it’s beginning to hit some serious problems, particularly in the energy requirements of large language models (LLMs). And there’s now an expectation that neuromorphic approaches may have something to contribute, by significantly reducing those unsustainable energy demands.
The SpiNNaker team assembles a million-core neuromorphic system. SpiNNaker
We are close to having neuromorphic systems at a scale sufficient to support LLMs in neuromorphic form. I think there are lots of significant application developments at the smaller end of the spectrum too. Particularly close to sensors, where using something like an event-based image sensor with a neuromorphic processing system could give a very low-energy vision system that could be applied in areas such as security and automotive and so on.
When you talk about achieving a large-scale neuromorphic computer, how would that compare to systems that already exist?
Furber: There are lots of examples out there already, like the large Intel Loihi 2 system, Hala Point. That’s a very dense, large-scale system. The SpiNNaker 1 machine that we’ve been running a service on [at the University of Manchester, UK] since 2016 had half a million ARM cores in the system, expanding to a million in 2018. That’s reasonably large scale. Our collaborators on SpiNNaker 2 [SpiNNcloud Systems, based in Dresden, Germany] are beginning to market systems at the 5-million-core level, and they will be able to run quite substantial LLMs.
Now, how much those will need to evolve for neuromorphic platforms is a question yet to be answered. They can be translated in a fairly simplistic way to get them running, but that simple translation won’t necessarily get the best energy performance.
So the hardware isn’t really the issue; it’s working out how to efficiently build something on top of it?
Furber: Yes, I think the last 20 years have seen proof-of-concept hardware systems emerge at the scales required. It’s working out how to use them to their best advantage that is the gap. And some of that is simply replicating the efficient and useful software stacks that have been developed for GPU-based machine learning.
It is possible to build applications on neuromorphic hardware, but it’s still unreasonably difficult. The biggest missing components are the high-level software design tools along the lines of TensorFlow and PyTorch that make it straightforward to build large models without having to go down to the level of describing every neuron in detail.
There’s quite a diversity of different neuromorphic technologies, which can sometimes make it hard to translate findings between different groups. How can you break down those silos?
Furber: Although the hardware implementation is often quite different, at the next level up there is quite a lot in common. All neuromorphic platforms use spiking neurons, and the neurons themselves are similar. You have a diversity of details at the lower levels, but that can be bridged by implementing a layer of software that matches those lower-level hardware differences to higher-level commonalities.
We’ve made some progress on that front, because within the EU’s Human Brain Project, we have a group that’s been developing the PyNN language. It is supported by both SpiNNaker, which is a many-core neuromorphic system, and the University of Heidelberg’s BrainScaleS system, which is an analog neural model.
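As an illustration of what that common layer looks like in practice, here is a minimal sketch using PyNN’s standard API. The back-end module you import (for example pyNN.nest or pyNN.brian2, or a SpiNNaker back end) depends on your installation, and the network and parameters here are purely illustrative.

import pyNN.nest as sim  # swap the back-end module to retarget the same script

sim.setup(timestep=1.0)

# 100 Poisson spike sources driving 100 leaky integrate-and-fire neurons
stimulus = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
neurons = sim.Population(100, sim.IF_cond_exp())
sim.Projection(stimulus, neurons, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.05))

neurons.record('spikes')
sim.run(1000.0)              # simulate one second of biological time
spikes = neurons.get_data()  # same call whichever back end ran the model
sim.end()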
But it is the case that a lot of neuromorphic systems are developed in a lab and used only by other people within that lab. And therefore they don’t contribute to the drive towards commonality. Intel has been trying to contribute through building the Lava software infrastructure on their Loihi system and encouraging others to participate. So there are moves in that direction, but it’s far from complete.
A member of the SpiNNaker team checks on the company’s million-core machine. Steve Furber
Opinions differ on how biologically plausible neuromorphic technology needs to be. Does the field need to develop some consensus here?
Furber: I think the diversity of the hardware platforms and of the neuron models that are used is a strength in the research domain. Diversity is a mechanism for exploring the space and giving you the best chance of finding the best answers for developing serious, large-scale applications. But once you do, yes, I think you need to reduce the diversity and focus more on commonality. So if neuromorphic is about to make the transition from a largely research-driven territory to a largely application-driven territory, then we’d expect to see that kind of thing changing.
If the field wants to achieve scale will it have to sacrifice a bit of biological plausibility?
Furber: There is a trade-off between biological fidelity and engineering controllability. Replicating the extremely simple neural models that are used in LLMs does not require a lot of biological fidelity. Now, it’s arguable that if you could incorporate a bit more of the biological detail and functionality, you could reduce the number of neurons required for those models by a significant factor. If that’s true, then it may well be worth ultimately incorporating those more complex models. But it is still a big research problem to prove that this is the case.
In recent years there’s been a lot of excitement about memristors—memory devices that mimic some of the functionality of neurons. Is that changing the way people are approaching neuromorphic computing?
Furber: I do think that the technologies that are being developed have the potential to be transformative in terms of improving hardware efficiency at the very low levels. But when I look at the UK neuromorphic research landscape, a very significant proportion of it is focused on novel device technologies. And arguably, there’s a bit too much focus on that, because the systems problems are the same across the board.
Unless we can make progress on the systems level issues it doesn’t really matter what the underpinning technology is, and we already have platforms that will support progress on the systems level issues.
The paper suggests that the time is ripe for large-scale neuromorphic computing. What has changed in recent years that makes you positive about this, or is it more a call to arms?
Furber: It’s a bit in-between. There is evidence it’s happening: there are a number of interesting startups in the neuromorphic space that are managing to survive. So that’s evidence that people with significant available funds are beginning to be prepared to spend on neuromorphic technology. There’s a belief in the wider community that neuromorphic’s time is coming. And of course, there are the huge problems facing mainstream machine learning on the energy front, a problem that is desperate for a solution. Once there’s a convincing demonstration that neuromorphics can change the equation, then I think we’ll see things beginning to turn.