Spoiler Alert: Mike Davies does a fabulous job here completely ignoring the fact that Brainchip has a commercially available neuromorphic processor with clearly defined applications and use cases.
Mike Davies, Intel Labs: ‘We’re reaching the boundaries of basic computing’
The man at the head of the world’s largest neuromorphic system is aiming to mimic the human brain to increase computing capacity and efficiency in meeting the demands of a new era
RAÚL LIMÓN
MAY 25, 2024 - 06:05 CEST
Mike Davies, director of neuromorphic computing at Intel Labs and head of development of the world's largest neuromorphic system. INTEL
The constant increase in network traffic (up 22% in 2023 compared to 2022, according to DE-CIX) and the new computational demands of artificial intelligence are pushing conventional systems to their limit. New approaches are needed, and quantum computing is not yet a viable option. Electronics company Intel is among the most advanced in the development of neuromorphic systems, a meeting of biology and technology that seeks to imitate the way human beings process information. The firm is joined in the race toward more effective and efficient processing by IBM, Qualcomm, and research centers such as Caltech (where Carver Mead originated the concept), MIT, Germany's Max Planck Institute for Neurobiology of Behavior, and Stanford University.
This month, Intel announced that it had created the world's largest neuromorphic system: Hala Point, with 1.15 billion artificial neurons and 1,152 Loihi 2 processors (chips), consuming a maximum of 2,600 watts and with a processing capacity equivalent to that of an owl's brain. A study published in IEEE Xplore describes it as more efficient and better performing than systems based on central processing units (CPUs) and graphics processing units (GPUs), the conventional computing engines.
Mike Davies, born in Dallas and turning 48 in July, is the director of neuromorphic computing at Intel Labs and responsible for its latest advances, which look set to shape the immediate future of computing.
Question. What is a neuromorphic system?
Answer. It’s a computer architecture design that’s inspired by the modern understanding of how brains operate, which means that we are discarding seven or eight decades of conventional computer architecture understanding. We’re trying to take the principles from modern neuroscience that apply to chips and systems we can build today and create something that operates and processes information more like a brain does.
Q. How does it work?
A. If you were to open up the system, the chips, you would see differences, some of them very striking, in the sense that there’s no separate memory: the computing and processing elements and the memory are integrated together. Our Hala Point system, for example, is a three-dimensional grid of chips. It’s similar to how, if you open up a brain, everything is communicating with everything. A neuron will communicate across the brain to another set of neurons that it is connected to. In a traditional system, you have memory sitting next to a processor and the processor is continuously reading out of the memory.
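To make that contrast concrete, here is a minimal, purely illustrative Python sketch of the event-driven, spiking style of computation Davies describes, in which each neuron keeps its own weights and state locally and only does work when spike events arrive. It is a toy model, not Intel's Lava software stack or the actual Loihi 2 hardware model, and every name and parameter in it (LIFNeuron, threshold, leak) is invented for illustration.

```python
# Toy sketch (not Intel's Lava SDK): event-driven leaky integrate-and-fire
# neurons. Each neuron holds its own weights and membrane potential, so
# "memory" and "compute" sit together, and work only happens on spike events.
import random

class LIFNeuron:
    def __init__(self, n_inputs, threshold=1.0, leak=0.9):
        # Local state: synaptic weights and membrane potential live with the neuron.
        self.weights = [random.uniform(0.0, 0.5) for _ in range(n_inputs)]
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def receive(self, spikes):
        # Only inputs that actually spiked contribute; silence costs nothing.
        for i in spikes:
            self.potential += self.weights[i]

    def step(self):
        # Leak the potential, then emit a spike event if the threshold is crossed.
        self.potential *= self.leak
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True
        return False

# A small layer of neurons driven by sparse input spike events.
random.seed(0)
layer = [LIFNeuron(n_inputs=8) for _ in range(4)]
for t in range(5):
    input_spikes = [i for i in range(8) if random.random() < 0.2]  # sparse events
    out_spikes = []
    for idx, neuron in enumerate(layer):
        neuron.receive(input_spikes)
        if neuron.step():
            out_spikes.append(idx)
    print(f"t={t} in={input_spikes} out={out_spikes}")
```

In the traditional setup Davies contrasts this with, a processor would instead fetch a full weight matrix from separate memory and multiply it against every input on every step, whether or not anything changed.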
Q. Is this model necessary because we have reached the boundaries of conventional computing?
A. There’s lots of progress being made in AI and in deep learning, and it’s very exciting. But it’s hard to see how these trends that we observe in the research will continue when you see their increases in computing requirements. These AI models are growing at exponential rates, far faster than the manufacturing advances that are being made. That really is reaching the limit of what basic computer architecture can provide. And also, if we look at just the power efficiency of these traditional AI chips and systems, compared to the brain, there are many orders of magnitude of difference in power efficiency. So, it’s not so much that traditional computer architectures are not capable of providing great gains in computing and AI; it’s more that we’re looking to a broader class of functionality, being able to have computers that operate more like the brain, and do so in a very efficient way.
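As a rough illustration of the scaling mismatch Davies describes, the arithmetic sketch below assumes compute demand doubling every six months and hardware capability doubling every two years; both rates are assumptions chosen only to show how quickly an exponential gap opens, not figures from Intel or the interview.

```python
# Purely illustrative arithmetic (assumed rates, not Intel figures): compare
# exponential growth in AI compute demand against slower hardware gains.
YEARS = 4
compute_doublings = YEARS * 12 / 6    # assumption: demand doubles every 6 months
hardware_doublings = YEARS * 12 / 24  # assumption: hardware doubles every 24 months

demand_growth = 2 ** compute_doublings      # 2^8 = 256x
hardware_growth = 2 ** hardware_doublings   # 2^2 = 4x
print(f"After {YEARS} years: demand x{demand_growth:.0f}, "
      f"hardware x{hardware_growth:.0f}, "
      f"gap x{demand_growth / hardware_growth:.0f}")
```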
Q. Is power efficiency the principal advantage of neuromorphic chips?
A. It is one of the main ones. There’s a very dramatic difference between the brain’s efficiency and traditional computer architecture efficiency. But neuromorphic architectures can provide performance advantages as well. We think of GPUs as incredibly high-performance devices, but in fact, they’re only high performance if you have a very, very large amount of data to process, and you’re processing it in what we call a batch mode, where all of this data is available on a disk or right next to the processor, so that it can be read. But if data is arriving from sensors, from cameras or videos in real time, then the efficiency and performance of traditional architectures is much lower. That’s where neuromorphic architectures can actually provide a great increase in speed as well as efficiency.
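A small sketch, with invented timing numbers, of why batch mode favors throughput while event-driven processing favors latency when data arrives one sample at a time from a sensor; none of these figures come from Intel or the interview.

```python
# Illustrative sketch (assumed numbers, not measurements): batch vs. event-driven
# latency when samples arrive one at a time from a sensor.
BATCH_SIZE = 32
ARRIVAL_INTERVAL_MS = 10   # assumption: a new sensor frame every 10 ms
BATCH_COMPUTE_MS = 8       # assumption: one large, efficient batched call
SINGLE_COMPUTE_MS = 2      # assumption: one small per-event update

# Batch mode: the first sample waits for 31 later arrivals before compute starts.
batch_worst_case_latency = (BATCH_SIZE - 1) * ARRIVAL_INTERVAL_MS + BATCH_COMPUTE_MS
# Event-driven mode: each sample is processed as soon as it arrives.
event_latency = SINGLE_COMPUTE_MS

print(f"batch worst-case latency : {batch_worst_case_latency} ms")  # 318 ms
print(f"event-driven latency     : {event_latency} ms")             # 2 ms
```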
Q. Does artificial intelligence need a neuromorphic system to grow?
A. That’s what we believe. Of course, it’s research. It’s unclear today exactly how to deploy this commercially. There are many research problems still to be solved in terms of the software and the algorithms. Many of these conventional approaches don’t run natively on neuromorphic hardware, because it’s a completely different programming model. We do believe that this is the right path forward to achieve the gains that we need in power efficiency and in performance for these kinds of workloads. But it is still an open question.
Q. Will we see a neuromorphic chip in a personal computer or smartphone?
A. I think so; it’s a matter of time frame. You won’t see that in the next year, but the technology will mature and I think that you will see it deployed into edge devices [data processing that takes place close to its origin to ensure speed and efficiency] or mobile devices, autonomous vehicles, drones or your laptop. Our Hala Point device is designed for the data center. It’s a box the size of a large microwave. But if we look at nature, you find brains of all sizes. Insect brains are very impressive, even at that small scale. And then you have, of course, the human brain. We are pursuing both directions of that research. We believe that the commercialization will happen in the edge devices first, but there’s a need to keep pushing and doing research at the large scale.
At the data center, we will probably be able to see these systems in five years
Q. When will they be ready?
A. It’s hard to predict because there are still open research questions. At the data center, it’s probably something like five years. We also see in the future that everything will need to operate off of battery power, and the power savings that neuromorphic can offer are extremely important. There are also somewhat less obvious applications, like wireless base stations for cell phone infrastructure. We are working with Ericsson to better optimize communication channels.
Q. Are neuromorphic systems and quantum computing complementary?
A. I think they’re complementary in some respects. Where they’re very different, I would say, is in the time frame. Quantum computing is looking at device-level manufacturing innovation and trying to scale that up. It’s clear that what it offers is very novel and impressive. It’s also very unclear what the programming model of quantum will be, and what kinds of problems or workloads it will support once it can be scaled all the way up. Neuromorphic computing is available today, and it’s very good for AI-type workloads. But there is, interestingly, an intersection in the application space of quantum and neuromorphic. That is where it’s interesting to think about solving hard optimization problems and allowing people to experiment and prototype and learn how to program these kinds of systems.
Implanting neuromorphic chips in the brain is a very natural application of neuromorphic computing, because this architecture is behaving just like neurons, so it would naturally speak the language of our brain
Q. Could we see neuromorphic systems in our brain?
A. There are some researchers interested in the neuroprosthetic application, which would mean trying to repair problems, pathologies in the brain where there has been some loss of function, to bring back control over your body. I would say that is very early-stage research, but I think in the long term, it is a very natural application of neuromorphic computing, because this architecture is behaving just like neurons, so it would naturally speak the language of our brain.
Q. What kind of brain is comparable to the systems that are currently available?
A. In terms of the numbers of neurons, it’s similar to an owl brain. But if you focus on the cortex, which is where much of the higher-order intelligence happens, it’s about the size of a capuchin monkey cortex. Many of us in this research field have the human brain in mind as a kind of vision for the scale of system we’d like to build. But we’re not trying to get there too fast. We need to know how to make it useful. And that’s why this system is still a research tool so that we can continue to experiment with it.
Q. In what concrete cases are these systems more effective?
A. In finding the best path through a map, we see speed-ups of up to 50 times compared to the best conventional solvers. In terms of energy, they can reach levels around 1,000 times more [efficient].
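For context, the "best path through a map" problem Davies refers to is classically solved with shortest-path algorithms such as Dijkstra's. The sketch below is only that conventional baseline, with an invented toy graph; the 50-times speed-up and roughly 1,000-times energy figures are the interview's comparison of neuromorphic solvers against solvers of this kind, not something this snippet measures.

```python
# Conventional baseline for the "best path through a map" problem: plain
# Dijkstra shortest path on a small weighted graph (toy data, invented weights).
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, edge_cost), ...]} -> shortest cost from source."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Tiny illustrative map.
road_map = {
    "A": [("B", 4), ("C", 1)],
    "C": [("B", 2), ("D", 5)],
    "B": [("D", 1)],
}
print(dijkstra(road_map, "A"))  # shortest cost from "A" to every reachable node
```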
Q. Could Europe take advantage of this new technology to gain sovereignty in chipmaking, given that it is currently dependent on other continents?
A. If we look into the future, there’s much innovation that we will need over the long term if we’re to achieve the size and the efficiency of real brains in nature, which remain incredibly impressive. There’s still a long way to go, and to get there we do need manufacturing innovation. There need to be new devices and new memory technologies to make truly brain-like chips technologically possible. It’s not clear that any one geographic region has an advantage in this domain, so it is an opportunity. High technology always involves innovation and nothing ever stays static. There’s always a need for new breakthroughs, and it’s unpredictable where they may come from.