The podcast with Dr Alexandre Marcireau from ICNS reminded me that I had long wanted to bring another podcast to your attention: the first episode of the new Brains and Machines podcast, in which Sunny Bains interviews Prof André van Schaik, Director of ICNS. It was recorded in the first week of May during the Capo Caccia Neuromorphic Workshop 2023 (organised by the Zurich Institute of Neuroinformatics).
Now the tech is way above my head, so I could be totally off, but I thought I’d mention it anyway and let the more tech-savvy among you give their opinions. Here are my thoughts:
Even though BrainChip is targeting the Edge AI market with Akida, I have been wondering whether André van Schaik could be hinting at Akida being used in the ICNS Deep South project, a large FPGA-based neuromorphic simulator that’s currently in the works. He talks about the commercial hardware they will be using to build it, and since the platform, which will consist of “a bit over a hundred FPGAs”, is scheduled to be completed by the end of the year, the timeline would align nicely with the recent release of Akida 2000.
ICNS seems to have originally started on this project back in 2021 in collaboration with Intel, set out at the time as a two-year proof-of-concept project:
More details here:
I am wondering, though, whether the ICNS researchers started out with Intel, then got their hands on the Akida 1000 reference chip, realised that BrainChip’s product(s) would be a much better choice for their envisaged large neural simulator, and hence switched to Akida for any further planning once their proof-of-concept model with Intel was done and dusted, or whether they have been waiting with bated breath for Akida 2000 to be released.
I find it very odd that André van Schaik does not mention Intel at all when talking about the Deep South project, given that they did start out with them in 2021. Also, the option to scale up and down sounds very familiar, doesn’t it? Does anyone know the price tag for an Akida 2000 reference chip?
Here are some excerpts from the podcast that I found relevant to my thoughts:
SB: More recently, you’ve been working on a very ambitious project to build large neural simulators using FPGAs. So, can you start by telling us what you hope to achieve with this project, and then I’ll ask you a little bit more about how it works.
AVS: Sure. What I’m trying to achieve with this project is a similar enabling technology to what GPUs were for neural networks. As I mentioned at the beginning, neural networks tanked in the 90s just as I wanted to start working on them, and came back when GPUs made it possible to simulate really large, deep neural networks in a reasonable amount of time.
Now a problem for spiking neural networks, which is what we’re interested in at the moment because brains are spiking neural networks, is that they are terrible to simulate on a computer—it’s very slow to do large networks. And so, I want to create a technology that enables you to simulate these large-scale spiking neural networks in a reasonable amount of time.
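(To make the “terrible to simulate” point concrete, here is a minimal, purely illustrative leaky integrate-and-fire sketch of my own, not anything from ICNS: in a naive time-stepped simulation, every neuron and every synapse has to be touched at every time step, whether or not anything spikes.)

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) network, naively time-stepped.
# Purely illustrative -- not ICNS's implementation or model.
N, steps, dt = 1_000, 1_000, 1e-3     # neurons, time steps, step size (s)
tau, v_th, v_reset = 20e-3, 1.0, 0.0  # membrane time constant, threshold, reset

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.05, (N, N))     # dense random synaptic weights
v = np.zeros(N)                       # membrane potentials
spikes = np.zeros(N)

for _ in range(steps):
    i_syn = W @ spikes                         # O(N^2) work every step, spikes or not
    v += (dt / tau) * (-v) + i_syn + rng.normal(0.0, 0.1, N)  # leak + input + noise
    spikes = (v >= v_th).astype(float)         # threshold crossing -> spike
    v[spikes > 0] = v_reset                    # reset neurons that fired

# Even this toy does ~N*N multiply-accumulates per step; scaling towards
# brain-like sizes (~1e11 neurons, ~1e15 synapses) is hopeless this way,
# which is why dedicated event-driven hardware is attractive.
```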
And I want to do it in such a way that we don’t build our own chips for this, but use commercial hardware instead. Because, similarly, the GPU was not developed for neural networks. It was a technology developed commercially for graphics processing on computers.
FPGAs, which are reconfigurable hardware, are another commercial technology that’s being used for various applications, and one application that we think they’re good for is simulating spiking neural networks.
The advantage of using commercial technology is that we don’t have to develop our own chips in a small university research group, where you can maybe do one generation of chip every so many years. Some of the other groups around the world that are doing spiking neural network accelerators are building their own chips, and you’re seeing that going from generation one to generation two typically takes five or six years. So, that’s a very slow iteration—and my contention is that we don’t yet know what it is exactly that we want on these hardware systems, on these accelerators. What neural model do we need? What plasticity model? What connectivity patterns? That should still be open.
So, the advantage of FPGAs is that we can reconfigure it, so it can be very flexible. We’ll start with an initial design, but then the design can be iterated, and it will be open source, as well. So, anybody in the world will be able to work with the machine, but also add things to it if we think we need it.
(…)
SB: Do you have a name for it, by the way?
AVS: Not really.
The original design we call Deep South in response to TrueNorth, but TrueNorth is getting really old now and is not really that active anymore, so we need to come up with a better name. But it was Deep South because we’re based in Australia, down under, so it was a nice balance to TrueNorth.
SB: So, the new FPGA machine—it’s essentially a simulator. Although you could use it to solve problems in its own right, right? You could use it as a machine to do stuff. That’s actually not its intention. The main intention is to understand the principles and to optimize models that could then be built into the next generation of optimized, small, power-efficient hardware to do those things in all sorts of applications, right?
So, this is almost like an intermediate step, an experimental platform, much in the same way that SpiNNaker and Loihi are intermediate steps and experimental platforms.
AVS: Absolutely. We just think that this platform based on FPGAs provides more flexibility for this intermediate step, to figure out what it is that we actually want on the system before you distil that down into a really efficient, low-power chip, if that’s what you need.
Also, the design is modular, so we can make really large systems, or you can use one FPGA and use a smaller system that you’d want on a robot or on a drone or something like that to do the processing locally. So, you can scale up and down with this system, as well. In theory, all the way to human brain level computation and beyond in terms of the number of operations per second that you’re doing.
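(For a sense of what “human brain level computation… in terms of the number of operations per second” might mean, here is a back-of-envelope estimate of my own, using commonly cited figures rather than anything from the podcast:)

```python
# Back-of-envelope: synaptic events per second in a human brain.
# All figures are rough, commonly cited estimates -- not from the podcast.
neurons = 86e9              # ~86 billion neurons
synapses_per_neuron = 1e4   # ~10,000 synapses per neuron
mean_rate_hz = 1.0          # average firing rate, often quoted as ~0.1-10 Hz

synaptic_ops = neurons * synapses_per_neuron * mean_rate_hz
print(f"{synaptic_ops:.1e} synaptic ops/s")   # ~8.6e14, roughly a peta-op/s
```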
SB: Now this is, this is a long-term project that you’re sort of at the beginning of, right? As I understand it, you’ve done some proof of concept, first iteration of your design, but you’ve got quite a bit of work to do to get to what you’ve just described, am I right?
AVS: Yes, but we’ve made a fair bit of progress behind the curtains, I guess. And so, we are looking to build a system that can do a human-brain-scale number of operations per second this year using commercially available FPGAs. That system hopefully will exist at the end of the year. Then it’s a matter of making it user friendly, because we don’t want people to have to do FPGA programming to use the system. So, it’s a matter of providing a software interface, a user interface, that allows people to specify the networks they want on it. That might take a little bit longer to be ready for people to use, but I hope that we’ll be pretty far along on that next year.
And again, I hope that we’ll do that in an open-source way, with contributions from a global community. An even longer-term aspect of it is: if you have the data of a billion neurons in your spiking neural network, how do you analyze that data? That’s an interesting research question. How do you visualize it? Those are clear areas where we’re going to need help, because I don’t really know the answer to those questions.
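(Nobody outside ICNS knows yet what that “software interface… that allows people to specify the networks” will look like. Purely as a hypothetical illustration, declarative SNN front-ends such as PyNN, which SpiNNaker uses, tend to look roughly like the sketch below; every name in it is made up:)

```python
from dataclasses import dataclass, field

# Hypothetical, made-up sketch of a declarative network description that a
# software front-end could compile onto FPGA hardware. Not a real API.
@dataclass
class Population:
    name: str
    size: int
    model: str = "LIF"            # neuron model chosen per population

@dataclass
class Projection:
    pre: str                      # source population
    post: str                     # target population
    connector: str = "random"     # connectivity pattern
    weight: float = 0.1

@dataclass
class Network:
    populations: list = field(default_factory=list)
    projections: list = field(default_factory=list)

net = Network(
    populations=[Population("input", 1_000), Population("hidden", 100_000)],
    projections=[Projection("input", "hidden", connector="random", weight=0.05)],
)
# The hard part -- a compiler/runtime mapping this onto one or many FPGAs --
# is exactly what would hide the FPGA programming from the user.
```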
SB: And I noticed, because we’re recording this at the Capo Caccia Neuromorphic Workshop, that you’re looking for postdocs and people to come and help you in this endeavor.
AVS: Yes, the more the merrier, basically. And, we have positions open in Sydney, pretty much constantly. It’s just hard to find the number of people that we need for these efforts.
And we’re in an interesting time at the moment in neuromorphic engineering where funding is easier to get than people, and that hasn’t always been the case.
But there’s a lot of interest in neuromorphic engineering and what it can do from industry, defense, all those non-academic players. That’s been a real change over the years. It used to be that I first had to explain to whoever came through the door what neuromorphic engineering was. Whereas now, company representatives contact me and ask what neuromorphic engineering can do for them or how we can collaborate.
And that’s a massive change that has happened over the last five years.
SB: So, you’re expecting that by the end of…certainly by the end of 2024, you would have people in different labs around the globe playing with your machine in the cloud, essentially.
AVS: Absolutely. Yes.
SB: And presumably, because it’s based on commercial FPGAs, if they wanted to have a local one, they could do that very simply as well, right? So, that’s your goal.
AVS: With the funding, I can buy one FPGA or a few FPGAs. These are high-end FPGAs, so they are about $10,000 each. So, it’s not something that everybody will buy, but at the same time, as a piece of university equipment, buying several of them is obviously possible. And our system will cost several million dollars to put all the hardware together—that’s a lot of money. But at the same time, it’s not impossible for somebody to replicate it somewhere else if they realize they need such a system at their university or at their company.
SB: So how many chips will be on the version of the system that you’re building right now?
AVS: A bit over a hundred FPGAs will be on that system.
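(My arithmetic, not theirs, but the quoted figures line up roughly:)

```python
# Rough cost check using the figures given in the interview.
fpgas = 100                 # "a bit over a hundred FPGAs"
price_each = 10_000         # "about $10,000 each"
print(f"${fpgas * price_each:,}")  # $1,000,000 -- the FPGAs alone; boards,
# interconnect, hosting and engineering presumably make up the rest of the
# quoted "several million dollars".
```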
(…)
SB: Now I don’t want to encourage you to become a betting man, but you talked about applications.
Looking forward over the next 10-15 years, that kind of timescale, which are the ones that you think are most likely to have neuromorphic elements to them, before the likes of you and I retire?
AVS: The safest bet for me would be neuromorphic vision systems. Sensing more broadly too, but vision systems are the ones that have developed the furthest, and I think there will be a fair bit of development in that area. At the moment, we use event cameras, as I described earlier. They are only one form of neuromorphic camera.
All each pixel does is detect changes in time. Biological retinas also do spatial processing: what are the neighboring photoreceptors doing? That’s not in these cameras; we can try and build that in. An advantage of the current cameras is that if you keep the camera still, only things that move will generate changes, and therefore you automatically extract the things that move.
But if you start moving the camera, everything changes all the time, which is actually a disadvantage for the cameras. We can build cameras that try and compensate for these things.
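(For anyone unfamiliar with event cameras, the principle is simple: each pixel independently fires an event whenever the log intensity it sees changes by more than a threshold. A toy sketch of my own, not any particular camera’s pipeline:)

```python
import numpy as np

# Toy event-camera model: a pixel fires an ON/OFF event when its log
# intensity changes by more than a threshold since its last event.
# Purely illustrative -- real sensors do this per pixel, in analog.
def events(frame, last_log, threshold=0.2):
    log_i = np.log(frame + 1e-6)
    diff = log_i - last_log
    on, off = diff > threshold, diff < -threshold
    last_log[on | off] = log_i[on | off]   # reset reference where events fired
    return on, off

rng = np.random.default_rng(0)
frame0 = rng.random((4, 4))                # a static scene...
last_log = np.log(frame0 + 1e-6)
frame1 = frame0.copy()
frame1[1, 2] *= 3.0                        # ...in which one pixel brightens
on, off = events(frame1, last_log)
print(on.astype(int))                      # only that one pixel fires an event
```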
We can build cameras that don’t just use visible light, but that do infrared or ultraviolet or hyperspectral versions of these cameras.
So, there’s a whole range of applications in sensing, vision sensing in particular, but we’re also doing work in the lab on audio and on olfaction, smell. I’m interested in tactile. I’m interested in the electric sense of the shark, or radar, or neuromorphic versions of that. So, I think that’s where we’ll see a lot of the first applications happening.
I’m hopeful that we will get applications out of neuromorphic computation with spiking neural networks, and that the system we build inspires things, as we saw with the GPUs. They reinvigorated the field, and once you have a critical mass, things go really, really fast and snowball. Progress has been so fast over the last decade in deep neural networks that we might trigger the same in spiking neural networks, but that’s much harder to predict, so I wouldn’t want to bet on that.
(…)
REC: Yeah, so definitely, Andre speaks to this, right. [He] indicates that what constituted a neuromorphic system in the olden days, if you will, is very different from what constitutes a neuromorphic system now. For example, back in the day, a neuromorphic system had to be hardware, had to be analog, and had to mimic, as parsimoniously as possible, the biological system being modelled.
Today, we have various models, like the Deep South that Andre speaks about, which is a strictly digital system. Back then, that would not have been considered neuromorphic.