A relevant interview with Chris Eliasmith of ABR makes for an interesting read:
It has a few parts worth highlighting, such as:
SB: Your group has also come up with Legendre Memory Units to represent time. Listeners may remember that BrainChip have been using this approach as well. Can you talk about what they are and why they’re useful?
CE: So we were really working on, ‘How does the brain represent time?’ But we also came to realize that, well, this is a problem in machine learning. Time series is a massive area of research, and it’s really hard. And people have all kinds of different recurrent networks, such as LSTMs and GRUs and a million variants on each of these. Transformers are now what people are using, where it’s not a recurrent network but it kind of spreads time out in space so you can just process stuff in a feedforward way. So there are all of these different approaches.
We started applying this core dynamical system to these problems. And so the obvious thing to do, I think, was basically:
you take that linear system—so this is the thing representing the temporal information—and then you put a non-linear layer afterwards, so you can then manipulate that temporal representation however you want. And you’ll learn that using normal backprop. And that’s what we call the Legendre Memory Unit.
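(For concreteness, here is a minimal sketch of the structure he's describing: a fixed linear system feeding a learned non-linear layer. The (A, B) matrices follow the Legendre Delay Network from the LMU paper (Voelker et al., 2019); the layer sizes, the forward-Euler discretization and the PyTorch framing are illustrative choices on my part, not ABR's implementation.

import numpy as np
import torch
import torch.nn as nn

def ldn_matrices(order: int, theta: float):
    """Continuous-time (A, B) of the Legendre delay system.

    theta is the length of the sliding window of input history
    the linear system represents.
    """
    q = np.arange(order)
    r = (2 * q + 1)[:, None] / theta                      # per-row scaling
    i, j = np.meshgrid(q, q, indexing="ij")
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * r  # (order, order)
    B = ((-1.0) ** q)[:, None] * r                        # (order, 1)
    return A, B

class LMUSketch(nn.Module):
    """Fixed linear memory followed by a trainable non-linear layer."""
    def __init__(self, order=16, theta=100.0, hidden=64, dt=1.0):
        super().__init__()
        A, B = ldn_matrices(order, theta)
        # Simple forward-Euler discretization of dx/dt = A x + B u.
        eye = np.eye(order)
        self.register_buffer("Ad", torch.tensor(eye + dt * A, dtype=torch.float32))
        self.register_buffer("Bd", torch.tensor(dt * B, dtype=torch.float32))
        # The non-linear layer that gets learned with normal backprop.
        self.readout = nn.Sequential(nn.Linear(order, hidden), nn.ReLU())

    def forward(self, u):                   # u: (batch, time) scalar signal
        x = u.new_zeros(u.shape[0], self.Ad.shape[0])
        outputs = []
        for t in range(u.shape[1]):
            # The fixed dynamics carry the temporal information...
            x = x @ self.Ad.T + u[:, t:t + 1] @ self.Bd.T
            # ...and the learned layer manipulates that representation.
            outputs.append(self.readout(x))
        return torch.stack(outputs, dim=1)  # (batch, time, hidden)

Only the readout layer is trained by backprop; the memory dynamics stay fixed, which is the split he describes: the linear system represents the temporal information, the non-linear layer afterwards manipulates it.)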
More recently, people have taken that exact same structure and called it a state-space model, for obvious reasons—because basically, having a linear dynamical system and then a non-linear layer, that’s a state-space model. And that’s what BrainChip is using, for instance.
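(In generic state-space notation, the structure he's describing is simply the below; the symbols are the standard textbook form, not anything specific to ABR's or BrainChip's implementations:

x[t+1] = A·x[t] + B·u[t]    (fixed linear dynamics: the temporal memory)
y[t]   = f(C·x[t])          (learned non-linear layer on top)

The LMU's particular contribution is the choice of A and B, derived so that x[t] optimally represents a sliding window of the input's history using Legendre polynomials.)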
Also,
CE: And so for the last couple of years, that’s what we’ve really been focused on: building a chip that can natively run state-space models, run them extremely efficiently—because it’s specifically designed to do that—and fabricate that chip, and go to customers and start getting into all the many different products that you might be interested in having that in.
So that’s something that we’re really excited about, because we just got the chip back at the end of August. We got the chip back, we tested it, and we had it up and running and setting world records for running state-space models at low power within a week and a half.
SB: Talk more about this chip. I’m very interested!
CE: It’s not a neuromorphic chip, in the sense that it’s not event-based. It is a neural accelerator, so it has compute right near the memory. The memory is actually a little bit exotic—it’s called MRAM, so magnetoresistive RAM, which means that it’s non-volatile—so you can load your network on there, you can basically leave it, shut off, and it draws almost no power until you need it—and then it can run the model, which is really cool.
We’re able to do full speech recognition.
So it could be sitting here basically typing out everything that I’m saying with about a hundred times less power than other edge hardware that’s available on the market—under 30 milliwatts. We can have it typing out whatever language you’re speaking in.
We can do other things with it, too. We can use it to control a device. So you can basically tell the device what you want to do, using natural language—you don’t have to memorize keywords or key phrases, you just say what you want. We’re also working with customers who want to use it to do things like monitor your biosignals—your heartbeat, your O2, your sweating, you know, anything. You can monitor all that and do on-device inference about the state of the person, warn them that they’re going to have a seizure if it’s EEG, or that they’re having some kind of heart palpitation, or what have you.
And we just started our early access program. So we’re working with customers, getting the hardware in their hands, helping them integrate that into their applications. We’re super excited about what this chip can do. It’s just kind of blowing the competition out of the water from a power and efficiency and performance perspective.
REC: An interesting aside, which is kind of like a reality check for me—a group of us went to speak to a DARPA director not long ago, and we were selling an idea, one of the primary focuses of which was that it was basically extremely low power. And she could not care less about the power. She cared mostly about latency. That is the thing. And this was an embodied AI application, Giulia. She said, “I will burn all the power that I need to if you can get me the speed and the decisions to happen as quickly as possible, and as effectively as possible.” Which for me was like, “What!?” I expected that the power efficiency argument would’ve been a winner on the day.
It was interesting to me. We were trying to argue for small drones with small brains and so on, and she was like, eh, but what’s the latency? When do you make the decision? Anyway, very interesting.
A separate thing I don't recall people mentioning is the YouTube presentation he did a few weeks ago, in which he went into his new chip in more detail. There were a few things worth noting, such as some specs (e.g. the chip uses a 22nm process node) and a comparison with competitors:
[Attachment 93607: competitor comparison slide from the presentation]
A few takeaways:
- This gives further confirmation that ABR's LMU and BrainChip's state-space models are very similar, directly competing technologies.
- The rush to get Akida Cloud out was likely partly a response to the threat ABR posed. Akida's Gen 2 chip is still in production; however, while ABR have a chip now, Akida Gen 2 hardware was available to customers from 5th August (see link below), slightly earlier than ABR's chip. ABR got their chip back at the end of August, then needed a few weeks of testing plus logistics before it could be provided to customers. The Akida 2 on the cloud was likely the FPGA prototype, so probably not the final version with maximum efficiency, but it allowed people to start testing at least a month earlier than ABR. In general, any new prototype chip can be released to customers this way, which provides an edge.
- In the YouTube clip, ABR compared their new chip to the 1st-gen Akida and said it wasn't comparable, as it couldn't do more complex tasks like automatic speech recognition. I would guess Gen 2 would have similar efficiency and performance to ABR's new chip, unless BrainChip have made significant improvements to hardware or algorithm efficiency (patent pending). Both chips are, from memory, being manufactured on 22nm, so performance should be comparable when these figures eventually become available.
- It's probably no surprise that ABR are working with customers to implement solutions. However, even if ABR have better hardware, it doesn't mean customers will rush to adopt it over Akida Gen 2. BrainChip has spent many years building up their ecosystem and working with customers, and has other advantages of its own. Furthermore, BrainChip is more focused on IP than chips, which means they are working with somewhat different customers.
- BrainChip's competitive advantage may not be as large as some people think. Akida Gen 3 will probably be important to ensuring they retain it.
BrainChip launches Akida Cloud, providing instant access to the latest neuromorphic technology for developers and edge AI innovators.
brainchip.com
LAGUNA HILLS, Calif. – Aug 5th, 2025 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based neuromorphic AI, today announced launch of the BrainChip Developer Akida Cloud, a new cloud-based access point to multiple generations and configurations of the company’s Akida™ neuromorphic technology. The initial Developer Cloud release will feature the latest version of Akida’s 2nd generation technology, …