A relevant interview with Chris Eliasmith of ABR makes for an interesting read. It has a few relevant parts, such as:
SB: Your group has also come up with Legendre Memory Units to represent time. Listeners may remember that BrainChip have been using this approach as well. Can you talk about what they are and why they're useful?
CE: So we were really working on, "How does the brain represent time?" But we also came to realize that, well, this is a problem in machine learning. Time series is a massive area of research, and it's really hard. And people have all kinds of different recurrent networks, such as LSTMs and GRUs and a million variants on each of these. Transformers are now what people are using, where it's not a recurrent network but it kind of spreads time out in space so you can just process stuff in a feedforward way. So there are all of these different approaches.
We started applying this core dynamical system to these problems. And so the obvious thing to do, I think, was basically: you take that linear system (this is the thing representing the temporal information) and then you put a non-linear layer afterwards, so you can then manipulate that temporal representation however you want. And you'll learn that using normal backprop. And that's what we call the Legendre Memory Unit.
More recently, people have taken that exact same structure and called it a state-space model, for obvious reasons: basically, having a linear dynamical system and then a non-linear layer, that's a state-space model. And that's what BrainChip is using, for instance.
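The structure he describes (a fixed linear dynamical system acting as the temporal memory, feeding a learned non-linear layer) can be sketched in a few lines. This is only an illustration: the matrices below are random placeholders, not the actual Legendre-derived LMU matrices, and in practice the readout weights would be trained with backprop as he notes.

```python
import numpy as np

# Minimal sketch of the structure described above: a linear state-space
# memory x[t] = A x[t-1] + B u[t], followed by a non-linear layer on the
# state. NOTE: A, B, W are random placeholders for illustration only,
# not the actual Legendre (LMU) matrices; W would normally be learned.

rng = np.random.default_rng(0)
d_state, d_in, d_out = 8, 1, 16

A = 0.1 * rng.standard_normal((d_state, d_state))  # state transition (fixed)
B = rng.standard_normal((d_state, d_in))           # input projection (fixed)
W = 0.1 * rng.standard_normal((d_out, d_state))    # readout weights (learned)

def lmu_like_layer(u_seq):
    """Run the linear memory over a sequence, with a tanh readout per step."""
    x = np.zeros(d_state)
    ys = []
    for u in u_seq:
        x = A @ x + B @ u          # linear dynamical system: the temporal memory
        ys.append(np.tanh(W @ x))  # non-linear layer manipulating that memory
    return np.stack(ys)

y = lmu_like_layer(np.ones((5, d_in)))
print(y.shape)  # (5, 16)
```

The key design point, matching the quote, is that only the non-linear readout needs to be learned; the linear memory can stay fixed, which is what makes the structure cheap to run on dedicated hardware.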
Also,
CE: And so for the last couple of years, that's what we've really been focused on: building a chip that can natively run state-space models, run it extremely efficiently (because it's specifically designed to do that), fabricate that chip, and go to customers and start getting into all the many different products that you might be interested in having that in.
So that's something that we're really excited about, because we just got the chip back for it at the end of August. We got the chip back, we tested it, and we had it up and running and setting world records in low-power running state-space models within a week and a half.
SB: Talk more about this chip. I'm very interested!
CE: It's not a neuromorphic chip, in the sense that it's not event-based. It is a neural accelerator, so it has compute right near the memory. The memory is actually a little bit exotic: it's called MRAM, magnetoresistive RAM, which means that it's non-volatile. So you can load your network on there, you can basically leave it, shut off, and it draws almost no power until you need it, and then it can run the model, which is really cool.
We're able to do full speech recognition. So it could be sitting here basically typing out everything that I'm saying with about a hundred times less power than other edge hardware that's available on the market: under 30 milliwatts. We can have it typing out whatever language you're speaking in.
We can do other things with it, too. We can use it to control a device. So you can basically tell the device what you want to do, using natural language; you don't have to memorize keywords or key phrases, you just say what you want. We're also working with customers who want to use it to do things like monitor your biosignals: your heartbeat, your O2, your sweating, you know, anything. You can monitor all that and do on-device inference about the state of the person, warn them that they're going to have a seizure if it's EEG, or that they're having some kind of heart palpitation, or what have you.
And we just started our early access program. So we're working with customers, getting the hardware in their hands, helping them integrate that into their applications. We're super excited about what this chip can do. It's just kind of blowing the competition out of the water from a power and efficiency and performance perspective.
REC: An interesting aside, which is kind of like a reality check for me: a group of us went to speak to a DARPA director not long ago, and we were selling an idea of which one of the primary focuses was that it was basically extremely low power. And she could not care less about the power. She cared mostly about latency. That is the thing. And this was an embodied AI application, Giulia. She said, "I will burn all the power that I need to if you can get me the speed and the decisions to happen as quickly as possible, and as effectively as possible." Which for me was like, "What!?" I expected that the power efficiency argument would've been a winner on the day.
It was interesting to me. We were trying to argue for small drones with small brains and so on, and she was like, eh, but what's the latency? When do you make the decision? Anyway, very interesting.
A separate thing I don't recall people mentioning was the YouTube presentation he did a few weeks ago, in which he went into his new chip in more detail. There were a few things worth noting, such as some specs (e.g. the chip uses a 22nm process node) and a comparison with competitors:
[Attachment 93607: comparison with competitors]
A few takeaways:
- This gives further confirmation that ABR's LMUs and BrainChip's state-space models are very similar, competing technologies.
- The rush to get Akida Cloud out was likely partly a response to the threat ABR posed. Akida's Gen 2 chip is still in production. However, while ABR have a chip now, Akida Gen 2 hardware was available to customers from 5th August (see link below), slightly earlier than ABR's chip. ABR got their chip back at the end of August, then had a few weeks of testing plus logistics before it could be provided to customers. Akida 2 would likely have been the FPGA prototype, so probably not the final version with maximum efficiency, but it allowed people to test at least a month earlier than ABR. In general, any new prototype chip can be released to customers this way, which provides an edge.
- In the YouTube clip, ABR were comparing their new chip to the 1st-gen Akida and saying it wasn't comparable: it couldn't do more complex tasks like automatic speech recognition. I would guess Gen 2 would have similar efficiency/performance to ABR's new chip, unless there are significant improvements that BrainChip have made to hardware or algorithm efficiency (patent pending). Both chips are being manufactured in 22nm from memory, so performance should be comparable when these values eventually become available.
- It's probably no surprise that ABR are working with customers to implement solutions. However, even if ABR have better hardware, it doesn't mean customers will rush to adopt it over Akida Gen 2. BrainChip has spent many years building up their ecosystem and working with customers, and has other advantages of its own. Furthermore, BrainChip is more focused on IP than chips, which means they are working with some different customers.
- BrainChip's competitive advantage may not be as high as some people think. Akida Gen 3 will probably be important to ensuring they retain it.
LAGUNA HILLS, Calif. - Aug 5th, 2025 - BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world's first commercial producer of ultra-low power, fully digital, event-based neuromorphic AI, today announced the launch of the BrainChip Developer Akida Cloud, a new cloud-based access point to multiple generations and configurations of the company's Akida™ neuromorphic technology. The initial Developer Cloud release will feature the latest version of Akida's 2nd generation technology...
BrainChip Launches Akida Cloud for Instant Neuromorphic Access
BrainChip launches Akida Cloud, providing instant access to the latest neuromorphic technology for developers and edge AI innovators. (brainchip.com)
| Date | Event | Capabilities / Significance | Source |
|---|---|---|---|
| 10 Sept 2018 | Announcement of Akida 1000 architecture | First neuromorphic SoC design unveiled; event-based AI for video analytics, IoT, automotive | BrainChip release |
| 2 Dec 2020 | ASX announcement: completion of Akida production design | Final design ready for tape-out; milestone toward commercial silicon | ASX release, 2 Dec 2020 |
| 21 Oct 2021 | ASX announcement: BrainChip begins taking orders for Akida AI Development Kits | Kits shipped to partners for evaluation and integration | ASX release, 21 Oct 2021 |
| 9 Nov 2021 | BrainChip completes testing of production version of Akida chip | Validated performance and readiness for commercial deployment | ASX release, 9 Nov 2021 |
| 28 Aug 2023 | First batch of Akida 1500 chips received from GlobalFoundries | Second silicon proof point; validated portability across foundries; built on 22nm FD-SOI | BrainChip release; see below |
| 4 Nov 2025 | Launch of latest Akida 1500 Edge AI co-processor at Embedded World North America | Achieves 800 GOPS under 300 mW; ideal for wearables, smart sensors, constrained environments | BusinessWire release |
| Early 2026 (roadmap) | Planned Akida Gen 2 tape-out | Next-generation neuromorphic IP; enhanced scalability, performance, and efficiency across automotive, IoT, wearables, defense | BrainChip roadmap |
You're right that I used a poor choice of terminology; I intended to refer to the roadmap from early this year, where AKD 2 ASIC tape-out was planned for January 2026.

A few things.
Not surprising; there will be competitors. If there weren't any, we should be very worried.
Quote from your post:
Akida's Gen 2 chip is still in production? No it's not; please elaborate on what you mean. You mean development, right?
Akida Gen 2 hardware was available to customers from 5th August (see link below)? When it's in the cloud, it's not hardware, right?
Anyway, BRN have a much wider audience IMO.
A few snippets from GPT comparing the TSP1 with the AKD1500, since they can be bought over the counter and not only as IP (BRN):
When you'd pick TSP1 vs. when you'd pick AKD1500
- Choose TSP1 if your application is voice/speech recognition, continuous audio processing, sensor-stream inference, an always-on low-power audio/sensor device, or a simple embedded product, where you need a self-contained, low-power, reliable edge-AI SoC with deterministic latency.
- Choose AKD1500 if your application needs high-performance edge AI, support for more complex/heavier models (vision, video, multi-sensor fusion), on-device learning, flexibility, future-proofing, or integration as a co-processor alongside a more capable host, especially when you want scalable performance and more computational headroom than a tiny embedded AI chip.
TL;DR: They're complementary, not direct substitutes
- TSP1 = "specialist: self-contained, efficient for time-series/audio/sensor embedded AI."
- AKD1500 = "generalist: powerful, flexible co-processor for demanding or varied edge AI tasks (vision, video, multiple modalities, ML-heavy workloads)."
TSP1 vs Akida1000 (processor)
When you'd want TSP1 vs. when you'd want AKD1000
Use TSP1 if:
- Your application is primarily speech/audio processing, sensor data, continuous time-series, always-on voice UI, biosignals, low-power wearable devices, IoT; i.e. data that is temporal, sequential, streaming.
- You need a self-contained, low-power SoC (MCU + NPU + memory + I/O) with minimal external dependencies, easy to embed in small form-factor devices.
- Power budget is tight (battery-powered, constrained devices), latency and real-time response matter, and the workload fits within a modest neural model size.
Use AKD1000 if:
- You need flexibility and generality: you want to run vision, audio + vision, sensor fusion, or larger and more complex models than strictly small time-series nets.
- Your workload may benefit from event-based/sparsity-aware processing (e.g. event cameras, spatio-temporal sensor data, mixed modalities), or you anticipate growth and flexibility in model architecture over the device lifetime.
- You are designing a system with a host processor (or can afford a co-processor architecture) and don't need the chip to be a standalone SoC.
Bottom line: neither is strictly better; they target different ends of the edge-AI spectrum.
If I were to pick for a voice-first, always-on edge device, TSP1 is the best fit.
- TSP1 = specialist: very efficient, minimal, optimized for time-series/streaming, low-power, embedded AI (speech, audio, sensors).
- AKD1000 = generalist neuromorphic/digital accelerator: more flexible and scalable, suited for vision, multimodal, heavier workloads, at the cost of greater system complexity and higher power when fully used.
If I were to build an edge AI box or a sensor+vision device needing flexibility and higher compute, AKD1000 makes more sense.
Australia hasn't got a wicket all day, so could you please leave so we can get a wicket.

Hope everyone is going to have a better day than me today, as I have a ticket to watch the Ashes!
"Next year is our year" - every damn year for the last 8 years.

Clearly 2026 is BrainChip's year, with Onsor and many others bringing Akida to market. We're going to need a bigger bank!
Okay, no worries.

You're right that I used a poor choice of terminology; I intended to refer to the roadmap from early this year, where AKD 2 ASIC tape-out was planned for January 2026.
Secondly, Akida Cloud doesn't mean it's just running software in the cloud. There are physical FPGAs that customers can connect to (I think Sean explained that in one of his recent videos). Instead of the FPGA devices being shipped to each customer, customers can remotely access them, which gives more of them the opportunity to test, plus I'd suspect it makes it easier to provide customer support.
Thirdly, AKD1500 has a lot more competition, as multiple companies have product offerings at that rough level of power which may be considered 'good enough' for many use cases. When you say BRN have a much wider audience, if you're basing that on AKD1500, I think that's misleading as a result.
Doing a comparison of AKD1500 to the TSP provides limited value IMO. The TSP is the only one available now that's optimised for time series / state-space model applications. The TSP effectively has its own niche market due to its advanced capabilities while still achieving similarly low power levels IMO. That's a much better place to be in, which is why having the earliest possible availability to customers for AKD2000 is crucial.
Honest truth: I got bored and left, and as soon as I left the Gabba I heard an almighty roar lol.

Australia hasn't got a wicket all day, so could you please leave so we can get a wicket.
SC
Totally agree, we need to get Gen 2 out there ASAP. The tape-out of Gen 2 in early 2026 is important, and hopefully it will allow customers who have the FPGA to develop prototypes in many instances.
A relevant interview regarding Chris Eliasmith of ABR, makes for an interesting read:
It has a few relevant parts such as:
SB: Your group has also come up with Legendre Memory Units to represent time. Listeners may remember that BrainChip have been using this approach as well. Can you talk about what they are and why they're useful?
CE: So we were really working on, "How does the brain represent time?" But we also came to realize that, well, this is a problem in machine learning. Time series is a massive area of research, and it's really hard. And people have all kinds of different recurrent networks, such as LSTMs and GRUs and a million variants on each of these. Transformers are now what people are using, where it's not a recurrent network but it kind of spreads time out in space so you can just process stuff in a feedforward way. So there are all of these different approaches.
We started applying this core dynamical system to these problems. And so the obvious thing to do, I think, was basically: you take that linear system (so this is the thing representing the temporal information) and then you put a non-linear layer afterwards, so you can then manipulate that temporal representation however you want. And you'll learn that using normal backprop. And that's what we call the Legendre Memory Unit.
More recently, people have taken that exact same structure and called it a state-space model, for obvious reasons: because basically, having a linear dynamical system and then a non-linear layer, that's a state-space model. And that's what BrainChip is using, for instance.
Also,
CE: And so for the last couple of years, that's what we've really been focused on: building a chip that can natively run state-space models, run them extremely efficiently (because it's specifically designed to do that), fabricate that chip, and go to customers and start getting into all the many different products that you might be interested in having it in.
So that's something that we're really excited about, because we just got the chip back for it at the end of August. We got the chip back, we tested it, and we had it up and running and setting world records in low-power running of state-space models within a week and a half.
SB: Talk more about this chip. I'm very interested!
CE: It's not a neuromorphic chip, in the sense that it's not event-based. It is a neural accelerator, so it has compute right near the memory. The memory is actually a little bit exotic: it's called MRAM, magnetoresistive RAM, which means that it's non-volatile. So you can load your network on there, you can basically leave it, shut off, and it draws almost no power until you need it, and then it can run the model, which is really cool.
We're able to do full speech recognition. So it could be sitting here basically typing out everything that I'm saying with about a hundred times less power than other edge hardware that's available on the market (under 30 milliwatts). We can have it typing out whatever language you're speaking in.
We can do other things with it, too. We can use it to control a device. So you can basically tell the device what you want to do, using natural language; you don't have to memorize keywords or key phrases, you just say what you want. We're also working with customers who want to use it to do things like monitor your biosignals: your heartbeat, your O2, your sweating, you know, anything. You can monitor all that and do on-device inference about the state of the person, warn them that they're going to have a seizure if it's EEG, or that they're having some kind of heart palpitation, or what have you.
And we just started our early access program. So we're working with customers, getting the hardware in their hands, helping them integrate it into their applications. We're super excited about what this chip can do. It's just kind of blowing the competition out of the water from a power and efficiency and performance perspective.
REC: An interesting aside, which is kind of like a reality check for me: a group of us went to speak to a DARPA director not long ago, and we were selling an idea where one of the primary focuses was that it was basically extremely low power. And she could not care less about the power. She cared mostly about latency. That is the thing. And this was an embodied AI application, Giulia. She said, "I will burn all the power that I need to if you can get me the speed and the decisions to happen as quickly as possible, and as effectively as possible." Which for me was like, "What!?" I expected that the power efficiency argument would've been a winner on the day.
It was interesting to me. We were trying to argue for small drones with small brains and so on, and she was like, eh, but what's the latency? When do you make the decision? Anyway, very interesting.
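The structure Eliasmith describes above (a fixed linear dynamical system holding the temporal memory, followed by a learned non-linear layer) can be sketched in a few lines. This is purely illustrative: the dimensions, the simple Euler discretisation, and the random readout weights are my own assumptions, not ABR's or BrainChip's actual implementation.

```python
import numpy as np

def lmu_matrices(d, theta):
    """Continuous-time (A, B) of the order-d Legendre delay system,
    the fixed linear core of the Legendre Memory Unit."""
    A = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1)) / theta
    q = np.arange(d)
    B = ((2 * q + 1) * (-1.0) ** q / theta).reshape(d, 1)
    return A, B

def run_lmu(u, d=8, theta=1.0, dt=0.01, n_out=4, seed=0):
    """Feed a 1-D signal through the linear memory, then a tanh layer.
    In a real LMU the readout W would be trained with backprop; here it
    is random, just to show the shape of the computation."""
    A, B = lmu_matrices(d, theta)
    Ad = np.eye(d) + dt * A          # crude Euler discretisation
    Bd = dt * B
    W = np.random.default_rng(seed).standard_normal((n_out, d)) * 0.1
    m = np.zeros((d, 1))
    outputs = []
    for u_t in u:
        m = Ad @ m + Bd * u_t        # linear memory update (no learning here)
        outputs.append(np.tanh(W @ m).ravel())  # non-linear layer on top
    return np.array(outputs)

signal = np.sin(np.linspace(0, 4 * np.pi, 200))
h = run_lmu(signal)
print(h.shape)   # (200, 4)
```

The point of the sketch is the split he describes: everything temporal lives in the fixed linear state `m`, and only the layer on top is learned, which is exactly the "linear dynamical system plus non-linear layer" shape that state-space models share.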
A separate thing I don't recall people mentioning was the YouTube presentation he did a few weeks ago, in which he went into his new chip in more detail. There were a few things worth noting, such as some specs (e.g. the chip uses a 22nm process node) and a comparison with competitors:
View attachment 93607
A few takeaways:
- This gives further confirmation that ABR's LMU and BrainChip's state-space models are closely related, competing technologies.
- The rush to launch Akida Cloud was likely partly a response to the threat ABR posed. Akida's Gen 2 chip is still in production. However, while ABR have silicon now, Akida Gen 2 hardware was available to customers from 5th August (see link below), slightly earlier than ABR's chip: ABR got their chip back at the end of August, then needed a few weeks of testing plus logistics before it could reach customers. The Akida 2 on offer would likely have been the FPGA prototype, so probably not the final version with maximum efficiency, but it allowed people to test at least a month earlier than ABR. In general, any new prototype chip can be released to customers this way, which provides an edge.
- In the YouTube clip, ABR compared their new chip to the 1st-gen Akida and said it wasn't comparable: it couldn't do more complex tasks like automatic speech recognition. I would guess Gen 2 would have similar efficiency/performance to ABR's new chip unless BrainChip have made significant improvements to hardware or algorithm efficiency (patent pending). Both chips are being manufactured in 22nm from memory, so performance should be comparable when these values eventually become available.
- It's probably no surprise that ABR are working with customers to implement solutions. However, even if ABR have better hardware, it doesn't mean customers will rush to adopt it over Akida Gen 2. BrainChip has spent many years building up its ecosystem and working with customers, and has other advantages of its own. Furthermore, BrainChip is more focused on IP than chips, which means they are working with some different customers.
- BrainChip's competitive advantage may not be as high as some people think. Akida Gen 3 will probably be important to ensuring they retain it.
LAGUNA HILLS, Calif. – Aug 5th, 2025 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world's first commercial producer of ultra-low power, fully digital, event-based neuromorphic AI, today announced the launch of the BrainChip Developer Akida Cloud, a new cloud-based access point to multiple generations and configurations of the company's Akida™ neuromorphic technology. The initial Developer Cloud release will feature the latest version of Akida's 2nd generation technology.
BrainChip Launches Akida Cloud for Instant Neuromorphic Access
BrainChip launches Akida Cloud, providing instant access to the latest neuromorphic technology for developers and edge AI innovators. (brainchip.com)
Haha. Same time as I posted. I only came on the forum because the cricket was a bit boring. Great minds, hey. Between us we got 4 wickets!
Honest truth, I got bored and left, and as soon as I left the Gabba I heard an almighty roar lol View attachment 93612
Ah Tech! Nice work, good to see that you are still invested in this space!
Chris (ABR) was engaged with BrainChip in the early days, as you probably know, but we both went our separate ways; that's my understanding, around 2017/2018.
This sector isn't winner-take-all: producing a product for a given task means many different players will have a role to play, and SWaP-C will play a huge role moving forward. Interacting with your environment in real time, as humans do through all five senses every second of every moment, will never be serviced by a power-hungry, comparatively slow data centre, no matter how big or cool you can make or keep it.
Learning on-chip in real time, basically with few-shot learning (at this point in development) at the point of acquisition, is how our race functions, day in, day out. That isn't going to be surpassed unless things like teleportation, or seeing the future in one's mind before it physically transpires, replace the ways we humans sense everyday things now.
ABR, like BrainChip and others, will all provide a cog in the wheel of time.
Just my thoughts.........Tech.
One would think Pico should be adapted into various projects in the very near future, considering it was at the request of clients, if I remember correctly. 2026 we should IMO see some announcements or financials around this.
With FPGA reducing the time to market for Gen 2 by 30 to 60% (see my last post for links), and plenty of avenues for developers to get on board, including our own website's Development Hub, we should see prototypes appearing in 2026.
I note Gen 2 is scheduled to be taped out in 2026.
It's all coming together.
Anyone else noticed the pace of build-up in all facets for BRN has picked up hugely in the last 12 months?