BRN Discussion Ongoing

7für7

Top 20
The only problem with that logic is that one company will introduce something and then their competitors will introduce something similar to compete. Greed is another motivator, because a Billionaire will want to be a Trillionaire and then a Squillionaire. It will be someone else's problem to fix. Too late by then. Bit like nuclear weapons. Surely no one would want them. Once one has them, they all want them.

SC

Hmmm sure companies want profit, but politicians and regulators tend to step in to slow things down with safety rules. We can be glad nuclear weapons aren’t in ‘normal’ use — and that’s exactly why high-risk technologies won’t just get a blanket green light. Air taxis show it well: the tech exists, but safety, insurance, and infrastructure still don’t fit cleanly within the regulatory framework.

My opinion… we will see it when it happens I guess
 
  • Like
  • Thinking
Reactions: 2 users
Fair enough but my main point was mankind is its own worst enemy. There was regulation around the new EV technology, but Elon Musk started DOGE and got rid of the people who were carrying out the 2,000-odd safety investigations of his Tesla cars. Problem fixed. People in power who have narcissistic tendencies, ego power trips or greed can cause a lot of harm. IMO

SC
 
  • Like
  • Thinking
  • Fire
Reactions: 9 users

7für7

Top 20

I don’t know maaan I just want to get richer

Elon Musk Smoking GIF
 
  • Haha
  • Like
  • Thinking
Reactions: 5 users
A relevant interview with Chris Eliasmith of ABR makes for an interesting read:

It has a few relevant parts such as:

SB: Your group has also come up with Legendre Memory Units to represent time. Listeners may remember that BrainChip have been using this approach as well. Can you talk about what they are and why they’re useful?

CE: So we were really working on, ‘How does the brain represent time?’ But we also came to realize that, well, this is a problem in machine learning. Time series is a massive area of research, and it’s really hard. And people have all kinds of different recurrent networks, such as LSTMs and GRUs and a million variants on each of these. Transformers are now what people are using, where it’s not a recurrent network but it kind of spreads time out in space so you can just process stuff in a feedforward way. So there are all of these different approaches.

We started applying this core dynamical system to these problems. And so the obvious thing to do, I think, was basically: you take that linear system—so this is the thing representing the temporal information—and then you put a non-linear layer afterwards, so you can then manipulate that temporal representation however you want. And you’ll learn that using normal backprop. And that’s what we call the Legendre Memory Unit.

More recently, people have taken that exact same structure and called it a state-space model, for obvious reasons—because basically, having a linear dynamical system and then a non-linear layer, that’s a state-space model. And that’s what BrainChip is using, for instance.
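
For anyone who wants to see the structure he's describing, here's a minimal sketch of the "linear dynamical system + non-linear layer" idea. It's illustrative only: a real LMU derives fixed A and B matrices from a Legendre-polynomial approximation of a sliding time window, whereas the ones below are arbitrary placeholders, and this is not ABR's or BrainChip's actual code.

```python
# Minimal sketch of the "linear dynamical system + non-linear layer" structure
# described above (the LMU / state-space idea). Illustrative only: a real LMU
# derives fixed A and B matrices from a Legendre-polynomial approximation of a
# sliding time window; here they are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)
d_memory, d_input, d_hidden = 16, 1, 32                  # sizes chosen arbitrarily

A = rng.normal(scale=0.1, size=(d_memory, d_memory))     # placeholder dynamics
B = rng.normal(scale=0.1, size=(d_memory, d_input))      # placeholder input map
W = rng.normal(scale=0.1, size=(d_hidden, d_memory + d_input))  # learned by backprop in practice

def step(m_prev, u_t):
    """One time step: linear memory update, then a non-linear read-out layer."""
    m_t = A @ m_prev + B @ u_t                           # linear state-space update
    h_t = np.tanh(W @ np.concatenate([m_t, u_t]))        # non-linear layer on top
    return m_t, h_t

m = np.zeros(d_memory)
for u in np.sin(np.linspace(0.0, 3.0, 50)).reshape(-1, 1):
    m, h = step(m, u)                                    # h feeds the next layer
```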


Also,

CE: And so for the last couple of years, that’s what we’ve really been focused on: building a chip that can natively run state-space models, run it extremely efficiently—because it’s specifically designed to do that—and fabricate that chip, and go to customers and start getting into all the many different products that you might be interested in having that in.

So that’s something that we’re really excited about, because we just got the chip back for it at the end of August. We got the chip back, we tested it, and we had it up and running and setting world records in low-power running state-space models within a week and a half.

SB: Talk more about this chip. I’m very interested!

CE: It’s not a neuromorphic chip, in the sense that it’s not event-based. It is a neural accelerator, so it has compute right near the memory. The memory is actually a little bit exotic—it’s called MRAM, so magnetoresistive RAM, which means that it’s non-volatile—so you can load your network on there, you can basically leave it, shut off, and it draws almost no power until you need it—and then it can run the model, which is really cool.

We’re able to do full speech recognition. So it could be sitting here basically typing out everything that I’m saying with about a hundred times less power than other edge hardware that’s available on the market—under 30 milliwatts. We can have it typing out whatever language you’re speaking in.

We can do other things with it, too. We can use it to control a device. So you can basically tell the device what you want to do, using natural language—you don’t have to memorize keywords or key phrases, you just say what you want. We’re also working with customers who want to use it to do things like monitor your biosignals—your heartbeat, your O2, your sweating, you know, anything. You can monitor all that and do on-device inference about the state of the person, warn them that they’re going to have a seizure if it’s EEG, or that they’re having some kind of heart palpitation, or what have you.

And we just started our early access program. So we’re working with customers, getting the hardware in their hands, helping them integrate that into their applications. We’re super excited about what this chip can do. It’s just kind of blowing the competition out of the water from a power and efficiency and performance perspective.

REC: An interesting aside, which is kind of like a reality check for me—a group of us went to speak to a DARPA director not long ago, and we were selling an idea of which one of the primary focuses was that it was basically extremely low power. And she could not care less about the power. She cared mostly about latency. That is the thing. And this was an embodied AI application, Giulia. She said, “I will burn all the power that I need to if you can get me the speed and the decisions to happen as quickly as possible, and as effectively as possible.” Which for me was like, “What!?” I expected that the power efficiency argument would’ve been a winner on the day.

It was interesting to me. We were trying to argue for small drones with small brains and so on, and she was like, eh, but what’s the latency? When do you make the decision? Anyway, very interesting.
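
To put the "under 30 milliwatts" figure in perspective, here's a rough back-of-the-envelope comparison. The 1 Wh battery size is my assumption, not a figure from the interview; the 100x factor is the one he quotes.

```python
# Rough battery-life comparison for the "under 30 milliwatts" claim above.
# The 1 Wh battery (~270 mAh at 3.7 V) is an assumed example size, not a
# figure from the interview; the 100x factor is the one quoted.
chip_power_w = 0.030                   # ~30 mW always-on speech recognition
alt_power_w = 100 * chip_power_w       # "about a hundred times less power" than alternatives
battery_wh = 1.0                       # small wearable-class battery (assumption)

print(f"At 30 mW : ~{battery_wh / chip_power_w:.0f} hours of continuous transcription")
print(f"At 3 W   : ~{battery_wh / alt_power_w * 60:.0f} minutes on the same battery")
```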


A separate thing I don't recall people mentioning was the YouTube presentation he did a few weeks ago, in which he went into his new chip in more detail. There were a few things worth noting, such as some specs (e.g. the chip uses a 22nm node process) and a comparison with competitors:




[competitor comparison slide from the presentation]



A few takeaways:
  1. This gives further confirmation that ABR's LMU and BrainChip's state-space models are very similar, competing technologies.
  2. The rush to get Akida Cloud out was likely partly in response to the threat ABR posed. Akida's Gen 2 chip is still in production. However, while ABR have a chip now, Akida Gen 2 hardware was available to customers from 5th August (see link below), slightly earlier than ABR's chip. ABR got their chip back at the end of August, then had a few weeks of testing plus logistics before it could be provided to customers. Akida 2 would likely have been the FPGA prototype, so probably not the final version with maximum efficiency, but it allowed people to test at least a month earlier than ABR. In general, any new prototype chip can be released to customers this way, which provides an edge.
  3. In the YouTube clip, ABR were comparing their new chip to the 1st-gen Akida and saying it wasn't comparable: it couldn't do more complex tasks like automatic speech recognition. I would guess Gen 2 would have similar efficiency/performance to ABR's new chip unless there are significant improvements that BrainChip have made to hardware or algorithm efficiency (patent pending). Both chips are being manufactured in 22nm, from memory, so performance should be comparable when these values eventually become available.
  4. It's probably no surprise that ABR are working with customers to implement solutions. However, even if ABR have better hardware, it doesn't mean customers will rush to adopt it over Akida Gen 2. BrainChip has spent many years building up its ecosystem and working with customers, and has other advantages of its own. Furthermore, BrainChip is more focused on IP than chips, which means they are working with some different customers.
  5. BrainChip's competitive advantage may not be as high as some people think. Akida Gen 3 will probably be important to ensuring they retain it.


LAGUNA HILLS, Calif. – Aug 5th, 2025 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based neuromorphic AI, today announced launch of the BrainChip Developer Akida Cloud, a new cloud-based access point to multiple generations and configurations of the company’s Akida™ neuromorphic technology. The initial Developer Cloud release will feature the latest version of Akida’s 2nd generation technology,
 
  • Like
  • Fire
  • Love
Reactions: 20 users
Hope everyone is going to have a better day than me today as I have a ticket to watch the Ashes 😂
 

Attachments

  • IMG_4006.jpeg
  • Haha
  • Like
  • Fire
Reactions: 9 users

Guzzi62

Regular

A few things.

Not surprising; there will be competitors. If there weren't any, we should be very worried.

Quote from your post:
Akida's Gen 2 chip is still in production? No, it's not; please elaborate on what you mean. You mean development, right?

Akida Gen 2 hardware was available to customers from 5th August (see link below)? When it's in the cloud, it's not hardware, right?

Anyway, BRN have a much wider audience IMO.

A few snippets from GPT comparing the TSP1 with the AKD1500, since both can be bought over the counter rather than only as IP (BRN):

When you'd pick TSP1 — when you'd pick AKD1500​


  • Choose TSP1 if your application is: voice / speech recognition, continuous audio processing, sensor-stream inference, always-on low-power audio/sensor device, simple embedded product — where you need a self-contained, low-power, reliable edge-AI SoC with deterministic latency.
  • Choose AKD1500 if your application needs: high-performance edge AI, support for more complex / heavier models (vision, video, multi-sensor fusion), on-device learning, flexibility, future-proofing, or integration as a co-processor alongside a more capable host — especially when you want scalable performance and more computational headroom than a tiny embedded AI chip.



🧮 TL;DR — They’re complementary, not direct substitutes​


  • TSP1 = “specialist, self-contained, efficient for time-series/audio/sensor embedded AI.”
  • AKD1500 = “generalist, powerful, flexible, co-processor for demanding or varied edge AI tasks (vision, video, multiple modalities, ML-heavy workloads).”

TSP1 vs Akida1000 (processor)

When you’d want TSP1 vs when you’d want AKD1000

Use TSP1 if:​

  • Your application is primarily speech/audio processing, sensor data, continuous time-series, always-on voice UI, biosignals, low-power wearable devices, IoT — i.e. data is temporal, sequential, streaming.
  • You need a self-contained, low-power SoC (MCU + NPU + memory + I/O) — minimal external dependencies, easy to embed in small form-factor devices.
  • Power budget is tight (battery-powered, constrained devices), latency & real-time response matter, and the workload fits within modest neural model size.

Use AKD1000 if:​

  • You need flexibility and generality — want to run vision, audio + vision, sensor fusion, larger or more complex models than strictly small time-series nets.
  • Your workload may benefit from event-based / sparsity-aware processing (e.g. event cameras, spatio-temporal sensor data, mixed modalities), or you anticipate growth/flexibility in model architecture over device lifetime.
  • You are designing a system with a host processor (or can afford a co-processor architecture) and don’t need the chip to be a standalone SoC.

🧮 Bottom Line: Not “one is strictly better” — they target different ends of the edge-AI spectrum​

  • TSP1 = specialist, very efficient, minimal, optimized for time-series / streaming, low-power, embedded AI (speech, audio, sensors).
  • AKD1000 = generalist neuromorphic/digital accelerator, more flexible and scalable — suited for vision, multimodal, heavier workloads, at the cost of greater system complexity and higher power when fully used.
If I were to pick for voice-first, always-on edge device, TSP1 is the best fit.
If I were to build an edge AI box or sensor+vision device needing flexibility and higher compute, AKD1000 makes more sense.
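
Condensing those bullets into a toy rule of thumb (purely illustrative, based only on the GPT summary above; not a real selection tool or vendor guidance):

```python
# Toy rule of thumb distilled from the bullet points above. Purely illustrative;
# the workload categories and criteria simply restate the GPT summary.
def suggest_chip(workload: str, needs_standalone_soc: bool, needs_on_device_learning: bool) -> str:
    time_series = workload in {"speech", "audio", "biosignals", "sensor-stream"}
    if time_series and needs_standalone_soc and not needs_on_device_learning:
        return "TSP1 (self-contained, low-power time-series/audio specialist)"
    return "AKD1000/AKD1500 (flexible co-processor: vision, fusion, on-device learning)"

print(suggest_chip("speech", needs_standalone_soc=True, needs_on_device_learning=False))
print(suggest_chip("vision", needs_standalone_soc=False, needs_on_device_learning=True))
```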
 
  • Like
Reactions: 6 users

Diogenese

Top 20

Hi IDD,

We had a joint promotion agreement with ABR back in 2016:

https://www.marketscreener.com/quot...Announces-Joint-Marketing-Agreement-38113998/

BrainChip Holdings Limited and Applied Brain Research will promote and refer each other's products and services in a joint marketing agreement. BrainChip is a developer of a new Spiking Neuron Adaptive Processor (SNAP) technology that has the ability to learn autonomously, evolve and associate information just like the human brain. Applied Brain Research is an owner and provider of an integrated technology software platform focused on building integrated artificial intelligence systems, developing the functional brain simulation, Spaun.

The partnership will see each company offering customers an integrated hardware and software offering.

I wonder if there was a tech sharing agreement?
 
  • Like
  • Thinking
Reactions: 8 users

manny100

Top 20
Thought I would ask 'chat' to produce a table, with links for verification, of the history to date of Akida SoCs.
I asked 'chat' to put it in table form. LLMs being what they are, it needed some intervention here and there.
The first news release marking the beginning of the pivot from Studio software to Akida hardware was 10th September 2018. There were a few more Studio releases after that date.
As you can see, it took 3 years just to get the AKD1000 delivered and production tested.
Estimated timeframes for early business adoption and consumer adoption are for another time. There is now a fair bit of information available, plus our own known experience with Onsor.
Please note that full commercialisation occurred on 17th Jan '22.
'Full Commercialisation' news release, 17th January 2022. See link.
Note that in Jan '26 it will have been 4 years. So much for all the 'it's been 10 years' etc.
BrainChip Achieves Full Commercialization of AKD1000
"BrainChip Achieves Full Commercialization of Akida AKD1000 Processor

Company welcomes enterprises to the Edge with AIoT PCIe board and board design layout"

Timeline of Akida Chips (Updated with ASX Announcements)​

Date | Event | Capabilities / Significance | Source
10 Sept 2018 | Akida 1000 architecture announced | First neuromorphic SoC design unveiled; event-based AI for video analytics, IoT, automotive | BrainChip release
2 Dec 2020 | ASX announcement: completion of Akida production design | Final design ready for tape-out; milestone toward commercial silicon | ASX release, 2 Dec '20
21 Oct 2021 | ASX announcement: BrainChip begins taking orders for Akida AI Development Kits | Kits shipped to partners for evaluation and integration | ASX release, 21 Oct '21
9 Nov 2021 | BrainChip completes testing of production version of Akida chip | Validated performance and readiness for commercial deployment | ASX release, 9 Nov '21
28 Aug 2023 | First batch of Akida 1500 chips received from GlobalFoundries | Second silicon proof point; validated portability across foundries; built on 22nm FD-SOI | BrainChip release (see below)
4 Nov 2025 | Launch of latest Akida 1500 Edge AI co-processor at Embedded World North America | Achieves 800 GOPS under 300 mW; ideal for wearables, smart sensors, constrained environments | BusinessWire release
Early 2026 (roadmap) | Planned Akida Gen 2 tape-out | Next-generation neuromorphic IP; enhanced scalability, performance and efficiency across automotive, IoT, wearables, defense | BrainChip roadmap


✅ Takeaway: the timeline runs from the 2018 architecture announcement → 2020 production design completion → Oct 2021 dev kit orders → Nov 2021 production testing completed → 2023 Akida 1500 silicon → 2025 updated AKD1500 co-processor launch → 2026 Gen 2 tape-out milestone.
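
As a quick sanity check on the 4 Nov 2025 row, the quoted 800 GOPS under 300 mW works out to roughly 2.7 TOPS/W. This is a simple division of the two quoted figures; real-world efficiency will depend on workload and precision.

```python
# Efficiency implied by the quoted AKD1500 figures (workload-dependent in practice)
gops, power_w = 800, 0.300
print(f"~{gops / power_w / 1000:.1f} TOPS/W")   # ~2.7 TOPS/W
```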

Add to that the tech announcements expected later this year or early next year concerning tech advances, e.g. Gen AI.

2018 NEWS RELEASE of pivot to hardware: BrainChip Announces Akida Neuromorphic SoC
BrainChip Unveils Breakthrough AKD1500 Edge AI Co-Processor at Embedded World North America
Appointment of Sean Hehir, ASX announcement 15 Nov '21 (2924-02451934-2A1338758)
Other ASX announcements:
  • 2924-02449398-2A1337454
  • 2924-02317787-2A1267910
  • 2924-02438858-2A1332482

 
  • Like
  • Thinking
Reactions: 7 users

manny100

Top 20

Looks like a good chip. However, I cannot see where it claims to have on-chip learning capabilities, which is Akida's secret sauce. If it does not have those capabilities, then it's still a fair way back.
 
  • Like
  • Love
Reactions: 2 users
You're right that I used a poor choice of terminology; I intended to refer to the roadmap from early this year, where the AKD2 ASIC tape-out was planned for January 2026.

Secondly, Akida Cloud doesn't mean it's just running software in the cloud. There are physical FPGAs that customers can connect to (I think Sean explained that in one of his recent videos). Instead of the FPGA devices being shipped to each customer, they can be accessed remotely, which gives more customers the opportunity to test and, I'd suspect, makes it easier to provide customer support.

Thirdly, the AKD1500 has a lot more competition, as multiple companies have product offerings at roughly that power level which may be considered 'good enough' for many use cases. When you say BRN have a much wider audience, if you're basing that on the AKD1500 I think that's misleading as a result.

Doing a comparison of the AKD1500 to the TSP provides limited value IMO. The TSP is the only one available now that's optimised for time series / state-space model applications. The TSP effectively has its own niche market due to its advanced capabilities while still achieving similarly low power levels IMO. That's a much better place to be in, which is why having the earliest possible availability of the AKD2000 to customers is crucial.
 
  • Like
Reactions: 5 users
Hope everyone is going to have a better day than me today as I have a ticket to watch the Ashes 😂
Australia hasn't got a wicket all day, so could you please leave so we can get a wicket.

SC
 
  • Like
Reactions: 1 users
Wow that was quick. Got a wicket in 10 seconds. What a catch by Smith

SC
 
  • Like
Reactions: 3 users

DK6161

Regular
Clearly 2026 is BrainChip's year, with Onsor and many others bringing Akida to market.
We're going to need a bigger bank 🏦
"Next year is our year" - every damn year for the last 8 years 😂
 
  • Like
  • Sad
  • Love
Reactions: 5 users

7für7

Top 20
I just got a newsletter... maybe because I recently visited the exhibition in Osaka. There will be an AI exhibition there next year as well. Maybe someone is interested?

 
  • Like
  • Thinking
Reactions: 3 users

Guzzi62

Regular
Okay, no worries.

When I said BRN has a much wider audience, I meant the whole line-up on offer: PICO, TENNs, and the AKD1500 being more multi-capable than the TSP, as I understand it.

Yes, I read about the FPGAs (prototyping: developers can use FPGAs to test designs and mature them before committing to a more expensive, fixed-function chip like an ASIC (Application-Specific Integrated Circuit)).

Yes, it's likely important to get the AKD2000 out in some physical form, but the 1500 should sell if the price is right.

Let's see some IP deals, please!
 
  • Like
Reactions: 5 users
Australia hasn't got a wicket all day, so could you please leave so we can get a wicket.

SC
Honest truth, I got bored and left, and as soon as I left the Gabba I heard an almighty roar lol
IMG_4014.png
 
  • Haha
  • Like
Reactions: 3 users

manny100

Top 20
Totally agree, we need to get Gen 2 out there ASAP. The tape-out of Gen 2 in early 2026 is important, and hopefully it will allow customers who have the FPGA to develop prototypes in many instances.
FPGA-to-prototype is fairly common. It certainly can reduce time to market.
"Reduce ASIC time-to-market by 30–60%" - bottom of page f422 on the above link.
 
  • Like
Reactions: 6 users

TECH

Regular


Nice work, good to see that you are still invested in this space!

Chris (ABR) was engaged with BrainChip in the early days, as you probably know, but the two went their separate ways around 2017/2018, as I understand it.

This sector isn't winner-take-all; producing a product for a given task means many different players will have a role to play, and SWaP-C will play a huge role moving forward. Interacting with your environment in real time, as humans do through all five senses every single second of any moment in what we call time, will never be serviced by a power-hungry, slow (in real terms) data center, no matter how big or cool you can make or keep it.

Learning on-chip in real time, basically with few-shot learning (at this point in development) at the point of acquisition, is how our race functions day in, day out. That isn't going to be surpassed unless things like teleportation, or visibly seeing the future in one's mind before it physically transpires, replace the modalities through which we humans sense everyday things now.
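
For anyone wondering what few-shot learning at the point of acquisition looks like in the simplest possible terms, here's a toy nearest-prototype sketch. It's illustrative only, and it is not how Akida actually implements on-chip learning.

```python
# Toy few-shot "learning at the point of acquisition": keep one running-mean
# prototype embedding per class, classify new samples by nearest prototype.
# Illustrative only; not Akida's actual on-chip learning mechanism.
import numpy as np

class FewShotPrototypes:
    def __init__(self):
        self.prototypes = {}               # label -> (mean embedding, sample count)

    def learn(self, label, embedding):
        """Enrol or refine a class from a single new example."""
        x = np.asarray(embedding, dtype=float)
        if label not in self.prototypes:
            self.prototypes[label] = (x, 1)
        else:
            mean, n = self.prototypes[label]
            self.prototypes[label] = ((mean * n + x) / (n + 1), n + 1)

    def predict(self, embedding):
        """Return the label whose prototype is closest to the new embedding."""
        x = np.asarray(embedding, dtype=float)
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(self.prototypes[lbl][0] - x))

fs = FewShotPrototypes()
fs.learn("keyword_A", [0.9, 0.1, 0.0])     # one-shot enrolment
fs.learn("keyword_B", [0.0, 0.8, 0.2])
print(fs.predict([0.85, 0.15, 0.05]))      # -> keyword_A
```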

ABR, like BrainChip and others, will all provide a key in the clog wheel of time.

Just my thoughts.........Tech.
 
  • Like
  • Love
  • Fire
Reactions: 9 users

Diogenese

Top 20
Ah Tech!

The clog wheel of time - that's sticking the old sabot into the great mandala.
 
  • Haha
  • Fire
  • Like
Reactions: 4 users