BRN Discussion Ongoing

FuzM

Regular
I wouldn’t be surprised if it turned out we’re also somehow connected to South Korean ASIC design house ASICLAND, whose engineers work closely with global partners, including TSMC and Arm.

The fact that both their CMO & Head of Overseas Sales and the company’s Global Strategy Manager “celebrated” this week’s LinkedIn post about our redesigned website with a clapping-hands emoji each is a little too much of a coincidence, don’t you think? 😉

“ASICLAND is a leading design house specializing in application-specific integrated circuit (ASIC) design, offering high-performance, low-power, and cost-optimized design services. As an official Value Chain Alliance (VCA) partner of TSMC—the world’s No.1 foundry—ASICLAND serves as a trusted bridge between customers and TSMC. We deliver full turnkey support, from architecture design to GDS delivery, across a wide range of industries including AI, automotive, IoT, and memory.”


View attachment 86920


View attachment 86922

View attachment 86935


View attachment 86930




View attachment 86926 View attachment 86927 View attachment 86928 View attachment 86929


In this context, I was wondering whether the undisclosed “Leading U.S. IDM Company” in yesterday’s press release 👇🏻 could by any chance be our licensee Renesas Electronics America, a wholly owned subsidiary of Tokyo-headquartered Renesas Electronics Corporation, which in turn happens to be a global semiconductor player? Just a wild guess, though…

“▶ Strengthening automotive semiconductor design capabilities through collaboration with a global semiconductor client

(…) [2025-06-11] ASICLAND has signed a supply agreement with a leading U.S. integrated device manufacturer (IDM) to jointly target the global automotive semiconductor market.

(…) The U.S. semiconductor company involved is an IDM providing essential chip designs and power management solutions for automotive electronics systems, with active operations across various industrial sectors. Through this collaboration, ASICLAND will expand its technological foundation and expertise in automotive chip design.

(…) Meanwhile, ASICLAND is accelerating efforts to enter global markets by establishing an advanced R&D center in Hsinchu, Taiwan*. The company is actively securing cutting-edge design technologies for 3nm and 5nm process nodes as well as CoWoS (Chip-on-Wafer-on-Substrate) packaging technology.”


*Hsinchu Science Park is Taiwan’s Silicon Valley and home to about 500 high-tech companies, among them TSMC, UMC and MediaTek, as well as our partner Andes Technology.




View attachment 86936




View attachment 86932

Not a surprise after all
 
  • Like
Reactions: 2 users

jtardif999

Regular
FF

Let’s compare production time lines:

AKD1000
2018 Design work completed and IP tested by select customers, leading to the decision to completely redesign AKD1000
2019 July Redesigned AKD1000 IP released to select customers for feedback
2020 March Feedback received; AKD1000 produced in FPGA and tested
2020 April Final design sent to Socionext for engineering
2020 September AKD1000 engineering sample back from TSMC; testing takes place
2020 October AKD1000 engineering samples released to early access customers
2020 December AKD1000 engineering samples released to NASA
2021 March Brainchip announces that, taking customer feedback into account, design changes have been made to AKD1000 and the reference AKD1000 is ready for tape-out at TSMC
2021 November AKD1000 reference chips received and tested, proving the redesign delivers a 30 percent increase in efficiency over the AKD1000 engineering samples
2023 January After extensive customer engagement, Brainchip designs a further iteration of the AKD1000 series, named AKD1500

Akida 2.0
2022 Brainchip announces acceleration of the development of Akida 2.0 IP
2023 March Akida 2.0 IP announced and released to select customers, ahead of general availability, to obtain customer feedback
2023 October Akida 2.0 IP availability extended
2024 Brainchip prioritises development of models and TENNs software to support Akida 2.0
2024 - 2025 Brainchip works on and completes the engineering design for demonstrating Akida 2.0 on FPGA
2025 Brainchip demonstrates Akida 2.0 on FPGA and makes it available for testing by select customers
2025 August Brainchip announces Akida 2.0 FPGA cloud access for customers/developers
2026 February Brainchip announces the refined Akida 2.0 design, named AKD2500, to be produced in silicon and taped out on 12nm at TSMC as an engineering sample prior to any volume production

There appears to be a certain amount of similarity in the timelines for developing revolutionary neuromorphic technology, regardless of who is on the Board, who is the Chair, and who is the CEO of Brainchip.

Note I have excluded references to Akida Pico from the Akida 2.0 timeline, as well as the various platforms such as the Edge Box and M.2 cards for AKD1000.
A bit of confusing nomenclature 🙂:

Akida1.0 is the underlying architecture associated with the AKD1000 and AKD1500 chips.

Akida2.0 is the underlying architecture for the AKD2500 chip when it has been fabricated.

AFAIK Akida Pico is a further modification of the underlying Akida 2.0 architecture that allows a single neural processing unit to serve as the entire neural fabric - though I think it is not limited to one.
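For what it's worth, the mapping above can be summarised as a small table (my own recap of the post, not an official BrainChip list):

```python
# Architecture-to-chip mapping, as described in the post above.
AKIDA_FAMILY = {
    "Akida 1.0": ["AKD1000", "AKD1500"],  # first-generation architecture
    "Akida 2.0": ["AKD2500"],             # second generation, once fabricated
    "Akida Pico": [],                     # Akida 2.0 variant; no named chip yet
}

for arch, chips in AKIDA_FAMILY.items():
    print(f"{arch}: {', '.join(chips) if chips else 'IP only so far'}")
```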

Modifications have been made to each of the Akida architectures in line with customer requests as have modifications made directly to the chips themselves.

I hope this makes a bit more sense. I think all of this would be quite confusing to anyone without an engineering background.
 
  • Like
Reactions: 4 users

Diogenese

Top 20
Hi @manny100

Don’t want to be a negative Nelly on the topic but Megachips are also partnered with Quadric and have been pumping their tyres up publicly:

Quadric

“Chimera General Purpose Neural Processing Unit” (GPNPU), which Quadric independently developed, can provide high performance computing regardless of sensor types. Conventional AI engines usually only run at high speed in neural networks while off-loading pre-processing and boundary processing onto a host system. However, with Quadric’s technology, it is possible to run at high speed with pre-processing, boundary processing, and neural network inference on a single hardware unit.
This unique ability that can support all kinds of data processing will allow for the accommodation of new types of neural networks released in the future without any hardware changes, regardless of the applications.

Features

  • Scalability for both low power consumption and accuracy
  • Accelerate entire steps of AI pipeline processing
  • Flexible architecture to support the evolution of AI models
  • IPs for silicon implementation
  • Software development kits available


Douglas Fairburn who once did a podcast with Brainchip says:

View attachment 95056


I think we need to be realistic that Megachips are possibly using Quadric vs BC.

Love to be wrong of course but I haven’t seen any evidence of that.
Hi SG,

I'd forgotten about the Megachips/Quadric partnership:


Quadric and MegaChips Form Partnership to Bring IP Products to ASIC and SoC Market - Embedded Computing Design

Quadric and MegaChips announced a strategic partnership to deliver ASIC and SoC solutions built on Quadric’s edge AI processor architecture.

MegaChips announced an equity stake in Quadric in January 2022 and is also a major investor in a $21M Series B funding round announced in March through their MegaChips LSI USA Corporation subsidiary. The round aims to help Quadric release the next version of its processor architecture, improve the performance and breadth of the Quadric software development kit (SDK), and roll out IP products to be integrated in MegaChips’ ASICs and SoCs.

I looked at their patents at the time and, if I recall correctly, was not overly impressed. However, they have a new patent which refers to their old patents as prior art. (This does not necessarily mean the old ones are obsolete, as the new one could be an improvement):

US20260023715A1 SYSTEMS AND METHODS FOR IMPLEMENTING DIRECTIONAL OPERAND BROADCAST AND MULTIPLY-ACCUMULATE EXECUTION USING A CONFIGURABLE PATCH MESH IN A MULTI-CORE PROCESSING ARRAY OF AN INTEGRATED CIRCUIT 20240719 - Published 20260122



[0099] Each array processing core includes a local register memory 310 that may be logically divided into a broadcast region 309, a source region 318, and a destination region 320. The broadcast region 309 may be configured to hold operand values intended to be fed into a patch operation. In one or more embodiments, the source region 318 preferably stores local data operands to be used in multiply-accumulate computations, while the destination region 320 receives accumulation results.

[0100] A feed path interconnect enables operand values from one or more array processing cores to be routed to the origin core 303. Once the operand value may be received by the origin core 303, the operand value may be staged in a broadcast staging register or similar holding buffer. The origin core 303 then initiates a unidirectional wavefront broadcast of the operand value using a patch mesh interconnect fabric 312.

[0101] In one or more embodiments, the patch mesh interconnect 312 may be configured to propagate operand values from the origin core 303 to other processing cores within the patch region using a wavefront propagation scheme. In one embodiment, the wavefront progresses outward from the origin core 303 in a Manhattan-distance order, such that each processing core within the patch region receives the broadcast value with a fixed delay relative to the number of inter-core hops from the origin core. The broadcast timing model enables deterministic compute scheduling across the patch.

Basically, a pulse originating at 303 (top left) propagates through the matrix with a delay determined by the number of cores between 303 and the destination core, creating a propagating "wave front".
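As a toy sketch of that timing model (my reading of paragraphs [0100]-[0101] only; the function and parameter names are made up, not Quadric's API):

```python
def broadcast_schedule(rows, cols, origin=(0, 0), hop_delay=1):
    """Arrival cycle of a broadcast operand at each core in a patch.

    The wavefront expands outward from the origin core, so each core's
    delay is its Manhattan distance (inter-core hops) from the origin.
    """
    oy, ox = origin
    return {
        (y, x): hop_delay * (abs(y - oy) + abs(x - ox))
        for y in range(rows)
        for x in range(cols)
    }

sched = broadcast_schedule(4, 4)       # 4x4 patch, origin at top left
print(sched[(0, 0)], sched[(3, 3)])    # origin: 0 hops; far corner: 6 hops
```

Each core's arrival cycle is just its hop count from the origin, which is what makes the multiply-accumulate scheduling deterministic across the patch.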

This sounds to me like it is intended to mimic the propagation of signals through the brain. The reason for the development is set out here:

[0006] Accordingly, there remains a need in the integrated circuitry field for operand distribution techniques that permit localized, directionally controlled broadcasting of operands to selected subsets of processing elements. There also remains a need for compute scheduling frameworks that enable deterministic multiply-accumulate operations across such subsets while minimizing interconnect congestion and improving temporal alignment of data arrival and compute triggering.


CTO Nigel Drego cut his teeth with cryptocurrency tech.

The Quadric IO webpage has been 404'd. Their last investment round was 20260115.
 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 7 users

manny100

Top 20
(quoting Diogenese’s Megachips/Quadric post above)
It’s likely horses for courses: some robot tasks will suit AKD1000 and other tasks Quadric.
It’s likely both will get a 'gig'.
 
  • Like
Reactions: 2 users
(quoting Diogenese’s Megachips/Quadric post above)

Probably sick of me banging on about Quadric, but whether we like it or not, Quadric is a competitor and a partner of Megachips. Maybe it isn’t as good as Akida, but in some cases it must be good enough.


The article attached states Quadric just raised another $30M, taking its funding to $72M, so it’s not going anywhere in a hurry.

Further discussion about the speed of the industry, and how technology advances faster than chips can be manufactured, may point to why Brainchip took its time and consulted widely prior to taping out:


I’m still rapt that AKD2500 has been taped out, because it indicates industry has broadly approved what BC is building, or they wouldn’t spend the $2.5M to produce it.

It’s not going to be a winner takes all scenario anyway. It will eventually be a big TAM!

:)
 
  • Like
  • Fire
Reactions: 7 users

gex

Regular
What’s the difference between now and 3 years ago, when people were saying we’re going to the moon?
Our company was nothing back then.
Look at us now: look how far we have developed, and how many avenues of potential income we have with the array of products available.
Why did people believe then but not now?
Look at the prices. Do yourself a favour.
What is satire, pleb
 
  • Haha
Reactions: 1 users

Diogenese

Top 20
(quoting the previous post about Quadric’s funding above)
Yes. Akida 2 has TENNs - that’s streets ahead of the competition (Applied Brain Research excepted). Akida 3 adds the latency and power benefits of a solid-state switching mesh vs packet switching.
 
  • Like
  • Fire
Reactions: 3 users
Top Bottom