BRN Discussion Ongoing

Maybe my question above was not formulated clearly enough:

I would like to know which company's FPGA hardware (e.g. Xilinx, Altera, Lattice Semiconductor, Microchip, …) was used for the "software/algorithm/IP" demos (the ones that didn't run on an Akida 1000 or similar).
 
  • Like
  • Fire
Reactions: 3 users

7für7

Top 20
  • Like
  • Thinking
Reactions: 3 users

manny100

Top 20
Possibly been posted before.
See the link below to the paper written by Tata, co-authored by Sounak Dey, a Tata senior scientist, on wearables/implants transforming the diagnosis and treatment of cardiac disorders.
" ... This system efficiently manages ECG classification tasks, greatly reducing computational complexity without compromising accuracy. Furthermore, Banerjee et al.[96]optimized SNNs for ECG classification in wearable and implantable devices such as smartwatches and pacemakers. Their approach in designing both reservoir-based and feed-forward SNNs, and integrating a new peak-based spike encoder, has led to significant enhancements in network efficiency. ..."
" ... Using these state-of-theart techniques, medical practitioners may better comprehend heart issues 67 , allowing for earlier intervention and more successful treatment regimens. Combining imaging technologies, molecular diagnostics 68 , and ECG analysis offers a viable path toward transforming the diagnosis and treatment of cardiac disorders69, eventually leading to better patient outcomes. ..."
I do not need to state how huge transforming the diagnosis and treatment of cardiac disorders via a wearable or implant would be for both Tata and Brainchip.
(PDF) An SNN Based ECG Classifier For Wearable Edge Devices
I doubt Tata would only be working on one Health wearable.
Also, as we know, Tata already have a patent on 'gesture classification' for IoT. See the link below:
https://www.linkedin.com/posts/soun...chip-akida-activity-7305909010166071296-u0f4/
 
  • Like
  • Love
  • Fire
Reactions: 22 users

Diogenese

Top 20
Maybe I have missed it, but has anyone found out already which FPGA provider Brainchip‘s IP has been demoed on?
(I‘m hoping for AMD‘s Xilinx, but Intel‘s Altera would be nice too …)
Hi Crabman,

Hadn't given it any thought, although we have used Xilinx in the past.

AMD Versal has been developed for AI:

https://www.xilinx.com/content/dam/...s/xilinx-versal-ai-compute-solution-brief.pdf

The Versal AI Core series solves the unique and most difficult problem of AI inference—compute efficiency—by coupling ASIC-class compute engines (AI Engines) together with flexible fabric (Adaptable Engines) to build accelerators with maximum efficiency for any given network, while delivering low power and low latency. Through its integrated shell—enabled by a programmable network on chip and hardened interfaces—Versal SoCs are built from the ground up to ensure streamlined connectivity to data center compute infrastructure, simplifying accelerator card development.




AI Engines
> Tiled array of vector processors, flexible interconnect, and local memory enabling massive parallelism
> Up to 133 INT8 TOPS with the Versal AI Core VC1902 device, scales up to 405 INT4 TOPS in the portfolio
> Compiles models in minutes based on TensorFlow, PyTorch, and Caffe using Python or C++ APIs
> Ideal for neural networks ranging from CNN, RNN, and MLP; hardware adaptable to optimize for evolving algorithms


I'm out of my depth here, but what is a puzzle to me is that Akida/TENNs has a unique NPU architecture, so a pre-baked arrangement might not provide an accurate simulacrum. Maybe we need a more freestyle FPGA so we can build our own NPUs?
 
  • Like
  • Fire
  • Love
Reactions: 13 users
Hi Crabman,

Hadn't given it any thought, although we have used Xilinx in the past.

AMD Versal has been developed for AI:

https://www.xilinx.com/content/dam/...s/xilinx-versal-ai-compute-solution-brief.pdf

The Versal AI Core series solves the unique and most difficult problem of AI inference—compute efficiency—by coupling ASIC-class compute engines (AI Engines) together with flexible fabric (Adaptable Engines) to build accelerators with maximum efficiency for any given network, while delivering low power and low latency. Through its integrated shell—enabled by a programmable network on chip and hardened interfaces—Versal SoCs are built from the ground up to ensure streamlined connectivity to data center compute infrastructure, simplifying accelerator card development.



AI Engines
> Tiled array of vector processors, flexible interconnect, and local memory enabling massive parallelism
> Up to 133 INT8 TOPS with the Versal AI Core VC1902 device, scales up to 405 INT4 TOPS in the portfolio
> Compiles models in minutes based on TensorFlow, PyTorch, and Caffe using Python or C++ APIs
> Ideal for neural networks ranging from CNN, RNN, and MLP; hardware adaptable to optimize for evolving algorithms


I'm out of my depth here, but what is a puzzle to me is that Akida/TENNs has a unique NPU architecture, so a pre-baked arrangement might not provide an accurate simulacrum. Maybe we need a more freestyle FPGA so we can build our own NPUs?
Thanks for your input. Basically I have no real idea how an FPGA actually works internally. I always imagined some basic I/O (RAM, network, etc.) and a big Lego-like configurable part (plus maybe some kind of cache).

In the end I'm far from knowledgeable enough to estimate which vendor or product series might be a good fit for running Akida IP.

I'm just curious whether Akida IP could run on a widespread FPGA flavor or whether something more "niche" might be required.
 
  • Like
  • Fire
Reactions: 6 users

Diogenese

Top 20
Thanks for your input. Basically I have no real idea how an FPGA actually works internally. I always imagined some basic I/O (RAM, network, etc.) and a big Lego-like configurable part (plus maybe some kind of cache).

In the end I'm far from knowledgeable enough to estimate which vendor or product series might be a good fit for running Akida IP.

I'm just curious whether Akida IP could run on a widespread FPGA flavor or whether something more "niche" might be required.
FPGAs are quite inefficient for a commercial product because the architecture is not optimized and includes a lot of unutilized circuitry, which also makes them a more expensive solution.

The main use of FPGAs would be for proof of concept/demonstration.

An Akida compatible FPGA would be one with an adequate number of appropriate components and function blocks to enable the required Akida NPUs to be configured, (which is a circular definition).
 
  • Like
  • Fire
  • Thinking
Reactions: 9 users
FPGAs are quite inefficient for a commercial product because the architecture is not optimized and includes a lot of unutilized circuitry, which also makes them a more expensive solution.

The main use of FPGAs would be for proof of concept/demonstration.

An Akida compatible FPGA would be one with an adequate number of appropriate components and function blocks to enable the required Akida NPUs to be configured, (which is a circular definition).
Also @CrabmansFriend

Way out of my tech depth, but when I did a skim around Prophesee and FPGA compatibility, the project below comes up on GitHub, and a lot of the info on their site, Framos etc. appears to lead back primarily to AMD boards.

Given that our blurb on the EW25 demo states we are using the EVK4 dev camera, and that the Prophesee support site says the following in answer to a question on the EVK4 vs the Kria starter kit, is it a given that we are using a standalone AMD FPGA via PC?


  1. EVK4 is a USB camera that connects directly to a PC, making it ideal for exploring event-based vision, experimenting with sensor tuning, and performing data acquisitions.
Our release:


"gesture recognition using the Akida 2 FPGA platform in conjunction with the Prophesee EVK4 development camera."






 
  • Like
  • Love
Reactions: 17 users
FPGAs are quite inefficient for a commercial product because the architecture is not optimized and includes a lot of unutilized circuitry, which also makes them a more expensive solution.

The main use of FPGAs would be for proof of concept/demonstration.

An Akida compatible FPGA would be one with an adequate number of appropriate components and function blocks to enable the required Akida NPUs to be configured, (which is a circular definition).
Regarding inefficiency: you're absolutely right, and even more so if we only consider energy-restricted devices or applications.

But the more I read about current trends in the semiconductor space (especially chiplets, packaging memory on top of logic, etc.), and the growing number of industry interviews about the breakneck speed of AI/ML development and how fast new approaches/algorithms get established, the more I wonder if we might see a transition period (on the inference side), at least in areas where energy consumption is not the primary concern (as long as we're not talking GPU levels) but flexibility and future-proofing are. Say, devices that are mains-powered but might need updates for 10 years or so in a rapidly changing area like AI/ML: mobile network base stations, industrial wireless communication systems, etc.

I might be totally off, but I could imagine an FPGA chiplet option becoming available to custom-silicon customers in the future, e.g. integrated into an AMD/Intel CPU (maybe just as a safety net alongside, or as an alternative to, tensor cores or whatever highly specialized accelerator flavor gets integrated in the future). So basically a trade-off on energy consumption and cost in favor of flexibility.

Edit - examples added:

FPGA as a chiplet:

Ultra low power FPGA (starting at 25 µW):
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 5 users

manny100

Top 20
FPGAs are quite inefficient for a commercial product because the architecture is not optimized and includes a lot of unutilized circuitry, which also makes them a more expensive solution.

The main use of FPGAs would be for proof of concept/demonstration.

An Akida compatible FPGA would be one with an adequate number of appropriate components and function blocks to enable the required Akida NPUs to be configured, (which is a circular definition).
I have the impression that Tony L is pushing Edge FPGAs for LLMs past demonstration into the realm of commercialisation.
It's evolving; watch this space for announcements pre-AGM.
Could this be the team's new invention Tony L mentioned in a post pre-CES25?
Others are working on it too, e.g. Achronix data acceleration.
 
  • Like
Reactions: 7 users

yogi

Regular
Just a thought. I know it will be put down to NDAs, but so many of our so-called partners are talking about other tech, and it's been a while since any of them came out mentioning BrainChip. Like @MDhere said last week, we have too many bed partners. What comes to mind is that BrainChip is desired as a bed partner, but nobody wants to take it home bhahahaha
 
  • Like
Reactions: 3 users

manny100

Top 20
Just a thought. I know it will be put down to NDAs, but so many of our so-called partners are talking about other tech, and it's been a while since any of them came out mentioning BrainChip. Like @MDhere said last week, we have too many bed partners. What comes to mind is that BrainChip is desired as a bed partner, but nobody wants to take it home bhahahaha
We have had a few lately: Onsor, QV cybersecurity, AI Labs (oil rigs), Frontgrade (space).
Positives on the US AFRL and the Bascom Hunter Navy transition.
The Tata gesture recognition patent is encouraging, plus their work on wearables for early detection and treatment of heart issues.
Be Emotion have software on the VVDN box, just waiting for a customer announcement.
I may have forgotten one or two, but it's a start.
.... and of course what lies beneath via NDAs??
Agree, want more though.
 
  • Like
  • Fire
  • Love
Reactions: 23 users

DK6161

Regular
Today is going to be a great day.
At least 10% up pls 🙏
 
  • Like
  • Fire
  • Haha
Reactions: 5 users

Diogenese

Top 20
Regarding inefficiency: you're absolutely right, and even more so if we only consider energy-restricted devices or applications.

But the more I read about current trends in the semiconductor space (especially chiplets, packaging memory on top of logic, etc.), and the growing number of industry interviews about the breakneck speed of AI/ML development and how fast new approaches/algorithms get established, the more I wonder if we might see a transition period (on the inference side), at least in areas where energy consumption is not the primary concern (as long as we're not talking GPU levels) but flexibility and future-proofing are. Say, devices that are mains-powered but might need updates for 10 years or so in a rapidly changing area like AI/ML: mobile network base stations, industrial wireless communication systems, etc.

I might be totally off, but I could imagine an FPGA chiplet option becoming available to custom-silicon customers in the future, e.g. integrated into an AMD/Intel CPU (maybe just as a safety net alongside, or as an alternative to, tensor cores or whatever highly specialized accelerator flavor gets integrated in the future). So basically a trade-off on energy consumption and cost in favor of flexibility.

Edit - examples added:

FPGA as a chiplet:

Ultra low power FPGA (starting at 25 µW):
Not sure that "FPGA" and "chiplet" go in the same sentence. An FPGA needs a lot of spare circuit functions to make it useful. Chiplets are usually a single purpose functional block.

That said, Akida is field programmable, in that the configuration of the neural fabric (number of layers, NPUs per layer, interconnexion between NPUs of adjacent layers, long skip) can be changed by "programming" the interconnexion communication matrix network bus.
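To make that kind of field programmability concrete: "reprogramming" amounts to loading a different description of the mesh rather than re-fabricating silicon. A purely hypothetical sketch (these class and field names are mine, not BrainChip's MetaTF API or hardware format):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LayerConfig:
    npus: int                      # NPUs allocated to this layer
    skip_to: Optional[int] = None  # optional long-skip target layer index

@dataclass
class FabricConfig:
    """Hypothetical description of a reconfigurable neural fabric."""
    layers: List[LayerConfig] = field(default_factory=list)

    def total_npus(self) -> int:
        return sum(layer.npus for layer in self.layers)

# Reconfiguring the fabric is just swapping in a different description:
cfg = FabricConfig([
    LayerConfig(npus=8),
    LayerConfig(npus=16, skip_to=3),  # long skip from layer 1 to layer 3
    LayerConfig(npus=16),
    LayerConfig(npus=4),
])
print(cfg.total_npus())  # → 44
```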
 
  • Like
  • Love
  • Fire
Reactions: 12 users

manny100

Top 20
Not sure that "FPGA" and "chiplet" go in the same sentence. An FPGA needs a lot of spare circuit functions to make it useful. Chiplets are usually a single purpose functional block.

That said, Akida is field programmable, in that the configuration of the neural fabric (number of layers, NPUs per layer, interconnexion between NPUs of adjacent layers, long skip) can be changed by "programming" the interconnexion communication matrix network bus.
The plan may be to use the hybrid AI that was talked about prior to CES 25?
See my earlier post.
That combines our Edge with minimal cloud.
Tony L and his team may have found a way to efficiently utilise FPGAs with TENNs using hybrid AI?
There is a lot of work on FPGAs at the Edge going on ATM.
 
  • Like
  • Fire
  • Love
Reactions: 4 users

7für7

Top 20
Mmhhhh I can smell some green today MF

 
  • Haha
  • Like
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Maybe my question above was not formulated clearly enough:

I would like to know which company's FPGA hardware (e.g. Xilinx, Altera, Lattice Semiconductor, Microchip, …) was used for the "software/algorithm/IP" demos (the ones that didn't run on an Akida 1000 or similar).

Hi CF,

Also way out my depth on this one but have been hoping it could be Microchip's radiation-tolerant PolarFire® SoC FPGA.

I'm not on my PC ATM and am not used to posting via my iPad, so hopefully the link below to a previous post from September 2024 works.

Microchip should very shortly be announcing the release of their higher-performance PIC64-HPSC processor; the GX1100 is expected to be available for sampling in March this year!

This new GX1100 processor is going to be boosted with new AI inferencing hardware and is compatible with the existing PolarFire FPGA. Obviously I'm holding out some hope that this is where we might fit into the whole NASA High Performance Spaceflight Computer, assuming AKIDA is the AI inferencing hardware (assuming being the operative word).

It makes sense to me that the FPGA should be radiation tolerant, given the interest coming from the space and defense sectors in AKIDA ATM.

 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 25 users

Diogenese

Top 20
The plan may be to use the hybrid AI that was talked about prior to CES 25?
See my earlier post.
That combines our Edge with minimal cloud.
Tony L and his team may have found a way to efficiently utilise FPGAs with TENNs using hybrid AI?
There is a lot of work on FPGAs at the Edge going on ATM.
Hi Manny,

Yes. FPGAs may be useful for low-volume niche applications where speed/power efficiency does not need to be maximally optimized, but they are primarily a development enabler and proof-of-concept tool.

The flexibility, in-field upgradability, and interoperability of Field Programmable Gate Arrays (FPGAs), combined with low power, low latency, and parallel processing capabilities make them an essential tool for developers looking to overcome these challenges and optimize their Contextual Edge AI applications.

[Yes - everybody claims low latency/low power, but they are usually an order of magnitude or more worse than Akida's SNN, and worse again compared to TENNs]

Gate arrays have been around for donkey's years. Then came programmable gate arrays, which were set in stone once the configuration was programmed. Then came FPGAs, which allow the configuration to be changed via a programmable interconnexion network. Now we have AI-optimized FPGAs in which selectable NPUs are part of the FPGA. Most NPUs use MACs (a sort of electronic mathematical "abacus"). The Akida NPUs are spike/event based, which utilizes sparsity more effectively than MACs.
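A toy illustration of the MAC-versus-events point (heavily simplified; real SNN hardware is far more involved): a dense MAC layer touches every weight regardless of input, while an event-driven layer only accumulates the weight rows of neurons that actually spiked, and both arrive at the same answer.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 1000, 100
weights = rng.standard_normal((n_in, n_out))

# Sparse binary activity: only ~2% of input neurons fire an event
spikes = rng.random(n_in) < 0.02

# Dense MAC-style layer: n_in * n_out multiply-accumulates, always
dense_out = spikes.astype(float) @ weights

# Event-driven layer: only the spiking rows are touched, and for
# binary spikes the "multiply" disappears entirely, leaving just adds
event_out = weights[spikes].sum(axis=0)

assert np.allclose(dense_out, event_out)
print(f"events processed: {int(spikes.sum())} of {n_in} inputs")
```

The sparser the activity, the bigger the saving, which is why spike/event encodings (and temporal sparsity in TENNs-style models) matter so much at the edge.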

But commercializing FPGAs is moving away from our stated primary business model of IP licensing. Don't get me wrong - I've always been an advocate for COTS Akida in silicon, but I don't see FPGA as the mass-market commercial solution. Obviously I don't have Tony Lewis' expertise in the field, but was he talking about a COTS FPGA or a demonstrator?
 
  • Like
  • Fire
  • Love
Reactions: 14 users

7für7

Top 20
It’s interesting that Germany closed up 5% yesterday, while Australia was down around 2%

🧐
 
  • Like
Reactions: 1 users

Diogenese

Top 20
The plan may be to use the hybrid AI that was talked about prior to CES 25?
See my earlier post.
That combines our Edge with minimal cloud.
Tony L and his team may have found a way to efficiently utilise FPGAs with TENNs using hybrid AI?
There is a lot of work on FPGAs at the Edge going on ATM.
Hi Manny,

RAG (retrieval-augmented generation) would be one area where cloud hybrid [as distinct from analog/digital hybrid] could be implemented. A cloud-based LLM could be sub-divided into specific sub-topic models which could be downloaded as required.

For a cloud-free implementation, the main LLM would be stored on a separate memory in the same device as the NN. A variation on this would be where the main LLM could be updated via the cloud.
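A minimal sketch of the download-on-demand variant (entirely hypothetical structure, just to make the scheme concrete): the edge device keeps a small cache of sub-topic models and only goes to the cloud on a miss.

```python
class EdgeModelCache:
    """Toy cloud-hybrid model manager: sub-topic models are fetched
    from the cloud only when not already cached on the device."""

    def __init__(self, fetch_fn, max_models=2):
        self.fetch_fn = fetch_fn      # stands in for a cloud download
        self.max_models = max_models
        self.cache = {}               # topic -> model
        self.downloads = 0

    def get(self, topic):
        if topic not in self.cache:
            if len(self.cache) >= self.max_models:
                self.cache.pop(next(iter(self.cache)))  # evict oldest entry
            self.cache[topic] = self.fetch_fn(topic)
            self.downloads += 1
        return self.cache[topic]

# Pretend a "model" is just a label; a real system would load weights.
cache = EdgeModelCache(fetch_fn=lambda topic: f"model<{topic}>")
for topic in ["cardiology", "cardiology", "automotive", "cardiology"]:
    cache.get(topic)
print(cache.downloads)  # → 2 (repeat topics hit the on-device cache)
```

The cloud-free variant described above is the degenerate case where every sub-model is pre-loaded and `fetch_fn` reads from local storage instead of the network.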
 
  • Like
  • Fire
  • Love
Reactions: 6 users
Top Bottom