BRN Discussion Ongoing

manny100

Regular
FPGAs are quite inefficient for a commercial product because the architecture is not optimized, and they include a lot of unutilized circuitry, which also makes them a more expensive solution.

The main use of FPGAs would be for proof of concept/demonstration.

An Akida-compatible FPGA would be one with an adequate number of appropriate components and function blocks to enable the required Akida NPUs to be configured (which is a circular definition).
I have the impression that Tony L is pushing Edge FPGAs for LLMs past demonstration into the realm of commercialisation.
It's evolving, so watch this space for announcements pre-AGM.
Could this be the team's new invention Tony L mentioned in a post pre-CES25?
Others are working on it too, e.g. Achronix data acceleration.
 
  • Like
Reactions: 7 users

yogi

Regular
Just a thought. I know it will be put down to NDAs, but so many of our so-called partners are talking about other tech, and it's been a while since any of them came out mentioning BrainChip. Like @MDhere said last week, we have too many bed partners. BrainChip is desired as a bed partner, but nobody wants to take it home bhahahaha
 
  • Like
Reactions: 3 users

manny100

Regular
Just a thought. I know it will be put down to NDAs, but so many of our so-called partners are talking about other tech, and it's been a while since any of them came out mentioning BrainChip. Like @MDhere said last week, we have too many bed partners. BrainChip is desired as a bed partner, but nobody wants to take it home bhahahaha
We have had a few lately: Onsor, QV cybersecurity, AI Labs (oil rigs), Frontgrade (space).
Positives on the US AFRL and the Bascom Hunter Navy transition.
The Tata gesture recognition patent is encouraging, plus their work on wearables for early detection and treatment of heart issues.
BeEmotion have software on the VVDN box, just waiting for a customer ann.
I may have forgotten one or two, but it's a start.
... and of course, what lies beneath via NDAs?
Agree, want more though.
 
  • Like
  • Fire
  • Love
Reactions: 22 users

DK6161

Regular
Today is going to be a great day.
At least 10% up pls 🙏
 
  • Like
  • Fire
  • Haha
Reactions: 5 users

Diogenese

Top 20
Regarding inefficiency: you're absolutely right, and even more so if we only consider energy-restricted devices or applications.

But the more I'm reading about current trends in the semiconductor space (especially chiplets, packaging memory on top of logic, etc.) and the increasing statements in industry interviews about the breakneck speed of AI/ML development in general and how fast new approaches/algorithms get established, the more I wonder whether we might see a transition period (on the inference side), at least in areas where energy consumption is not the primary concern (as long as we're not talking GPU levels) but flexibility (future-proofing) is. Let's say devices that are plugged in but might need updates for 10 years or so in an area of rapid change like AI/ML: mobile network base stations, industrial wireless communication systems, etc.

I might be totally off, but I could imagine that for custom-semi customers there might even be an FPGA chiplet option available in the future, e.g. integrated into an AMD/Intel CPU (maybe just as a safety net alternative to, or accompanying, additional tensor cores or whatever highly specialized accelerator flavor might be integrated in the future). So basically a trade-off of energy consumption and cost in favor of flexibility.

Edit - examples added:

FPGA as a chiplet:

Ultra low power FPGA (starting at 25 µW):
Not sure that "FPGA" and "chiplet" go in the same sentence. An FPGA needs a lot of spare circuit functions to make it useful. Chiplets are usually a single-purpose functional block.

That said, Akida is field-programmable, in that the configuration of the neural fabric (number of layers, NPUs per layer, interconnexion between NPUs of adjacent layers, long skip) can be changed by "programming" the interconnexion communication matrix/network bus.
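Purely as an illustration of the idea (the field names below are hypothetical, not the actual Akida configuration interface), reconfiguring such a fabric could be as simple as rewriting a small routing table:

```python
# Illustrative only: what "programming" a neural fabric's interconnect
# could look like from software. LayerConfig/FabricConfig and their
# fields are hypothetical, not the real Akida configuration API.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class LayerConfig:
    npus: int                      # NPUs allocated to this layer
    skip_to: Optional[int] = None  # optional long-skip target layer index

@dataclass
class FabricConfig:
    layers: List[LayerConfig] = field(default_factory=list)

    def routes(self) -> List[Tuple[int, int]]:
        """Derive the layer-to-layer routing table for the mesh/bus."""
        hops = [(i, i + 1) for i in range(len(self.layers) - 1)]
        skips = [(i, l.skip_to) for i, l in enumerate(self.layers)
                 if l.skip_to is not None]
        return hops + skips

# The same silicon, reconfigured in the field for a new network shape.
cfg = FabricConfig([LayerConfig(8), LayerConfig(16, skip_to=3),
                    LayerConfig(16), LayerConfig(4)])
print(cfg.routes())  # [(0, 1), (1, 2), (2, 3), (1, 3)]
```

The point of the sketch: only the routing table changes between network shapes; the NPUs themselves stay fixed in silicon.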
 
  • Like
  • Love
  • Fire
Reactions: 11 users

manny100

Regular
Not sure that "FPGA" and "chiplet" go in the same sentence. An FPGA needs a lot of spare circuit functions to make it useful. Chiplets are usually a single-purpose functional block.

That said, Akida is field-programmable, in that the configuration of the neural fabric (number of layers, NPUs per layer, interconnexion between NPUs of adjacent layers, long skip) can be changed by "programming" the interconnexion communication matrix/network bus.
The plan may be to use the hybrid AI that was talked about prior to CES 25?
See my earlier post.
That combines our Edge with minimal cloud.
Tony L and his team may have found a way to efficiently utilise FPGAs with TENNs using hybrid AI?
There is a lot of work on FPGAs at the Edge going on ATM.
 
  • Like
  • Fire
  • Love
Reactions: 4 users

7für7

Top 20
Mmhhhh I can smell some green today MF

 
  • Haha
  • Like
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Maybe my above question was not formulated clearly enough:

I would like to know which company's FPGA chips, aka hardware (e.g. Xilinx, Altera, Lattice Semiconductor, Microchip, ...), were used for the "software/algorithm/IP" demos (the ones that didn't run on an Akida 1000 or similar).

Hi CF,

Also way out of my depth on this one, but I have been hoping it could be Microchip's radiation-tolerant PolarFire® SoC FPGA.

I'm not on my PC ATM, and am not used to posting via my iPad, so hopefully the link below to a previous post from September 2024 works.

Microchip should very shortly be announcing the release of their higher-performance PIC64-HPSC processor - the GX1100 is expected to be available for sampling in March this year!

This new processor GX1100 is going to be boosted with new AI inferencing hardware and is compatible with the existing PolarFire FPGA. Obviously I’m holding out some hope that this is where we might fit into the whole NASA High Performance Spaceflight Computer, assuming AKIDA is the AI inferencing hardware (assuming being the operative word).

It makes sense to me that the FPGA should be radiation tolerant, given the interest coming from the space and defense sectors in AKIDA ATM.

 
  • Like
  • Love
  • Fire
Reactions: 25 users

Diogenese

Top 20
The plan may be to use the hybrid AI that was talked about prior to CES 25?
See my earlier post.
That combines our Edge with minimal cloud.
Tony L and his team may have found a way to efficiently utilise FPGAs with TENNs using hybrid AI?
There is a lot of work on FPGAs at the Edge going on ATM.
Hi Manny,

Yes. FPGAs may be useful for low-volume niche applications where speed/power efficiency do not need to be maximally optimized, but they are primarily a development enabler and proof-of-concept tool.

The flexibility, in-field upgradability, and interoperability of Field Programmable Gate Arrays (FPGAs), combined with low power, low latency, and parallel processing capabilities make them an essential tool for developers looking to overcome these challenges and optimize their Contextual Edge AI applications.

[Yes - everybody claims low latency/low power, but they are usually an order of magnitude or more worse than Akida's SNN, and worse again compared to TENNs]

Gate arrays have been around for donkey's years. Then came programmable gate arrays, which were set in stone once the configuration was programmed. Then came FPGAs, which allow the configuration to be changed via a programmable interconnexion network. Now we have AI-optimized FPGAs in which selectable NPUs are part of the FPGA. Most NPUs use MACs (a sort of electronic mathematical "abacus"). The Akida NPUs are spike/event-based, which utilizes sparsity more effectively than MACs.
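A toy illustration of that last point (plain arithmetic, not Akida internals): a dense MAC pipeline performs one multiply-accumulate per weight regardless of the input, while an event-driven unit only does work for the non-zero activations, so compute scales with activity rather than with layer width.

```python
# Toy comparison: dense MAC vs event-driven accumulation on a sparse input.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=256)
acts = rng.normal(size=256)
acts[rng.random(256) < 0.9] = 0.0   # ~90% sparse, typical of spiking activations

# Dense MAC: 256 multiply-accumulates regardless of sparsity.
mac_result = sum(w * a for w, a in zip(weights, acts))

# Event-driven: only "fire" on non-zero activations (~26 ops here).
events = [(i, a) for i, a in enumerate(acts) if a != 0.0]
event_result = sum(weights[i] * a for i, a in events)

assert np.isclose(mac_result, event_result)   # same answer, far fewer ops
print(f"dense ops: {len(acts)}, event ops: {len(events)}")
```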

But commercializing FPGAs is moving away from our stated primary business model of IP licensing. Don't get me wrong - I've always been an advocate for COTS Akida in silicon, but I don't see FPGAs as the mass-market commercial solution. Obviously I don't have Tony Lewis' expertise in the field, but was he talking about a COTS FPGA or a demonstrator?
 
  • Like
  • Fire
  • Love
Reactions: 13 users
Today is going to be a great day.
At least 10% up pls 🙏
Maybe once LDA have sold
 
  • Like
Reactions: 2 users

7für7

Top 20
It’s interesting that Germany closed up 5% yesterday, while Australia was down around 2%

🧐
 
  • Like
Reactions: 1 user

Diogenese

Top 20
The plan may be to use the hybrid AI that was talked about prior to CES 25?
See my earlier post.
That combines our Edge with minimal cloud.
Tony L and his team may have found a way to efficiently utilise FPGAs with TENNs using hybrid AI?
There is a lot of work on FPGAs at the Edge going on ATM.
Hi Manny,

RAG (retrieval-augmented generation) would be one area where cloud hybrid [as distinct from analog/digital hybrid] could be implemented. A cloud-based LLM could be subdivided into specific sub-topic models which could be downloaded as required.

For a cloud-free implementation, the main LLM would be stored in separate memory in the same device as the NN. A variation on this would be where the main LLM could be updated via the cloud.
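A minimal sketch of how such routing could work; every name here (TopicIndex, local_cache, answer) is purely hypothetical rather than any actual BrainChip API. The idea: embed the query, pick the nearest sub-topic model, and only hit the cloud on a cache miss.

```python
# Hypothetical cloud-hybrid routing: a small on-device index maps queries
# to sub-topic models; downloads happen only when the model isn't cached.
import numpy as np

class TopicIndex:
    """Maps a query embedding to the nearest sub-topic model by cosine similarity."""
    def __init__(self, topic_names, topic_embeddings):
        self.names = topic_names
        norms = np.linalg.norm(topic_embeddings, axis=1, keepdims=True)
        self.embeddings = topic_embeddings / norms   # unit rows: dot == cosine

    def route(self, query_embedding):
        q = query_embedding / np.linalg.norm(query_embedding)
        return self.names[int(np.argmax(self.embeddings @ q))]

local_cache = {}  # sub-models already downloaded to the device

def answer(query_embedding, index):
    topic = index.route(query_embedding)
    if topic not in local_cache:
        # Cloud round-trip only on a cache miss; afterwards inference is local.
        local_cache[topic] = f"<weights for {topic}>"  # placeholder download
    return f"run local inference with {local_cache[topic]}"

# Example: three sub-topic models, 4-d embeddings for brevity.
index = TopicIndex(["medical", "automotive", "finance"],
                   np.random.default_rng(0).normal(size=(3, 4)))
print(answer(np.array([0.1, 0.9, -0.2, 0.4]), index))
```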
 
  • Like
  • Fire
  • Love
Reactions: 6 users

Iseki

Regular
Maybe my above question was not formulated clearly enough:

I would like to know which company's FPGA chips, aka hardware (e.g. Xilinx, Altera, Lattice Semiconductor, Microchip, ...), were used for the "software/algorithm/IP" demos (the ones that didn't run on an Akida 1000 or similar).
I asked Tony but didn't get a response.
In the meantime, I see that Frontgrade produce a couple (Certus).
I'm not sure how many synapses we need to run Akida2.
 
  • Like
Reactions: 1 user

Doz

Regular
My guess for the supply of the FPGAs is QuickLogic, and maybe Edward got us a special deal.


 
  • Like
  • Fire
  • Thinking
Reactions: 14 users

Diogenese

Top 20
Regarding inefficiency: you're absolutely right, and even more so if we only consider energy-restricted devices or applications.

But the more I'm reading about current trends in the semiconductor space (especially chiplets, packaging memory on top of logic, etc.) and the increasing statements in industry interviews about the breakneck speed of AI/ML development in general and how fast new approaches/algorithms get established, the more I wonder whether we might see a transition period (on the inference side), at least in areas where energy consumption is not the primary concern (as long as we're not talking GPU levels) but flexibility (future-proofing) is. Let's say devices that are plugged in but might need updates for 10 years or so in an area of rapid change like AI/ML: mobile network base stations, industrial wireless communication systems, etc.

I might be totally off, but I could imagine that for custom-semi customers there might even be an FPGA chiplet option available in the future, e.g. integrated into an AMD/Intel CPU (maybe just as a safety net alternative to, or accompanying, additional tensor cores or whatever highly specialized accelerator flavor might be integrated in the future). So basically a trade-off of energy consumption and cost in favor of flexibility.

Edit - examples added:

FPGA as a chiplet:

Ultra low power FPGA (starting at 25 µW):
Achronix

Chiplets Built with Speedcore eFPGA IP​


Think about that ... you get a licence for the IP to make an eFPGA so you can make an inferior version of an SNN at great expense.

... and the Lettuce one ...

iCE40 LP/HX FPGAs can be used in countless ways to add differentiation to mobile products. Shown below are four of the most common iCE40 LP/HX design categories along with specific application examples.

Enhance Application Processor Connectivity​


Increase Battery Life by Offloading Timing Critical Functions​



Increase System Performance through Hardware Acceleration​

  • Reduce processor workload by pre-processing sensor data to generate nine-axis output
  • Rotate, combine and scale image data with efficient FPGA-based implementations
  • Use logic-based multipliers to implement high-performance digital signal filtering



Not sure you could use any of those to make an Akida SNN.
 
  • Like
  • Fire
  • Thinking
Reactions: 6 users

Diogenese

Top 20
  • Haha
  • Like
Reactions: 8 users

Guzzi62

Regular
Page 40:

The Role of Neuromorphic Chips

Neuromorphic chips represent an emerging technology that is designed to mimic the human brain's neural architecture. These chips are inherently efficient at processing sensory data in real time due to their event-driven nature. Therefore, they hold promise to advance edge AI based on a new wave of low-power solutions that will be handling complex tasks like pattern recognition or anomaly detection.

In the next few years, neuromorphic chips will become embedded in smartphones, enabling real-time AI capabilities without relying on the cloud. This will allow tasks like speech recognition, image processing, and adaptive learning to be performed locally on these devices with minimal power consumption. Companies like Intel and IBM are advancing neuromorphic chip designs (e.g., Loihi 2 [71] and TrueNorth [72], respectively) that consume 15–300 times less energy than traditional chips while at the same time delivering exceptional performance.

WTF!!!


Page 64:

Neuromorphic Chips + 6G + Quantum Computing

The long-term trajectory of neuromorphic computing extends beyond existing edge AI systems. The integration of neuromorphic AI with 6G networks and quantum computing is expected to enable ultra-low-latency, massively parallel processing at the edge. BrainChip's Akida processor and state-space models, such as Temporal Event-Based Neural Networks (TENNs), are early indicators of this direction, demonstrating the feasibility of lightweight, event-driven AI architectures for high-speed, real-time applications [120].

As scaling challenges are addressed, neuromorphic chips will move from niche applications to mainstream adoption, powering the next generation of autonomous machines, decentralized AI, and real-time adaptive systems. The future of edge AI will depend on how efficiently intelligence is deployed. Neuromorphic computing is positioned to make that shift possible.

Whole page 65 is about BrainChip:

EDGE AI INSIGHTS: BrainChip

New Approaches for GenAI Innovation at the Edge

Recent advances in LLM training algorithms, such as Deepseek's release of V3, have rattled the traditional AI marketplace and challenged the belief that large language models require high computational investments to achieve performant results. This demonstrates that trying a new approach can have a big impact on a key challenge the AI market faces: high computational power and the resultant costs to train and execute LLM models. New approaches must be considered to address similar challenges at the edge, such as the large compute and memory bandwidth requirements of transformer-based GenAI models that result in costly and power-hungry edge AI devices.

State Space Models: More Efficient than Transformers

State Space Models (SSMs), with high-performance models like Mamba, have emerged in the last two years in cloud-based LLM applications to address the high computational complexity and power required to execute transformers in data centers. Now, there is growing interest in using SSMs to implement LLMs at the edge and replace transformers, as they can achieve comparable performance with fewer parameters and less overall complexity.

Like transformers, SSMs can process long sequences (context windows). However, their complexity is on the order of the sequence length, O(L), compared to the order of the square of the sequence length, O(L²), for transformers, and with 1/3 as many parameters. Not to mention that SSM implementations are less costly and require less energy. These models leverage efficient algorithms that deliver comparable or even superior performance. BrainChip's unique approach is to constrain an SSM model to better fit physical time series or streaming data and achieve higher model accuracy and efficiency. BrainChip's innovation of SSM models constrained to streaming data is called Temporal Event-based Neural Networks, or TENNs. Combined with optimized LLM training, they pave the way for a new category of price- and performance-competitive LLM and VLM solutions at the edge.
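A minimal numerical sketch of that O(L) vs O(L²) claim, using toy dimensions and random placeholder matrices (nothing from the report or BrainChip's actual TENNs implementation): an SSM carries a fixed-size state forward one step per token, so the total cost grows linearly with sequence length, whereas attention compares every token with every other.

```python
# Toy SSM recurrence: one constant-size state update per token -> O(L),
# versus attention's L x L score matrix -> O(L^2). Random placeholders only.
import numpy as np

rng = np.random.default_rng(2)
L, d = 1000, 16                    # sequence length, state dimension
A = 0.9 * np.eye(d)                # state transition (stable toy choice)
B = 0.1 * rng.normal(size=(d, 1))  # input projection
C = rng.normal(size=(1, d))        # output projection
u = rng.normal(size=L)             # scalar input stream

x = np.zeros((d, 1))
ys = []
for t in range(L):                 # one fixed-cost update per step: O(L)
    x = A @ x + B * u[t]
    ys.append((C @ x).item())

print(f"SSM updates: {L}; attention score pairs would be: {L * L}")
```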

Deploying LLMs at the Edge: Efficiency and Scalability

BrainChip addresses the challenge of deploying LLMs at the edge by using SSMs that minimize computations, model size, and memory bandwidth while producing state-of-the-art (SOTA) accuracy and performance results to support applications like real-time translation, contextual voice commands, and complete LLM models with RAG extensions. BrainChip can condense the software model and the implementation into a tiny hardware design. A specialized LLM design can execute edge LLM inference in under a watt and for a few dollars using a dedicated IP core that can be integrated into the customer's SoCs. This enables a whole new class of consumer products that do not require costly cloud connectivity and services.

This ultra-low-power execution makes edge LLMs viable for always-on devices like smart assistants and wearables. Cloud LLM services are neither private nor personalized. A completely local edge AI design enables real-time GenAI capabilities without compromising privacy, ensuring users have greater control over their data and enabling a new class of personalization you can bring wherever you go. Emerging designs like BrainChip's Akida core offer a scalable and efficient solution for engineers and product developers who want to integrate advanced AI capabilities into private, personalized consumer products, including home, mobile, and wearable products.

I might have missed something, it was a quick skim over!

BrainChip is one of the many sponsors, and is therefore mentioned at the end.
 
  • Like
  • Fire
  • Love
Reactions: 22 users

manny100

Regular
Hi Manny,

RAG (retrieval-augmented generation) would be one area where cloud hybrid [as distinct from analog/digital hybrid] could be implemented. A cloud-based LLM could be subdivided into specific sub-topic models which could be downloaded as required.

For a cloud-free implementation, the main LLM would be stored in separate memory in the same device as the NN. A variation on this would be where the main LLM could be updated via the cloud.
Tony has something up his sleeve. Looks like we will have to wait and see. May well be using RAG.
Hi Manny,

Yes. FPGAs may be useful for low-volume niche applications where speed/power efficiency do not need to be maximally optimized, but they are primarily a development enabler and proof-of-concept tool.

The flexibility, in-field upgradability, and interoperability of Field Programmable Gate Arrays (FPGAs), combined with low power, low latency, and parallel processing capabilities make them an essential tool for developers looking to overcome these challenges and optimize their Contextual Edge AI applications.

[Yes - everybody claims low latency/low power, but they are usually an order of magnitude or more worse than Akida's SNN, and worse again compared to TENNs]

Gate arrays have been around for donkey's years. Then came programmable gate arrays, which were set in stone once the configuration was programmed. Then came FPGAs, which allow the configuration to be changed via a programmable interconnexion network. Now we have AI-optimized FPGAs in which selectable NPUs are part of the FPGA. Most NPUs use MACs (a sort of electronic mathematical "abacus"). The Akida NPUs are spike/event-based, which utilizes sparsity more effectively than MACs.

But commercializing FPGAs is moving away from our stated primary business model of IP licensing. Don't get me wrong - I've always been an advocate for COTS Akida in silicon, but I don't see FPGAs as the mass-market commercial solution. Obviously I don't have Tony Lewis' expertise in the field, but was he talking about a COTS FPGA or a demonstrator?
Ta, hopefully the expected tech advance ann will be a beauty. We should get the ann before the AGM.
Given all the client and tech news since September '24, I also expect we will see news from BRN concerning value prior to the AGM. This will also assist in getting support for the possible US move.
I also expect we will likely get some positive client news before the AGM.
Looking forward to April onwards.
 
  • Like
Reactions: 12 users
Comment from crapper

Coby at Weebit today in a webinar said they are talking with BrainChip on a few different projects...
Thought that was interesting.
 
  • Like
  • Fire
  • Thinking
Reactions: 41 users