BRN Discussion Ongoing

Diogenese

Top 20
Thanks FK,

The roadmap is chock full of groundbreaking advances. I felt that question time could have been better utilized by addressing the new opportunities these advances provide.
Akida GenAI & Akida 3 have been adapted to handle 16-bit integer and 32-bit FP. This, in addition to the malleable architecture, enables these two chips to be flexibly configured to handle all types of models and to be adapted for future applications.

The provision of a LUT in place of an activation function seems like a patentable idea if original. We are also told by JT that a patent application is in the pipeline for a new technique for retrieving data from memory. This is the most energy-intensive operation, so the invention will further improve power efficiency and probably reduce latency.
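The LUT idea is easy to sketch in code. Below is a toy illustration (my own, nothing from the roadmap) of replacing a transcendental activation, here a sigmoid purely as an example, with a precomputed table lookup, trading a little accuracy for much cheaper per-neuron math:

```python
import numpy as np

# Hypothetical sketch: tabulate an activation (sigmoid, as an example)
# over a fixed input range once, then replace the runtime transcendental
# with a nearest-entry table lookup.
TABLE_SIZE = 256
X_MIN, X_MAX = -8.0, 8.0

xs = np.linspace(X_MIN, X_MAX, TABLE_SIZE)
lut = 1.0 / (1.0 + np.exp(-xs))          # sigmoid, computed once

def lut_activation(x: np.ndarray) -> np.ndarray:
    """Approximate sigmoid via nearest-entry table lookup."""
    idx = ((x - X_MIN) / (X_MAX - X_MIN) * (TABLE_SIZE - 1)).round().astype(int)
    idx = np.clip(idx, 0, TABLE_SIZE - 1)
    return lut[idx]

print(lut_activation(np.array([-2.0, 0.0, 2.0])))
# close to sigmoid([-2, 0, 2]) ≈ [0.119, 0.5, 0.881]
```

With 256 entries over this range the worst-case error is well under 1%, and the table size/range are the obvious knobs to trade accuracy against memory.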

Akida 2 is 8 times more efficient than Akida 1, and presumably that also applies to GenAI & Akida 3 for equivalent Akida 1 tasks. However, 16-bit integer and 32-bit FP seem to provide excessive capabilities for an edge device. Does Nvidia need to look over its shoulder "like one that on a lonesome road doth walk in fear and dread, and having once turned round, walks on, and turns no more his head, because he knows that close behind a frightful fiend doth tread"?
 
  • Like
  • Fire
  • Love
Reactions: 41 users

Go Sean …. !

 
  • Like
  • Fire
Reactions: 20 users

TECH

Regular
Akida GenAI & Akida 3 have been adapted to handle 16-bit integer and 32-bit FP. This, in addition to the malleable architecture, enables these two chips to be flexibly configured to handle all types of models and to be adapted for future applications.

The provision of a LUT in place of an activation function seems like a patentable idea if original. We are also told by JT that a patent application is in the pipeline for a new technique for retrieving data from memory. This is the most energy-intensive operation, so the invention will further improve power efficiency and probably reduce latency.

Akida 2 is 8 times more efficient than Akida 1, and presumably that also applies to GenAI & Akida 3 for equivalent Akida 1 tasks. However, 16-bit integer and 32-bit FP seem to provide excessive capabilities for an edge device. Does Nvidia need to look over its shoulder "like one that on a lonesome road doth walk in fear and dread, and having once turned round, walks on and turns no more his head, because he knows that close behind a frightful fiend doth tread"?

Hi Dio,

We at BrainChip are at an amazing point in our development, it's super exciting!

You, being a retired engineer, can appreciate the amount of work that's been quietly going on behind the scenes over the last 12 months. We all thought no more doors could possibly open up to this technology, but this latest news out of the engineering department dispels that idea. Is it just me, or do you and others think that Nvidia is just plain stubborn in its approach to our architecture? (if that makes sense)

Is the gap potentially going to widen after the release of our advanced Akida suite of offerings?

Our solid roadmap will definitely get the attention of all our potential competition. Like many here, I'm hoping to see us succeed on our own two feet for a few years before considering an offer, if one happened to surface at some future point.

Regards....Tech
 
  • Like
  • Fire
  • Thinking
Reactions: 16 users

MDhere

Top 20
Today's price of .24c x 10 would be where we need to be, at the very minimum, before we enter any redomicile.

.24 x 10 would be where we need to be for shareholders, for all shareholders who experienced the Mercedes hype to feel better.

I am hoping one or two million in bookings brings what is expected: a mountain of revenue and positive flow-on.

Bring on zero-shot learning after that and we are on a winner for the long run!

But bring on the .24 x 10 first, then we can get back to where we were.

I'm encouraged by the roadmap, but more so by Sean acknowledging that he needs to deliver on the promise of "watch the financials".
 
  • Like
  • Fire
  • Thinking
Reactions: 16 users
Something’s wrong, we’re green on a Friday 😂
 
  • Like
  • Haha
  • Fire
Reactions: 12 users
For those who missed the board meeting, here’s an important update:


In the past, BrainChip focused on pursuing large deals with major corporations, essentially targeting “big whale” contracts that promised substantial immediate revenue. However, this approach has faced challenges due to the slow decision-making processes typical of large enterprises. (That's what they said; my personal opinion is that BrainChip couldn't offer enough features with the old Akida.)


The company has now shifted its strategy to focus on signing deals of all sizes, with an emphasis on faster execution. This new approach aligns with market demand, as many clients today are willing to pay quickly in order to get working solutions delivered without delay. This has already proven to be the right move, as seen with recent deals involving Onsor, Frontgrade Gaisler, and others.

With the integration of state space model use cases, BrainChip is well-positioned to see a significant uptick in deal volume this year. While individual deal values may be smaller, the ability to deliver repeatable solutions across a niche can generate strong cumulative returns.


Additionally, since many modern products already use state space models (often implicitly), combining them with Akida’s spiking neural network (SNN) and TENNs technology enables ultra-low power consumption. This gives BrainChip a major competitive edge and positions it to quickly dominate the edge AI and IoT device markets.
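For anyone unfamiliar with the term, a state space model at its core is just a cheap recurrence over a small fixed-size hidden state, which is what makes it attractive for low-power streaming inference. A toy sketch (illustrative matrices only, nothing from TENNs or Akida):

```python
import numpy as np

# Minimal linear, discrete-time state space model recurrence, the basic
# building block behind SSM-style sequence layers. All values are toys.
def ssm_step(A, B, C, x, u):
    """One recurrent step: x' = A x + B u ; y = C x'."""
    x = A @ x + B @ u
    return x, C @ x

rng = np.random.default_rng(0)
d_state, d_in = 4, 1
A = 0.9 * np.eye(d_state)                 # stable state transition
B = rng.normal(size=(d_state, d_in))
C = rng.normal(size=(1, d_state))

x = np.zeros(d_state)                     # state stays size 4 forever
ys = []
for t in range(5):                        # stream a short input sequence
    u = np.array([float(t)])
    x, y = ssm_step(A, B, C, x, u)
    ys.append(float(y[0]))
print(ys)
```

The point to notice is that memory and compute per step are constant regardless of sequence length, unlike attention, which is the usual argument for SSMs at the edge.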
 
  • Like
  • Love
Reactions: 20 users

Diogenese

Top 20
Hi Dio,

We at BrainChip are at an amazing point in our development, it's super exciting!

You, being a retired engineer, can appreciate the amount of work that's been quietly going on behind the scenes over the last 12 months. We all thought no more doors could possibly open up to this technology, but this latest news out of the engineering department dispels that idea. Is it just me, or do you and others think that Nvidia is just plain stubborn in its approach to our architecture? (if that makes sense)

Is the gap potentially going to widen after the release of our advanced Akida suite of offerings?

Our solid roadmap will definitely get the attention of all our potential competition. Like many here, I'm hoping to see us succeed on our own two feet for a few years before considering an offer, if one happened to surface at some future point.

Regards....Tech
Hi Tech,

I think that the Akida 3/GenAI adaptability, and the capability to handle any model or to pass incompatible models to the CPU, would be a major advantage in the cloud. I still see these as working as a coprocessor with a CPU/GPU, because there will still be a need to run software. However, using SSMs (state space models) like TENNs in the cloud has the potential to substantially reduce cooling power requirements.

Qualcomm's Hexagon is designed to split AI workloads between CPU/GPU/NPU on the basis of workload size:

https://www.qualcomm.com/content/da...I-with-an-NPU-and-heterogeneous-computing.pdf

A personal assistant that offers a natural voice user interface (UI) to improve productivity and enhance user experiences is expected to be a popular generative AI application. The speech recognition, LLM, and speech models must all run with some concurrency, so it is desirable to split the models between the NPU, GPU, CPU, and the sensor processor. For PCs, agents are expected to run pervasively (always-on), so as much of it as possible should run on the NPU for performance and power efficiency.
...

7.1 The processors of the Qualcomm AI Engine

Our latest Hexagon NPU offers significant improvements for generative AI, delivering 98% faster performance and 40% improved performance per watt. It includes micro-architecture upgrades, enhanced micro-tile inferencing, reduced memory bandwidth, and a dedicated power rail for optimal performance and efficiency. These enhancements, along with INT4 hardware acceleration, make the Hexagon NPU the leading processor for on-device AI inferencing.

The Adreno GPU, besides being the powerhouse engine behind high-performance graphics and rich user experiences with low power consumption, is designed for parallel processing AI in high precision formats, supporting 32-bit floating point (FP32), 16-bit floating point (FP16), and 8-bit integer (INT8). The upgraded Adreno GPU in Snapdragon 8 Gen 3 yields 25% improved GPU power efficiency, enhanced AI, gaming, and streaming. Llama 2-7B can generate more than 13 tokens per second on the Adreno GPU.

...

As previously mentioned, most generative AI use cases can be categorized into on-demand, sustained, or pervasive. For on-demand applications, latency is the KPI since users do not want to wait. When these applications use small models, the CPU is usually the right choice. When models get bigger (e.g., billions of parameters), the GPU and NPU tend to be more appropriate. For sustained and pervasive use cases, in which battery life is vital and power efficiency is the critical factor, the NPU is the best option.

...

As mentioned in the prior section, CPUs can perform well for low-compute AI workloads that require low latency.

...


Note that the Adreno GPU handles 32-bit FP down to INT8 for high precision. Akida 3 will also have 32-bit FP, but with INT16.
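To see why INT16 support matters relative to INT8, here is a generic quantization sketch (not Akida- or Adreno-specific code) comparing the round-trip error of symmetric quantization at the two bit widths:

```python
import numpy as np

# Generic symmetric quantize-then-dequantize, to illustrate how round-trip
# error shrinks as integer bit width grows. Purely illustrative math.
def quantize(w: np.ndarray, bits: int) -> np.ndarray:
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for INT8, 32767 for INT16
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale) * scale    # quantize, then dequantize

rng = np.random.default_rng(1)
w = rng.normal(size=10_000)               # stand-in for a weight tensor

for bits in (8, 16):
    err = np.max(np.abs(quantize(w, bits) - w))
    print(f"INT{bits} max error: {err:.2e}")
```

Each extra bit roughly halves the worst-case error, so INT16 gives headroom for precision-sensitive layers that INT8 can't match.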

Qualcomm recommends the CPU for low-compute AI workloads!!???
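The routing rule the Qualcomm paper describes can be caricatured in a few lines; the threshold and labels below are my own illustrative assumptions, not Qualcomm's:

```python
# Toy sketch of the dispatch heuristic described in the Qualcomm paper:
# route a workload to CPU, GPU, or NPU based on model size and whether the
# use case is latency-bound (on-demand) or power-bound (sustained/pervasive).
# The 100M-parameter cutoff is an invented illustrative number.
def pick_processor(params: int, use_case: str) -> str:
    if use_case in ("sustained", "pervasive"):
        return "NPU"                      # power efficiency dominates
    # on-demand: latency is the KPI
    if params < 100_000_000:              # small model: CPU wins on latency
        return "CPU"
    return "NPU or GPU"                   # billions of parameters

print(pick_processor(10_000_000, "on-demand"))     # CPU
print(pick_processor(7_000_000_000, "on-demand"))  # NPU or GPU
print(pick_processor(7_000_000_000, "pervasive"))  # NPU
```

Which is exactly the oddity Diogenese flags: under this scheme the CPU, not the NPU, catches the small latency-critical models.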

Looking forward to the Hexagon/Akida 3 High Noon!
 
  • Like
  • Fire
  • Love
Reactions: 12 users
 

jrp173

Regular
Today's price of .24c x 10 would be where we need to be, at the very minimum, before we enter any redomicile.

.24 x 10 would be where we need to be for shareholders, for all shareholders who experienced the Mercedes hype to feel better.

I am hoping one or two million in bookings brings what is expected: a mountain of revenue and positive flow-on.

Bring on zero-shot learning after that and we are on a winner for the long run!

But bring on the .24 x 10 first, then we can get back to where we were.

I'm encouraged by the roadmap, but more so by Sean acknowledging that he needs to deliver on the promise of "watch the financials".

MDhere, what do you mean by "I am hoping 1 or two million bookings brings what a mountain of revenue and positive onflow which is expected"?

Are you hoping that one or two million dollars of bookings is going to bring in a mountain of revenue, or are you hoping we are going to get one or two million individual bookings?
 
  • Haha
Reactions: 1 users
Talks a fair bit about AX45MP

 
  • Like
Reactions: 2 users
MDhere, what do you mean by "I am hoping 1 or two million bookings brings what a mountain of revenue and positive onflow which is expected"?

Are you hoping that one or two million dollars of bookings is going to bring in a mountain of revenue, or are you hoping we are going to get one or two million individual bookings?
The latter sounds better 👍
 
  • Fire
  • Haha
Reactions: 3 users

Drewski

Regular
Today's price of .24c x 10 would be where we need to be, at the very minimum, before we enter any redomicile.

.24 x 10 would be where we need to be for shareholders, for all shareholders who experienced the Mercedes hype to feel better.

I am hoping one or two million in bookings brings what is expected: a mountain of revenue and positive flow-on.

Bring on zero-shot learning after that and we are on a winner for the long run!

But bring on the .24 x 10 first, then we can get back to where we were.

I'm encouraged by the roadmap, but more so by Sean acknowledging that he needs to deliver on the promise of "watch the financials".
One might say, the prescription and the medicine.
 
  • Like
Reactions: 1 users

MDhere

Top 20
MDhere, what do you mean by "I am hoping 1 or two million bookings brings what a mountain of revenue and positive onflow which is expected"?

Are you hoping that one or two million dollars of bookings is going to bring in a mountain of revenue, or are you hoping we are going to get one or two million individual bookings?
I look at the lowest point of the needle; then that needle moves up, as Sean mentioned, to USD 9 million.
 

Rach2512

Regular
Mentions neuromorphic processors at the 55-second mark.

 
  • Fire
  • Like
  • Love
Reactions: 5 users

mcm

Regular
Akida GenAI & Akida 3 have been adapted to handle 16-bit integer and 32-bit FP. This, in addition to the malleable architecture, enables these two chips to be flexibly configured to handle all types of models and to be adapted for future applications.

The provision of a LUT in place of an activation function seems like a patentable idea if original. We are also told by JT that a patent application is in the pipeline for a new technique for retrieving data from memory. This is the most energy-intensive operation, so the invention will further improve power efficiency and probably reduce latency.

Akida 2 is 8 times more efficient than Akida 1, and presumably that also applies to GenAI & Akida 3 for equivalent Akida 1 tasks. However, 16-bit integer and 32-bit FP seem to provide excessive capabilities for an edge device. Does Nvidia need to look over its shoulder "like one that on a lonesome road doth walk in fear and dread, and having once turned round, walks on, and turns no more his head, because he knows that close behind a frightful fiend doth tread"?
Hey Diogenese,

Have you had a chance to look at what Nanovue is offering and whether or not it represents any serious competition to Akida? This is an interview the NVU CEO did with Stocks Down Under very recently in which he says "Having a very computational efficient processor that takes up very little space is very key in things like wearable glasses, medical devices, drones, putting it into cell phones ... anything that requires good battery management, also high computational power. EMASS right now benchmarks against the best of the best in those regards.":
It's the closest-sounding tech to Akida I've come across; however, I'm not tech-savvy enough to know how close it really is.
Cheers,
mcm
 
  • Like
  • Love
  • Wow
Reactions: 5 users