BRN Discussion Ongoing

7für7

Top 20
  • Haha
Reactions: 6 users

TECH

Regular
Agree,

It looks like there will be no Akida IP silicon in ADAS in the near future, and Akida simulation software appears to be impractical for EVs. As DB says, there's always hope it can make an appearance in infotainment.

However, the automotive NN market is dwarfed by the cybersecurity NN market. This is something I've dreamt of since the DUTH days, although I would like to see Akida in USB format for the retro-fit market.

I do think it is becoming clearer day by day that the IP-only strategy was a strategic mistake. The barriers to entry were just too great, the time-to-market too long, and the customer base too small, particularly given customers' sunk-cost bias.

Hindsight, spilt milk, water under the bridge, missed opportunities - we're where we are, and the mis-steps are behind us - now we've learnt the foxtrot, we're ready to tango!

Dio, you have posted at least twice about your dissatisfaction when Sean, not long after his appointment, convinced the Board to abandon the AKD 1000 NSoC approach we all thought was our next phase. I fully understand the reasoning behind this (monetary) decision, but I have been rather outspoken about Sean's "too narrow" comments regarding our brilliant first-run chip. I personally raised this with both Peter and Anil, as I felt it was disrespectful, and Sean has watered down his comments numerous times since.

Many shareholders will have noticed the traction AKD 1000 has gained: it's referenced, photographed, researched and published across the tech press.

You, I and a number of others familiar with the IP approach understand the journey is longer and harder: we are not only selling the technology in "blocks", we are telling clients, "you go away, secure a fab and take all the financial risk". It's not an easy sell, especially amid the current worldwide financial instability.

Personally, with Sean into his fourth year of a five-year business plan, I can't see us changing direction, but I do currently see a few alternate paths being tested.

The key thing for me with our company is that major tier-1 companies continue to engage, partner, collaborate and generally praise our technology. That's the hardest hurdle, and we have cleared it with flying colours!

Yes, it's painfully slow, but my belief and loyalty are as strong as they were back in late 2015.

❤️ Akida Tech.
 
  • Like
  • Love
Reactions: 33 users

Meatloaf

Regular
Even better would be for Intel to buy us out at US$5 a share. They would then become a behemoth, and each and every one of us would have at least tripled our money. Some of us would have made 248x our investment, and that's better than nothing :)
Make it $10 and you’ve got a deal😊
 
  • Like
  • Haha
  • Love
Reactions: 15 users
Time is ticking, and fast. Like probably many other shareholders, I'm waiting for one decent contract in the next 2-3 months for Sean to save his job. If not, he should be...

 
  • Like
  • Fire
  • Haha
Reactions: 8 users
The recent upgrade to the BRN website, adding a detailed list of where Pico will be going and into what products, tells me they are about to sign a deal very soon.
Go BrainChip!
 
  • Like
  • Fire
Reactions: 9 users

Tothemoon24

Top 20
The good Doctor is an absolute ⭐





DeepSeek R1's breakthrough results are a boon for BrainChip and the extreme-edge AI industry. Here's how it will impact inference at the extreme edge:

DeepSeek has shattered the "No one ever got fired for going with IBM" mentality, liberating customers from the behemoths' models. This paves the way for a Cambrian explosion of specialized LLMs delivered by smaller companies. It's akin to breaking the 4-minute mile – now that it's been done, we're poised to witness an explosion of innovation. This is excellent news for smaller companies like BrainChip.

Now imagine AI models running on whispers of energy, delivering powerful performance without massive data centers or cloud connectivity. This is the promise of extreme-edge AI, which is BrainChip's sweet spot.

While DeepSeek is an impressive transformer-based model, TENNs technology, based on State Space Models (SSMs), currently offers a superior solution for extreme-edge applications:

1. No explosive KV cache: Unlike traditional transformer models that rely on rapidly expanding key-value (KV) caches, TENNs sidesteps this issue with fixed memory requirements, regardless of input length. This fundamental property enables LLMs at the extreme edge.

2. Competitive Performance: Preliminary experiments pitting the DeepSeek 1.5B model against the TENNs 1.2B model on various tasks, such as trip planning and simple programming, showed comparable or slightly better results for TENNs.

3. Extremely Low Training Costs: BrainChip's focus on small specialized models means training costs are less than a premium economy flight from Los Angeles to Sydney.

DeepSeek's success highlights areas where TENNs can be further enhanced. We can leverage many of the tricks learned from DeepSeek to improve the TENNs LLM even further.

The future of extreme-edge AI is bright, with DeepSeek demonstrating that small companies can compete effectively in this space.
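
For a concrete picture of point 1 above: a state-space layer carries a fixed-size hidden state through a recurrence instead of attending over an ever-growing cache. A minimal toy sketch of such a recurrence (illustrative only, with made-up coefficients; not the actual TENNs implementation):

Python:
# Toy linear state-space recurrence: h_t = A*h_{t-1} + B*x_t, y_t = C*h_t.
# The carried state h has a fixed size, so memory stays constant no matter
# how many tokens stream through -- the property that matters at the edge.
STATE_DIM = 4
A = [0.9] * STATE_DIM   # per-channel decay (assumed toy values)
B = [0.1] * STATE_DIM   # input projection
C = [1.0] * STATE_DIM   # output projection

def step(h, x):
    h = [a * hi + b * x for a, b, hi in zip(A, B, h)]
    y = sum(c * hi for c, hi in zip(C, h))
    return h, y

h = [0.0] * STATE_DIM   # the only memory carried between tokens
for t, x in enumerate([0.5, -1.0, 0.25, 2.0]):   # a tiny input stream
    h, y = step(h, x)
    print(f"t={t}: y={y:+.3f}, state is still {len(h)} floats")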
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 92 users

IloveLamp

Top 20


That's what I thought I knew but wanted to hear. You're a keeper, Tony. Buy that man a beer.

Bring on 2025. The instos are coming for your shares. MAKE THEM PAY THROUGH THE NOSE.

imo dyor
 
  • Like
  • Love
  • Fire
Reactions: 44 users

Diogenese

Top 20
"equivalent to breaking the 4 minute mile" = Nvidia sliding down the Bannister, while Akida takes the express lift (= elevator in USese) to the top Landying.
 
Last edited:
  • Like
  • Fire
  • Haha
Reactions: 25 users

Diogenese

Top 20
... and I've still got 2 wishes left!
 
  • Like
  • Haha
Reactions: 7 users


First port of call for Utae should be to conduct an internal audit on the whereabouts of BrainChip's revenue landing date ……!
 
  • Like
  • Fire
  • Thinking
Reactions: 7 users

Diogenese

Top 20


The A.I. model wars are intensifying, with two new Chinese ones introduced in the last several hours.

One by Alibaba no less..

Thanks DB,

I mistook the Alibaba patent for DeepSeek's because it had a "Wenfeng" as inventor.

CN118798303A - Large language model training method, question and answer method, equipment, medium and product (Patent Translate, 2024-09-13)

ALIYUN FEITIAN HANGZHOU CLOUD COMPUTING TECH CO LTD

Inventors: FENG WENFENG; ZHANG YUEWEI; ZENG ZHENYU

The invention provides a large language model training method, a question-and-answer method, equipment, a medium and a product, relating to the technical field of artificial intelligence. The training method comprises the following steps: obtaining long-text training data, the sequence length of which is greater than the maximum input text sequence length of a pre-trained large language model; increasing the rotation-angle base number of the rotation position code of the pre-trained large language model to obtain a modified pre-trained model; and training the modified model with the long-text training data to obtain a trained large language model. By acquiring long-text training data and increasing the base number of the rotation angles of the rotation position codes, the length of the input text sequence is extended, so the trained model can process long text sequences; the completeness and accuracy of the model's answers to questions that depend on long texts and multi-document comparison are thereby improved.
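
That "increasing a rotation angle base number of a rotation position code" reads like the familiar trick of raising the RoPE (rotary position embedding) base to stretch the usable context window. A toy illustration of how a larger base slows the per-position rotation; the dimensions and bases below are assumptions for illustration, not Alibaba's actual parameters:

Python:
# RoPE assigns position p, dimension pair d the angle p / base**(2d/D).
# Raising the base slows the rotation, so longer sequences fit before
# the angles wrap around.

def rope_angles(position, dim=8, base=10_000.0):
    # One rotation angle per pair of embedding dimensions.
    return [position / base ** (2 * d / dim) for d in range(dim // 2)]

for base in (10_000.0, 1_000_000.0):   # 1e6 = an assumed "increased" base
    angles = rope_angles(position=32_768, base=base)
    print(f"base={base:>9,.0f}: " + ", ".join(f"{a:9.2f}" for a in angles))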
 
  • Like
  • Fire
Reactions: 6 users

Diogenese

Top 20
I agree Dio, but I think Sean reiterated in that recent podcast interview that we are (still) an IP company. Someone correct me if my memory is not serving me well (I couldn't be bothered listening to it again 😬)
Yes - he did say that, but there are several Akida 1 SoC products.

PCIe, M.2, Raspberry Pi, Edge Box, Edge Server, Bascom Hunter 3U VPX SNAP card, ...

I wonder where the chips are coming from?
 
  • Like
  • Thinking
  • Fire
Reactions: 12 users

rgupta

Regular
I wonder where the products are selling; there's no sales revenue either.
Jokes apart, I assume the AKD1000 was fabricated based on demand-related orders, and there's the possibility the company can order more chips from TSMC.
 
  • Like
Reactions: 3 users

AusEire

Founding Member. It's ok to say No to Dot Joining
  • Haha
Reactions: 1 users

AusEire

Founding Member. It's ok to say No to Dot Joining
Can we ensure it's not VB?
 
  • Haha
Reactions: 3 users

JDelekto

Regular
This has me excited:

1. No explosive KV cache: Unlike traditional transformer models that rely on rapidly expanding key-value (KV) caches, TENNs sidesteps this issue with fixed memory requirements, regardless of input length. This fundamental property enables LLMs at the extreme edge.

If I understand this correctly, it seems to have a fixed memory requirement regardless of the context length. I notice that when using local inferencing with models that support up to a 128k context window, the smaller the context window I choose, the less memory it consumes (in addition to the memory used by the model data itself).

Removing the context-window limit altogether is the holy grail for LLMs. You could feed the model volumes of information, giving it more and more context for a more accurate response to your query.
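
A back-of-the-envelope sketch of why that matters: a transformer's KV cache grows linearly with sequence length, while an SSM-style layer keeps a fixed state. All the sizes below are assumed, illustrative values, not published TENNs or DeepSeek figures:

Python:
def kv_cache_bytes(n_tokens, n_layers=24, n_heads=16, head_dim=64,
                   bytes_per_value=2):
    # One key vector and one value vector per head, per layer, per token.
    return n_tokens * n_layers * n_heads * head_dim * 2 * bytes_per_value

def ssm_state_bytes(n_layers=24, d_model=1024, d_state=16, bytes_per_value=2):
    # A fixed-size hidden state per layer, independent of sequence length.
    return n_layers * d_model * d_state * bytes_per_value

for tokens in (1_000, 10_000, 100_000):
    kv = kv_cache_bytes(tokens) / 2**20
    ssm = ssm_state_bytes() / 2**20
    print(f"{tokens:>7} tokens: KV cache ~{kv:7.0f} MiB | SSM state ~{ssm:.2f} MiB")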

To put it into perspective, a token is about 0.75 words, so a 128k context window holds roughly 96,000 words. Novels average around 60,000 to 100,000 words, with "Harry Potter and the Sorcerer's Stone" at about 77,000 words.
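
A quick check of that arithmetic; the 0.75 words-per-token figure is a rough rule of thumb that varies by tokenizer and text:

Python:
WORDS_PER_TOKEN = 0.75   # assumption, not an exact constant

for context_tokens in (8_192, 32_768, 131_072):
    words = int(context_tokens * WORDS_PER_TOKEN)
    print(f"{context_tokens:>7}-token window ≈ {words:,} words")
# The 131,072-token (128k) case comes out near 98,000 words --
# about one "Sorcerer's Stone" plus change.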

Probably an oversimplification, but imagine stuffing a model up front with the Harry Potter novel and being able to ask it to summarize, look for specific parts of the text, create character biographies, write a new imaginative piece using the same writing style, isolate heroes vs. villains, etc.

Now consider an alternate route: filling the context with a programming language, examples, and the documentation for some third-party APIs, then asking the model to write code that solves a specific problem using all those tools.

If they can do extreme RAG (Retrieval-Augmented Generation, where current data is retrieved and used to give the model context) in memory-constrained edge scenarios, that is a boon for TENNs. It would be an additional advantage when performing accurate, up-to-date inferencing, in conjunction with Akida's ability to update its model as it learns from new sensor input.
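
For anyone curious what that retrieval step looks like, here's a minimal, self-contained sketch of the RAG loop: embed the query, rank stored passages by similarity, and prepend the winners to the prompt. The embed() function is a deliberately crude stand-in for a real embedding model, the example documents are made up, and nothing here is a BrainChip API:

Python:
import math

def embed(text):
    # Bag-of-characters "embedding", just to make the sketch self-contained.
    vec = [0.0] * 64
    for ch in text.lower():
        vec[ord(ch) % 64] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query, documents, k=2):
    # Rank documents by cosine similarity to the query (vectors are unit-norm).
    q = embed(query)
    scored = sorted(documents,
                    key=lambda d: sum(a * b for a, b in zip(q, embed(d))),
                    reverse=True)
    return scored[:k]

def rag_prompt(query, documents):
    # Prepend the top-k retrieved passages as context for the generator.
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = ["Akida supports on-device learning from new sensor input.",
        "TENNs carries a fixed-size state during inference.",
        "The 4-minute mile was first broken in 1954."]
print(rag_prompt("How does TENNs manage memory?", docs))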
 
  • Like
  • Fire
  • Love
Reactions: 38 users

Schwale

Regular
Thank you. This is an extremely informative post. It summarises how the process works and how components feed into one another.
 
  • Like
Reactions: 4 users
The only thing that would get me excited now is BrainChip actually selling their product and making some explosive revenue. NVIDIA would never have thought something like this could've happened, but it did. Imagine waking up one morning and finding out that the PRC has managed to develop and sell some type of device with on-chip learning, independent of the cloud. Don't think that because it's Chinese people won't buy it if it's at the right price..... and there's no security concern either, because it doesn't need to be connected to the cloud....
Tony needs to give Sean and the rest of the selling machine a good kick up the arse.
 
  • Like
  • Love
Reactions: 7 users

charles2

Regular
  • Like
  • Fire
Reactions: 4 users