Actually I was walking into my garage and saw my grey Aston Martin ….
Jacket I bought in a second-hand shop
Agreed.
It looks like there will be no Akida IP silicon in ADAS in the near future, and Akida simulation software appears to be impractical for EVs. As DB says, there's always hope it can make an appearance in infotainment.
However, the automotive NN market is dwarfed by the cybersecurity NN market. This is something I've dreamt of since the DUTH days, although I would like to see Akida in USB format for the retro-fit market.
I do think it is becoming clearer day by day that the IP-only strategy was a strategic mistake. The barriers to entry were just too great, the time-to-market too long, and the customer base too small, particularly given customers' sunk-cost bias.
Hindsight, spilt milk, water under the bridge, missed opportunities - we're where we are, and the missteps are behind us. Now we've learnt the foxtrot, we're ready to tango!
Make it $10 and you've got a deal
Even better would be for Intel to buy us out at US$5/share. They will then become a behemoth, and each and every one of us will have at least tripled our money. Some of us will have made 248x our investment, and that's better than nothing.
Time is ticking, and fast; like probably many other shareholders, I am waiting for this one decent contract in the next 2-3 months for Sean to save his job. If not, he should be...
Dio, you have at least twice posted about your dissatisfaction with Sean convincing the Board, not long after his appointment, to abandon the AKD 1000 NSoC approach we all thought was our next phase. I fully understand the reasoning behind this (monetary) decision, and I have been rather outspoken about Sean's comments that our brilliant first-run chip was "too narrow". I have personally mentioned this to both Peter and Anil, as I felt it was disrespectful, and Sean has since watered down his comments numerous times.
Many shareholders will have noticed the traction that AKD 1000 has gained; it's referenced, photographed, researched and published in tech articles everywhere.
You, I and a number of others familiar with the IP approach understand the journey is longer and harder: we are not only trying to sell the technology in "blocks", we are telling clients, "You go away, secure a fab and take all the financial risk." It's not an easy sell, especially amid the current worldwide financial instability.
Personally, with Sean into his 4th year of a 5-year business plan, I can't see us changing direction, but I do currently see a few alternate paths being tested.
The key thing for me with our company is that major Tier 1 companies continue to engage, partner, collaborate and generally praise our technology. That's the hardest hurdle, and we have cleared it with flying colours!
Yes, it's painfully slow, but my belief and loyalty are as strong as they were back in late 2015.
Akida Tech.
The recent upgrade of the BRN website to include a detailed list of where Pico will be going and into what products tells me they are about to sign a deal very soon.
Time is ticking, and fast; like probably many other shareholders, I am waiting for this one decent contract in the next 2-3 months for Sean to save his job. If not, he should be...
That's what I thought I knew but wanted to hear. You're a keeper, Tony; buy that man a beer.
DeepSeek R1's breakthrough results are a boon for BrainChip and the extreme-edge AI industry. Here's how it will impact inference at the extreme edge:
DeepSeek has shattered the "No one ever got fired for going with IBM" mentality, liberating customers from the behemoths' models. This paves the way for a Cambrian explosion of specialized LLMs delivered by smaller companies. It's akin to breaking the 4-minute mile – now that it's been done, we're poised to witness an explosion of innovation. This is excellent news for smaller companies like BrainChip.
Now imagine AI models running on whispers of energy, delivering powerful performance without massive data centers or cloud connectivity. This is the promise of extreme-edge AI, which is BrainChip's sweet spot.
While DeepSeek is an impressive transformer-based model, TENNs technology, based on State Space Models (SSMs), currently offers a superior solution for extreme-edge applications:
1. No explosive KV cache: Unlike traditional transformer models that rely on rapidly expanding key-value (KV) caches, TENNs sidesteps this issue with fixed memory requirements, regardless of input length. This fundamental property enables LLMs at the extreme edge (a rough memory comparison is sketched just after this list).
2. Competitive Performance: Preliminary experiments pitting the DeepSeek 1.5B model against the TENNs 1.2B model on various tasks, such as trip planning and simple programming, showed comparable or slightly better results for TENNs.
3. Extremely Low Training Costs: BrainChip's focus on small specialized models means training costs are less than a premium economy flight from Los Angeles to Sydney.
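For anyone wondering what "no explosive KV cache" means in concrete terms, here's a minimal back-of-envelope sketch. It's my own illustration: the layer counts, head sizes and state dimensions below are assumptions, not TENNs' or DeepSeek's real configurations. The point is just the scaling behaviour: a transformer's KV cache grows linearly with input length, while an SSM-style recurrent state stays constant.

```python
# Minimal sketch: transformer KV-cache memory vs. a fixed SSM-style state.
# All model dimensions are illustrative assumptions, not real configs.

def kv_cache_bytes(seq_len: int, n_layers: int = 24, n_heads: int = 16,
                   head_dim: int = 64, bytes_per_val: int = 2) -> int:
    """One key and one value vector per token, per head, per layer (fp16)."""
    return seq_len * n_layers * n_heads * head_dim * 2 * bytes_per_val

def ssm_state_bytes(n_layers: int = 24, state_dim: int = 1024,
                    bytes_per_val: int = 2) -> int:
    """A fixed-size recurrent state per layer, independent of input length."""
    return n_layers * state_dim * bytes_per_val

for tokens in (1_000, 16_000, 128_000):
    kv = kv_cache_bytes(tokens) / 2**20
    ssm = ssm_state_bytes() / 2**20
    print(f"{tokens:>7} tokens | KV cache: {kv:9.1f} MiB | SSM state: {ssm:.3f} MiB")
```

With these made-up dimensions, the KV cache goes from roughly 94 MiB at 1k tokens to about 12 GiB at 128k tokens, while the SSM state stays under 0.05 MiB throughout - which is why fixed memory matters so much on edge hardware.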
DeepSeek's success highlights areas where TENNs can be further enhanced. We can leverage many of the tricks learned from DeepSeek to improve the TENNs LLM even more.
The future of extreme-edge AI is bright, with DeepSeek demonstrating that small companies can compete effectively in this space.
DeepSeek R1's Promise And Peril For News - TVNewsCheck (tvnewscheck.com)
DeepSeek R1, a new AI model from China, challenges OpenAI with advanced capabilities at lower cost. Concerns about censorship and bias raise red flags for newsrooms considering its use.
"equivalent to breaking the 4 minute mile" = Nvidia sliding down the Bannister, while Akida takes the express lift (= elevator in USese) to the top Landying.The good Doctor is an absolute ️
... and I've still got 2 wishes left!
The AI model wars are intensifying, with two new Chinese models introduced in the last several hours.
One by Alibaba, no less...
Yes - he did say that, but there are several Akida 1 SoC products.
I agree, Dio, but I think Sean reiterated in that recent podcast interview that we are (still) an IP company. Someone correct me if my memory is not serving me well (I couldn't be bothered listening to it again).
I wonder where the products are selling; there is no sales revenue either.
Yes - he did say that, but there are several Akida 1 SoC products.
PCIe, M.2, Raspberry Pi, Edge Box, Edge Server, Bascom Hunter 3U VPX SNAP card, ...
I wonder where the chips are coming from?
No, I think it's where he keeps his emergency diaper.
C'mon BrainChip, whilst we can still get it up!
Can we ensure it's not VB?
That's what I thought I knew but wanted to hear. You're a keeper, Tony; buy that man a beer.
Bring on 2025; the instos are coming for your shares. MAKE THEM PAY THROUGH THE NOSE.
imo dyor
This has me excited:
Thank you. This is an extremely informative post. It summarises how the process works and how components feed into one another.
This has me excited:
1. No explosive KV cache: Unlike traditional transformer models that rely on rapidly expanding key-value (KV) caches, TENNs sidesteps this issue with fixed memory requirements, regardless of input length. This fundamental property enables LLMs at the extreme edge.
If I understand this correctly, it has a fixed memory requirement regardless of context length. I notice when doing local inferencing with models that support up to a 128k context window that the smaller the context window I choose, the less memory it consumes (on top of the memory used by the model weights themselves).
Removing the context window limit altogether is the holy grail for LLMs: you could feed in volumes of information, giving the model more and more context for a more accurate response to your query.
To put it into perspective, there are about 0.75 words per token, so a 128k-token context window holds around 96,000 words. Novels average around 60,000 to 100,000 words, with "Harry Potter and the Sorcerer's Stone" at about 77,000 words.
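As a quick sanity check on those figures (the 0.75 words-per-token ratio is the rough assumption above; real tokenizers vary by language and text):

```python
# Back-of-envelope token/word arithmetic using the ratio assumed above.
WORDS_PER_TOKEN = 0.75

context_tokens = 128_000
print(f"128k-token window ~ {context_tokens * WORDS_PER_TOKEN:,.0f} words")  # ~96,000 words

novel_words = 77_000  # roughly "Harry Potter and the Sorcerer's Stone"
print(f"77,000-word novel ~ {novel_words / WORDS_PER_TOKEN:,.0f} tokens")    # ~102,667 tokens, fits in 128k
```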
Probably an oversimplification, but imagine stuffing the model up front with the Harry Potter novel and being able to ask it to summarise, find specific passages, create character biographies, write a new imaginative piece in the same writing style, separate heroes from villains, etc.
Or take an alternate route: fill the context with a programming language, examples and the documentation for some third-party APIs, then ask the model to write code that solves a specific problem using all those tools.
If they can do extreme RAG (Retrieval-Augmented Generation, where current data is retrieved and used to give the model context) in memory-constrained edge cases, that is a boon for TENNs. It would be an additional advantage for accurate, up-to-date inferencing, working alongside Akida's ability to update its model as it learns from new sensor input.
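For anyone unfamiliar with the RAG loop being described, here's a toy sketch. Everything in it is hypothetical - the document store, the crude similarity scoring and the prompt format are all made up for illustration, and it is not BrainChip's (or anyone's) actual API. The idea is simply: retrieve the snippets most relevant to the query, then prepend them to the prompt so a small fixed-memory model sees fresh, current context.

```python
# Toy sketch of a retrieval-augmented generation (RAG) loop.
# All names and formats here are hypothetical, purely for illustration.
from difflib import SequenceMatcher

documents = [
    "Akida supports on-chip learning from new sensor input.",
    "TENNs keeps a fixed-size state regardless of input length.",
    "Transformer KV caches grow linearly with sequence length.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank stored snippets by a crude string-similarity score; keep the top k."""
    return sorted(docs, key=lambda d: SequenceMatcher(None, query, d).ratio(),
                  reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How does TENNs memory scale with input length?", documents))
```

A real system would use embeddings rather than string similarity, but the flow is the same: the retrieved context changes per query, so the model itself can stay small and fixed.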
The only thing that would get me excited now is BrainChip actually selling its product and making some explosive revenue. NVIDIA would never have thought something like this could've happened, but it did. Imagine waking up one morning and finding out that the PRC has managed to develop and sell some type of device with on-chip learning, independent of the cloud. Don't think people won't buy it because it's Chinese, and it's at the right price... also, no security concern, because it doesn't need to be connected to the cloud.
This rather eloquent viewpoint, or a version thereof, should be a news release from BrainChip, touting not only our product but our leadership.