BRN Discussion Ongoing

Frangipani

Top 20

Extremely interesting comment by Alf Kuchenbuch in response to a post by STEMIX.TECH, a newly-founded Romanian “deep-tech and innovation-focused company specializing in AI-centric hardware and software products” that has plans to launch an AI-powered kids’ toy.


They seem to have been in contact for a while, given the reply by STEMIX.TECH?!

Of course, as a start-up founded in 2024, they are not in our target group of companies that would be able to afford signing an IP license with us.
But their vision of “becoming a leading provider of LLM edge devices” sounds as if Akida 2.0 would be a much better fit than any AKD1000/1500 chips? 🤔

So is this possibly a case of a collaboration involving a potential software license of TENNs only?! After all, you don’t necessarily need Akida for running TENNs!



View attachment 66457

View attachment 66458
View attachment 66462
View attachment 66459

Until February, both STEMIX.TECH co-founders whom Alf Kuchenbuch addressed in his comment were with CyberSwarm, another AI start-up from Romania that has previously been discussed here on TSE. One as Project Manager, the other one as CTO. So both of them are already familiar with neuromorphic technology, although CyberSwarm uses an analog architecture rather than a digital one.

6D94A97E-2D62-4BCC-BDAB-C1FA0E354406.jpeg

F2F36E3D-4146-40BE-84C5-C098C6DD8704.jpeg


BBDDE42B-6565-401B-A38F-397594A3FFD5.jpeg

EFEB4AA3-1922-4A0A-807B-56C268894C7F.jpeg



Looks as if they recently met up with Alf Kuchenbuch at Hardware Pioneers Max in London:

0A0D604B-CC7F-4C32-A450-9961325CB7E6.jpeg




Here is some more info on the AI toy they are planning on launching:


A03335D8-792C-469B-BF25-1742E0D4C280.jpeg



So STEMIX.TECH’s AI toy will definitely have a connection to the cloud, as “parents can access a web portal to view all questions, answers, and interactions their child has with the toy.”

Some parents will like this idea, but I personally think it opens up a can of privacy and data protection issues.
 
  • Like
  • Love
  • Fire
Reactions: 15 users

Frangipani

Top 20
  • Haha
  • Love
Reactions: 8 users

Frangipani

Top 20

I suspect CyberSwarm founder and CEO Mihai Raneti will not be amused when he finds out that two of his ex-employees, who founded their own AI start-up earlier this year, appear to be collaborating with BrainChip:

FC8859C8-21A5-4BF6-8D6F-E14E2E458508.jpeg
 
  • Like
  • Thinking
  • Haha
Reactions: 12 users
I've often thought pursuing an application in the toy market would be a winner, with fewer regulatory hurdles (barring an FMF scenario..) than high-end applications like ADAS, FSD etc..

But it seems a little odd, with these previous "high end" product ambitions and the type of companies we've been dealing with, to be eager to engage with a company that is only just starting out?..

Obviously some bad blood with their previous employer too! 😛
 
  • Like
  • Thinking
Reactions: 5 users

Diogenese

Top 20
I've often thought pursuing an application in the toy market would be a winner, with fewer regulatory hurdles (barring an FMF scenario..) than high-end applications like ADAS, FSD etc..

But it seems a little odd, with these previous "high end" product ambitions and the type of companies we've been dealing with, to be eager to engage with a company that is only just starting out?..

Obviously some bad blood with their previous employer too! 😛
Synsense probably beats us on price for the toy market.

Also, toy makers couldn't handle progressing IP to a production chip. They need COTS (commercial off-the-shelf) parts.
 
Last edited:
  • Like
  • Love
Reactions: 9 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I suspect CyberSwarm founder and CEO Mihai Raneti will not be amused when he finds out that two of his ex-employees, who founded their own AI start-up earlier this year, appear to be collaborating with BrainChip:

View attachment 66492


I am a bit worried about posting anything in case you don't like it, Frangipani. But I think we can both agree this is awesome! Great work!
 
  • Like
  • Love
  • Fire
Reactions: 17 users

CHIPS

Regular
I am a bit worried about posting anything in case you don't like it, Frangipani. But I think we can both agree this is awesome! Great work!

Please stop both of you. You both do a fantastic job investigating BrainChip's development for us and should respect each other for it. I can say for myself that I do love both of your work and I am sure all others do too. O.K. I do not like hairy toes 🥴and I am allergic to cats (but only in real life!), but I love your running BRAVO, your ideas, and your findings. 💐

I can hardly imagine how much time you FRANGIPANI spend (for us!) on finding all those connecting dots and remembering all those of the past. I very much appreciate that. We should have said that much earlier already. 💐

Both of you are important to us and this forum and you are best when working together! So please give each other at least respect if you cannot be friends.

Thank you and have a good weekend!
 
  • Like
  • Love
  • Fire
Reactions: 49 users
Synsense probably beats us on price for the toy market.

Also, toy makers couldn't handle progressing IP to a production chip. They need COTS (commercial off-the-shelf) parts.
Wasn't there talk (I think proposed by FactFinder?) of MegaChips being an enabler for companies wanting chips, but not being large enough entities to warrant designing their own?

I thought AKD1500 was possibly along those lines, as MegaChips had input into the backend, or something? (children, please! 🙄..).

The capabilities of AKIDA IP would go well beyond anything SynSense could offer (not to mention our learning features).

I see a toy in this kind of vein being disruptive and a "must have" type item (if done right and marketed well), which is maybe why Alf has shown interest (which would have been springboarded off of the Company first?)..

Kids' toys are big business.
 
  • Like
  • Love
Reactions: 9 users

GazDix

Regular
Screenshot_20240713_004956_com.twitter.android.jpg
 
  • Like
  • Fire
  • Love
Reactions: 33 users
 

FJ-215

Regular
  • Fire
  • Like
Reactions: 2 users

IloveLamp

Top 20
  • Like
  • Fire
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Nice!




EXTRACT ONLY

Arm’s AI Chip Bet​

While most people associate AI data center workloads with GPU computing, there is a growing demand for more energy-efficient CPU-based solutions. This presents an opportunity for CPU players like Arm to cut into a market that is currently dominated by Nvidia’s GPUs. (Conversely, Nvidia’s plans to develop its own ARM-based CPUs will provide additional revenue to Arm, which holds the intellectual property rights to ARM designs.)

Of course, GPUs are expected to remain critical in AI training for the foreseeable future. But solutions like Arm Neoverse have proven that less computationally intense AI inference (i.e. the process of running live data through a trained AI model) can be done much more efficiently with a CPU architecture.

As the AI market evolves, rising adoption could play directly into Arm’s hands as inference supplants training as the primary growth driver.

Just as the AI training boom has fueled a surge in demand for Nvidia’s GPUs in recent years, Arm’s AI-optimized CPUs will become more sought after as data centers look to ramp up their inference capacity.
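As a back-of-envelope illustration of the training-versus-inference gap the extract describes, a common rule of thumb puts a dense model's forward pass at roughly 2N FLOPs per token for an N-parameter model, and training (forward plus backward pass and update) at roughly 6N FLOPs per token. The model and token counts below are arbitrary examples, not figures from the article:

```python
# Rule-of-thumb compute estimates for an N-parameter dense model:
# ~2*N FLOPs per token for inference, ~6*N FLOPs per token for training.
def inference_flops(n_params: float, n_tokens: float) -> float:
    return 2 * n_params * n_tokens

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

n = 7e9        # a hypothetical 7B-parameter model
print(f"serving a 1,000-token request: {inference_flops(n, 1e3):.1e} FLOPs")  # ~1.4e13
print(f"training on 1T tokens:        {training_flops(n, 1e12):.1e} FLOPs")   # ~4.2e22
```

The nine-orders-of-magnitude gap is why inference can live on cheaper, more power-efficient hardware than training.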




PS: Arm is expected to unveil a new data centre strategy at the end of the month that could bring it into more direct competition with Nvidia, so it’ll be worth keeping an eye out for more information on this as it emerges.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 31 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Everyone is talking about efficiency these days, which is quite lucky since that just happens to be AKIDA’s middle name.🥳

And since Masayoshi Son is prepared to spend $9 billion a year on AI, a five-year license with BrainChip would be chicken feed.


SoftBank buys Graphcore, targets further AI investments​

Posted by Harry Baldock | Jul 12, 2024


SoftBank founder Masayoshi Son said earlier this year that AI will be SoftBank’s ‘next big bet’ when it comes to technology​

This week, Japanese conglomerate SoftBank has announced the acquisition of struggling UK-based AI chipmaker Graphcore.
Official financial details have not been disclosed, but anonymous sources speaking to the Financial Times valued the deal at $600 million.
Graphcore creates specialised AI chips, known as intelligence processing units, which can be used to train and operate AI large language models.
This is the same type of chip technology that has seen rival chip company Nvidia soar to around $3 trillion earlier this year.
Unlike Nvidia, however, Graphcore has struggled significantly to commercialise its technology. Valued at $2.8 billion back in 2020, Graphcore has since failed to sell its products at scale, noting “lower hardware sales to key strategic customers”. In 2022, the company recorded just $2.7 million in sales, 46% lower than in 2021, and booked a pre-tax loss for the year of $205 million.
As a result, 2023 saw Graphcore undertake cost-cutting measures, shedding 20% of its workforce and closing its operations in Norway, Japan, and South Korea. At the time, the company said there was ‘material uncertainty’ over its survival and called for fresh funding.
Now, as part of SoftBank, Graphcore will reportedly have all the resources it needs to return to full force.
“Demand for AI compute is vast and continues to grow,” said Graphcore’s co-founder and chief executive, Nigel Toon. “There remains much to do to improve efficiency, resilience, and computational power to unlock the full potential of AI. In SoftBank, we have a partner that can enable the Graphcore team to redefine the landscape for AI technology.”
SoftBank itself has been stepping up its focus on AI for over a year now, with Son saying earlier this year that “realising ASI (Artificial Superintelligence)” was “his only focus”. He has also said the company is ready to invest roughly $9 billion a year in AI and is prepared for large-scale dealmaking in the future.


 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 27 users

Wags

Regular
Please stop both of you. You both do a fantastic job investigating BrainChip's development for us and should respect each other for it. I can say for myself that I do love both of your work and I am sure all others do too. O.K. I do not like hairy toes 🥴and I am allergic to cats (but only in real life!), but I love your running BRAVO, your ideas, and your findings. 💐

I can hardly imagine how much time you FRANGIPANI spend (for us!) on finding all those connecting dots and remembering all those of the past. I very much appreciate that. We should have said that much earlier already. 💐

Both of you are important to us and this forum and you are best when working together! So please give each other at least respect if you cannot be friends.

Thank you and have a good weekend!
Well said, @CHIPS, couldn't agree more. Cheers to @Bravo and @Frangipani.

But this also applies to anyone who puts in the time and generously shares the outcomes of their efforts, for us all to benefit from. Thank you.

I'm OK with alternative views or theories, allowing discussion, investigation and/or debunking.

I don't see much point in the relentless negative bagging of BRN that some posters here provide, or indeed the character attacks; it seems like a waste of time to me.

Personally, I'm feeling pretty anxious with BRN at the moment. I'm not technical enough to appreciate the full benefits of our tech, and rely on those skilled enough here. Seems any company with the slightest AI enhancement tool is kicking goals or getting snapped up one way or the other. Some days we struggle to hold 20c a share. WTF??

I guess the Edge Boxes will show $$ this qtly report, but I hope to see some more upturn in revenue from other, possibly unknown sources.

I know I have said this before, so apologies in advance for being boring, but I'm a bit of a contractual / literal sort of guy.
BRN is openly and well documented as an IP-focused company, following the ARM model.
BRN has stated publicly in writing: "We’re embedding our IP in everything, everywhere." (It's on our website for f%cks sake)
Unless we are giving our IP away for free, is it not reasonable for an investor to assume this statement would suggest revenue $?

When I asked management about this publicly at the AGM, they blew it off as just marketing words!?

I understand this may be hidden or buried amongst partnerships and/or enablers, costing lots of time to bear fruit. I'm just hoping that fruit is soon.
This is without discussing the "imminent" or "explosion of sales" comments.

I'm the upbeat BRN supporter, this just shows my mood at the mo.
Anyways, on with the weekend, Rant over. Apologies all.
 
  • Like
  • Love
Reactions: 36 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Wow
  • Like
  • Fire
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

An interesting take on what has hampered our progress. It says here that companies such as ours were "founded with the idea that there would be an explosion in the use of specialized chips to train AI systems and to perform inference using trained models." But instead of buying our systems, the hyperscalers just went out and bought oodles of NVIDIA GPUs and started developing their own accelerators.

Anyway, I'm still optimistic that an Arm, Graphcore and BrainChip collaboration would be too good a proposition for SoftBank to ignore.

CAN SOFTBANK BE AN AI SUPERCOMPUTER PLAYER? WILL ARM LEND A HAND?​

July 12, 2024 Timothy Prickett Morgan

There have been rumors for quite some time that either Arm Ltd or parent company and Japanese conglomerate SoftBank would buy British AI chip and system upstart – it is no longer a startup if it is eight years old – Graphcore. And yesterday the scuttlebutt was confirmed as Masayoshi Son got out the corporate checkbook for SoftBank Group and cut a check to acquire the company for less money than its investors put in.
It has not been an easy time for the various first round of AI chip startups that were founded in the wake of the initial success that Nvidia found with its GPU compute engines for training neural networks. Or, at least, it has not been as easy as it could have been to get customers and amass both funding and revenues on the road to profits.
Most of these startups, in one form or another, emulate the clean slate approach that Google took with its AI-specific Tensor Processing Units, which were first deployed in early 2015 and which are now in their sixth generation with the “Trillium” family of TPUs, the first of which were revealed a month ago. The idea is to strip all of the graphics processing that comes from the GPU heritage and the high-precision floating point math required by HPC simulation and modeling out of the design and, in many cases, build in fast and wide networking pipes to interconnect these resulting matrix math engines into clusters to take on very large jobs.

BrainChip, Cerebras Systems, Graphcore, Groq, Habana Labs (now part of Intel), Nervana Systems (supplanted by Habana at Intel), and SambaNova Systems all were founded with the idea that there would be an explosion in the use of specialized chips to train AI systems and to perform inference using trained models. But the ideal customers to buy these devices – or to acquire these companies – were the hyperscalers and cloud builders, and instead of buying any of these compute engines or systems based on them, they decided to use a different two-pronged approach. They bought Nvidia GPUs (and now sometimes AMD GPUs) for the masses (which they can rent at incredible premiums even after buying them at incredible premiums) and they started creating their own AI accelerators so they could have a second source, backup architecture, cheaper option.
Even the shortages of Nvidia GPUs, which have been propping up prices for the past three years, have not really helped the cause of the AI chip and system upstarts all that much. Which is odd, and is a testament to the fact that people have learned to be wary of and leery of software stacks that are not fully there yet. So beware Tenstorrent and Etched (both of whom we just talked to and will write about shortly) and anyone else who thinks they have a better matrix math engine and a magic compiler.
It is not just a crowded market, it is a very expensive one to start up in and to be an upstart within. The money is just not there with the hyperscalers and cloud builders doing their own thing and enterprises being very risk averse when it comes to AI infrastructure.

Which is why Graphcore was seeking a buyer instead of another round of investment, which presumably is hard to come by. With former Prime Minister Rishi Sunak pledging £1.5 billion for AI supercomputing in the United Kingdom, with the first machine funded being the Isambard-AI cluster at the University of Bristol, there was always a chance that Graphcore would get a big chunk of money to build its Good AI supercomputer, a hypothetical and hopeful machine that the company said back in March 2022 it would build with 3D wafer stacking techniques on its “Bow” series Intelligence Processing Units. But for whatever reason, despite Graphcore being a darling of the British tech industry, the UK government did not fund the $120 million required to build the proposed Good machine, which would have 8,192 of the Bow IPUs lashed together to deliver 10 exaflops of compute at 16-bit precision and 4 PB of aggregate memory with over 10 PB/sec of aggregate memory bandwidth across those Bow IPUs.
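For scale, the aggregate figures quoted for the proposed Good machine imply the following per-IPU numbers. This is simple division of the article's totals, purely illustrative, not vendor specifications:

```python
# Per-IPU figures implied by the proposed "Good" machine's aggregate specs:
# 8,192 Bow IPUs, 10 exaflops at FP16, 4 PB memory, 10 PB/s memory bandwidth.
n_ipus = 8192
total_flops = 10e18      # 10 exaflops at 16-bit precision
total_mem_pb = 4         # petabytes, aggregate
total_bw_pb_s = 10       # petabytes/second, aggregate

flops_per_ipu = total_flops / n_ipus            # FLOPS per IPU
mem_per_ipu_gb = total_mem_pb * 1e6 / n_ipus    # GB per IPU (incl. streaming memory)
bw_per_ipu_tb_s = total_bw_pb_s * 1e3 / n_ipus  # TB/s per IPU

print(f"{flops_per_ipu/1e15:.2f} PFLOPS, {mem_per_ipu_gb:.0f} GB, "
      f"{bw_per_ipu_tb_s:.2f} TB/s per IPU")
# -> 1.22 PFLOPS, 488 GB, 1.22 TB/s per IPU
```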
We would have loved to see such a machine built, and so would have Graphcore, we presume. But even that would not have been enough to save Graphcore from its current fate.
Governments can fund one-off supercomputers and often do. The “K” and “Fugaku” supercomputers built by Fujitsu for RIKEN Lab in Japan are perfect examples. K and Fugaku are the most efficient supercomputers for true HPC workloads ever created on the planet – K actually was more efficient than the more recent Fugaku – but both are very expensive compared to alternatives that are nonetheless efficient. And they do not have software stacks that translate across industries as the Nvidia CUDA platform does after nearly two decades of immense work. K and Fugaku, despite their excellences, did not really cultivate a widening and deepening of indigenous compute on the isle of Japan, despite the very best efforts of Fujitsu with its Sparc64fx and A64FX processors and Tofu mesh/torus interconnects. Which is why Fujitsu is working on a more cloud-friendly and less HPC-specific fork of its Arm server chips called “Monaka,” which we detailed here back in March 2023.
Japan ponied up $1.2 billion to build the 10 petaflops K machine, which became operational in 2011, and $910 million for the 513.9 petaflops Fugaku machine, which became operational in 2021. If history is any guide, Japan will shell out somewhere around $1 billion for a “Fugaku-Next” machine, which will become operational in 2031. Heaven only knows how many exaflops it will have at what precisions.
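Those two data points imply a striking drop in the cost of flagship compute. A rough calculation from the figures above (ignoring inflation and precision differences between the systems):

```python
# Cost per sustained petaflop implied by the quoted figures for Japan's
# flagship supercomputers. Back-of-envelope only.
systems = {
    "K (2011)":      (1.2e9, 10.0),    # (cost in USD, petaflops)
    "Fugaku (2021)": (0.91e9, 513.9),
}
cost_per_pf = {name: cost / pf for name, (cost, pf) in systems.items()}
for name, c in cost_per_pf.items():
    print(f"{name}: ${c/1e6:.1f}M per petaflops")
# K works out to roughly $120M per petaflops and Fugaku to under $2M --
# around a 68x improvement in cost-efficiency over the decade.
```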
For the United Kingdom, the University of Edinburgh is where the flagship supercomputer goes, not down the road from where Graphcore is located in Bristol. Of the £900 million ($1.12 billion) in funding from the British government to build an exascale supercomputer in the United Kingdom by 2026, £225 million ($281 million) of that was allocated to the Isambard-AI machine and most of the rest is going to be used to build a successor to the Archer2 system at the Edinburgh Parallel Computing Centre (EPCC) lab.
Graphcore was never going to get a piece of that action because it builds AI-only machinery, not hybrid HPC/AI systems, no matter how much the British government and the British people love to have an indigenous supplier. If it wanted full government support, Graphcore needed to create a real HPC/AI machine, something that could compete head to head with CPU-only and hybrid CPU-GPU machines. Governments are interested in weather forecasting, climate modeling, and nuclear weapons. This is why they build big supercomputers.
Because of the lack of interest by hyperscalers and cloud builders and the risk aversion of enterprises, Graphcore found itself in a very tight spot financially. The company has raised around $682 million in six rounds of funding between 2016, when it was founded, and 2021, in the belly of the coronavirus pandemic when transformers and large language models were coiling to spring on the world. That is not as much money as it seems given the enormous hardware and software development to create an exaflops AI system.
The last year for which we have financials for Graphcore is 2020, which according to a report in the Financial Times saw the company only generate $2.7 million in revenues but post $205 million in pre-tax losses. Last fall, Graphcore said it would need to raise new capital this year to continue operating, and presumably the revenue picture and loss picture did not improve by much. It is not clear how much money was left in the Graphcore kitty, but what we hear is that SoftBank paid a little more than $600 million to acquire the company. Assuming that all the money is gone, then Microsoft, Molten Ventures, Atomico, Baillie Gifford, and OpenAI’s co-founder Ilya Sutskever have lost money on their investment, which is a harsh reality given that only four years ago Graphcore had a valuation of $2.5 billion.
None of that harsh economics means that a Bow IPU, or a piece of one, could not make an excellent accelerator for an Arm-based processor. SoftBank’s Son, who is just a little too enamored of the cacophonous void of the singularity for our tastes (we like the chatter and laughter of individuals and the willing collaboration and independence of people), has made his aspirations in the AI sector clear.
But so what?
All of the upstarts mentioned above had and have aspirations in AI, and so do the hyperscalers and cloud builders who are actually trying to make a business out of this. And they all have some of the smartest people on Earth working on it – and still Nvidia owns this space: compute engine, network, and software stack, which is analogous to lock, stock, and barrel.
Son spent $32 billion to acquire Arm Ltd back in 2016. If Son is so smart, he should have not even bothered. At the time, Nvidia had a market capitalization of $57.5 billion, which was up by a factor of 3.2X compared to 2015 as the acceleration waves were taking off in both HPC and AI. At $32 billion, Son could have acquired a 55.7 percent stake in Nvidia. With Nvidia’s market cap standing at $3,210 billion as we go to press, that hypothetical massive investment by Son in Nvidia would be worth just shy of $1,786 billion today.
To put that into perspective, the gross domestic product of the entire country of Japan was $4,210 billion in 2023.
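The article's counterfactual checks out arithmetically. Marking a hypothetical $32 billion stake at Nvidia's 2016 market cap to the market cap quoted at press time:

```python
# Verifying the article's hypothetical: $32B invested at Nvidia's 2016
# market cap of $57.5B, valued at the $3,210B market cap quoted above.
investment = 32e9
mcap_2016 = 57.5e9
mcap_now = 3210e9

stake = investment / mcap_2016   # fraction of the company acquired
value_now = stake * mcap_now     # that stake marked to today's market cap

print(f"stake: {stake:.1%}")                  # -> stake: 55.7%
print(f"value now: ${value_now/1e9:,.0f}B")   # -> value now: $1,786B
```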

We had a burr under our saddle earlier this year, talking about how Arm should have had a GPU or at least some kind of datacenter-class accelerator in its portfolio to compete against Nvidia. We took a certain amount of grief about this, but we stand by our statement that Arm left a lot of money on the Neoverse table by not having a big, fat XPU, and here we are with Son having Arm in one hand and Graphcore in the other. But with the hyperscalers and cloud builders already building their own accelerators, the time might have passed when Arm can sell IP blocks for accelerators.
But maybe not.

There may be a way to create a more general purpose Graphcore architecture and take on Fujitsu for Fugaku-Next, too. Or to collaborate with Fujitsu, which would be more consistent with how the Project Keisuko effort to make the K supercomputer started out in 2006 with a collaboration between Fujitsu, NEC, and Hitachi.
There is only one sure way to predict the future, and that is to live it. We shall see.

 
Last edited:
  • Like
  • Thinking
  • Sad
Reactions: 16 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Love
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Hitachi Vantara Federal’s Pragyansmita Nayak Discusses Edge Computing Challenges & Solutions​

by Jerry Petersen, July 12, 2024, 11:55 am

Pragyansmita Nayak

Pragyansmita Nayak, chief data scientist at Hitachi Vantara Federal, believes that edge computing will become a necessary component of the technological infrastructure by 2030.

Nayak predicts in an opinion column published on the Hitachi Vantara Federal website that the next decade will see a proliferation of various technologies, such as autonomous vehicles, smart cities and Internet-of-Things devices, which will require speedy and efficient local data processing that only edge computing can deliver.


Alongside the benefits, edge computing also comes with a number of challenges, including a dramatic increase in the volume of data generated by devices; security and privacy concerns; interoperability problems due to the use of proprietary technologies; and the need for greater energy efficiency.

Nayak nevertheless considers edge computing “a critical area of focus for future technological development” and so looks to other innovations to address inherent challenges.

These innovations include artificial intelligence, which could enable smarter data processing; quantum computing, which could bolster data encryption and increase processing speeds; and advances in networking, which could deliver the low-latency connectivity that edge computing requires.


Screenshot 2024-07-13 at 11.48.46 am.png


 
  • Like
  • Love
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Love
  • Thinking
Reactions: 11 users