BRN Discussion Ongoing

A8kr1

Member
The Merc "announcement" wasn't even an announcement by brainchip. It came out in an interview with Mercedes about their new concept car.
How conveniently timed
 

7für7

Regular
  • Like
Reactions: 2 users

Deadpool

hyper-efficient Ai
Hi Esq
You beat me to it, but this is the link to the Frequently Asked Questions where this information can be found on the Brainchip website.


sb182 every ASX-listed company has to maintain a register of shareholders. A company can do it itself or, as most do, appoint an agent; in this case Brainchip has appointed Boardroom. Every company has a different approach to the release of information, however under the Rules a registered shareholder has the right to personally attend the share registry, in this case Boardroom, and inspect the register without payment of any fee. Over my time with Brainchip as a shareholder the registry by all accounts has been a little flexible, if what others have stated is correct. I have never had the need to contact them, so cannot speak from personal experience.

On the LDA Capital extension, we know it costs about $20 million per annum to keep the lights on. We know that Brainchip has about this amount, so about four quarters of cash runway.

We know that Brainchip has indicated the desire to produce the AKIDA 2.0 as a reference chip. We know that Brainchip also is hoping to complete the IP design of AKIDA 3.0.

So I simply ask the rhetorical question: if Brainchip only has enough cash to keep the lights on for the next four quarters, i.e. 2024, where will the funds come from to turn AKIDA 2.0 into a reference chip, and if the AKIDA 3.0 IP is finalised, where will the funds come from to turn it into a reference chip?

Producing AKD1000 via Socionext and TSMC cost the best part of $5 million. I am not sure we have sufficient information to work out what AKD1500 cost at GlobalFoundries. AKIDA 2.0 and AKIDA 3.0 are again more complex chips to produce. If we look at the amount of capital secured via LDA Capital, by coincidence the minimum draw-down amount of $12 million would likely be sufficient to produce at least AKIDA 2.0, and may also cover AKIDA 3.0.
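
A back-of-the-envelope sketch in Python of the arithmetic above (the figures are the rough ones quoted in this post, not company guidance):

```python
# Back-of-envelope runway maths, using the rough figures quoted above.
annual_burn = 20_000_000       # ~$20M per annum to "keep the lights on"
cash_on_hand = 20_000_000      # Brainchip holds roughly this amount
akd1000_cost = 5_000_000       # the best part of $5M via Socionext/TSMC
lda_min_drawdown = 12_000_000  # minimum draw-down under the LDA facility

quarters_of_runway = cash_on_hand / (annual_burn / 4)
print(f"Runway: ~{quarters_of_runway:.0f} quarters")  # -> ~4 quarters

# If an AKIDA 2.0 reference chip costs at least what AKD1000 did,
# the minimum LDA draw-down covers one such tape-out with headroom
# that may or may not stretch to AKIDA 3.0.
headroom = lda_min_drawdown - akd1000_cost
print(f"Headroom after one AKD1000-priced run: ${headroom / 1e6:.0f}M")  # -> $7M
```

None of this changes the conclusion; it just makes the quarters-and-drawdown arithmetic explicit.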

I will leave it to others to say whether the production of AKIDA 2.0, and possibly AKIDA 3.0, as reference chips to demonstrate LLMs to customers worldwide is a good or bad thing from a commercialisation perspective, and a sensible way to accelerate that process.

My opinion only DYOR
FF

AKIDA BALLISTA
I think this goes without saying but would just like to add that the company has to show that they have readily available funding going forward when they are potentially inking $M contracts with clients. I don't think anybody is going to sign on the dotted line if they think BRN may have a problem with finance going forward.
 
  • Like
  • Fire
  • Thinking
Reactions: 45 users

IloveLamp

Top 20
I think this goes without saying but would just like to add that the company has to show that they have readily available funding going forward when they are potentially inking $M contracts with clients. I don't think anybody is going to sign on the dotted line if they think BRN may have a problem with finance going forward.
Great point
 
  • Like
Reactions: 14 users

7für7

Regular
I think this goes without saying but would just like to add that the company has to show that they have readily available funding going forward when they are potentially inking $M contracts with clients. I don't think anybody is going to sign on the dotted line if they think BRN may have a problem with finance going forward.
Why should BRN have problems with finance if customers sign contracts with BrainChip? If customers sign contracts, they do it because they see benefits, I guess. Our western capitalistic economy runs on debt, otherwise it would collapse. So if everyone worried about the financial situation of other companies, there would be no business at all… just my opinion
 
  • Like
Reactions: 5 users
True. I just wanted to make a point:

If they desired to produce AKD2.0 back then, why not make the cap call at that time? The CEO has watched the SP decline for so, so long, only to amend now. It should be embarrassing for him. I hope it is...
Sorry I should have realised you were simply setting a trap and you did know the source of my statements.

I do hope you get your wish that the Brainchip share price collapses and your desire to embarrass Sean Hehir and prove his incompetence is realised.

Who gives a ...... about retail shareholders anyway as long as your personal grievance is satisfied???

Hang on are you not a real shareholder looking to make a return on your investment???

Then again, if you were, you would well know, just as you did when setting your trap, that a design win that was expected to see an IP licence sold fell over during the second half of 2023 when the company concerned closed down that arm of its operations, laying off thousands of employees.

You would also know that a number of the early access customers for AKIDA 2.0 out of the blue suspended their further testing until the finalised IP was released. Yet knowing these things, you completely ignore that these events could have had an effect on Brainchip's planning, causing a rethink around the timing of when revenue would occur.

But no, in your balanced way, you have laid a trap to try and make a point purely to satisfy some desire to embarrass the CEO.

It is indeed an interesting approach to investment:

1. Ignore the known facts.
2. Draw adverse conclusions without allowing for any alternate view that these facts might support.
3. Publish these adverse conclusions on a shareholder forum.
4. Then engage in setting a trap. For what reason? Because you want the CEO to feel publicly embarrassed.
5. Celebrate your success in misleading those who read your posts.
6. In the end not embarrass the CEO.
7. Feel personally embarrassed by being caught out in the attempt.

I assume that what I have just responded to was another one of your traps. I like traps that work, but given the above, this one appears not to have been triggered.

My opinion only DYOR
Fact Finder
 
  • Like
  • Love
  • Haha
Reactions: 44 users

Quatrojos

Regular
We know that Brainchip has indicated the desire to produce the AKIDA 2.0 as a reference chip.

From what I've seen, there's nothing about producing AKD2.0 as a reference chip in last year's AGM. How do you know this?
 
  • Fire
Reactions: 1 users

Quatrojos

Regular
Sorry I should have realised you were simply setting a trap and you did know the source of my statements. ...
Ad Hominem
 
  • Like
Reactions: 1 users
Great point
Excellent point. So I'll sit back and wait for future positive announcements to roll through '24
 
  • Like
Reactions: 1 users

IloveLamp

Top 20
Excellent point. So I'll sit back and wait for future positive announcements to roll through '24

Will you though............?

 
Last edited:
  • Haha
  • Like
Reactions: 13 users
Not initially happy, that we have to dance with the Devil again, but it's good to have a bigger cash buffer.

"As we enter 2024 with the momentum to grow the business on multiple vectors with our 2nd
generation Akida TM products, the Edge Box initiative and strategic partnerships, we need the ability to rapidly invest for growth and build on our lead” said Sean Hehir, CEO, BrainChip. “While we will continue to be judicious with our use of cash, having access to funding from our well-respected partners at LDA Capital, strengthens our business continuity position against well-capitalized and more established competitors in a highly competitive market.”


Sean's strategy is for aggressive growth and penetration of the Edge A.I. market, and we simply cannot do that with the unpredictability of current incoming revenue.

With tapeout costs and production of AKD 2000 reference chips likely to be around 7 million dollars or more (my guess), the Company can't rely on piecemeal incoming funds to pursue its strategy of Edge A.I. domination and keep everything running as well.

It remains to be seen if BrainChip can secure any AKIDA 2 IP deals without a reference chip.
I believe, on the strength of its now multi-year relationships with other companies and their trust in the abilities of the BrainChip team (with its accomplishments in AKIDA technology thus far), that it's possible..

But neither I nor the Company can hold our breath on that one..


Any additional income will also be strongly applied to growth at this stage, and if progress is made and an increasing share price can be sustained, the LDA arrangement will provide much more than the minimum 12 million dollars.

All the better to aggressively grow the Company.

Shareholders need to understand that while we are technologically superior, we are competing against companies that absolutely dwarf us with their financial and market muscle.

BrainChip is playing the "We will be a future Big Market player" card and we need a bigger bankroll to back that up.

Of course there is an element of risk in such an aggressive growth strategy, but Sean knows he has "very good cards" (as does everyone else at the table)..

Overall, I'm personally pleased at the financial security this gives the Company going forward.

Remember, BrainChip plans on being around, for a long, long, long time and in a Big way.

 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 46 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Here's an article written by Orr Danon, CEO of Hailo, on AI processors for edge devices. He mentions Mercedes having introduced ChatGPT and discusses the benefits of generative AI being able to operate without any internet connection, and how that will improve latency, privacy and cost-efficiency.

I find the name-dropping of Mercedes very interesting considering Hailo doesn't have a known connection to Mercedes, but we do. 🥳




The untapped potential of generative AI with emerging edge applications

By Orr Danon | Jan 2, 2024 11:04am

Processing data at the edge of a network offers a promising path to generative AI’s full potential, according to the author. (Getty Images)
The internet has changed every aspect of our lives from communication, shopping, and working. Now, for reasons of latency, privacy, and cost-efficiency, the “internet of things” has been born as the internet has expanded to the network edge.
Now, with artificial intelligence, everything on the internet is easier, more personalized, and more intelligent. However, AI is currently confined to the cloud due to the large servers and high compute capacity it needs. As a result, companies like Hailo are driven by latency, privacy, and cost efficiency to develop technologies that enable AI on the edge.



Undoubtedly, the next big thing is generative AI. Generative AI presents enormous potential across industries. It can be used to streamline work and increase the efficiency of various creators — lawyers, content writers, graphic designers, musicians, and more. It can help discover new therapeutic drugs or aid in medical procedures. Generative AI can improve industrial automation, develop new software code, and enhance transportation security through the automated synthesis of video, audio, imagery, and more.



However, generative AI as it exists today is limited by the technology that enables it. That's because generative AI happens in the cloud — large data centers of costly, energy-consuming computer processors far removed from actual users. When someone issues a prompt to a generative AI tool like ChatGPT or some new AI-based videoconferencing solution, the request is transmitted via the internet to the cloud, where it's processed by servers before the results are returned over the network. Data centers are major energy consumers, and as AI becomes more popular, global energy consumption will rapidly increase. This is a growing concern for companies trying to balance the need to offer innovative solutions with the requirement to reduce operating costs and environmental impact.
As companies develop new applications for generative AI and deploy them on different types of devices — video cameras and security systems, industrial and personal robots, laptops and even cars — the cloud is a bottleneck in terms of bandwidth, cost, safety, and connectivity.
And for applications like driver assist, personal computer software, videoconferencing and security, constantly moving data over a network can be a privacy risk.



The solution is to enable these devices to process generative AI at the edge. In fact, edge-based generative AI stands to benefit many emerging applications.
Generative AI on the rise
Consider that in June, Mercedes-Benz said it would introduce ChatGPT to its cars. In a ChatGPT-enhanced Mercedes, for example, a driver could ask the car — hands free — for a dinner recipe based on ingredients they already have at home. That is, if the car is connected to the internet. In a parking garage or remote location, all bets are off.
In the last couple of years, videoconferencing has become second nature to most of us. Already, software companies are integrating forms of AI into videoconferencing solutions. Maybe it’s to optimize audio and video quality on the fly, or to “place” people in the same virtual space. Now, generative AI-powered videoconferences can automatically create meeting minutes or pull in relevant information from company sources in real-time as different topics are discussed.

However, if a smart car, videoconferencing system, or any other edge device can’t reach back to the cloud, then the generative AI experience can’t happen. But what if they didn’t have to? It sounds like a daunting task considering the enormous processing of cloud AI, but it is now becoming possible.

Generative AI at the edge
Already, there are generative AI tools, for example, that can automatically create rich, engaging PowerPoint presentations. But the user needs the system to work from anywhere, even without an internet connection.
Similarly, we’re already seeing a new class of generative AI-based “co-pilot” assistants that will fundamentally change how we interact with our computing devices by automating many routine tasks, like creating reports or visualizing data. Imagine flipping open a laptop, the laptop recognizing you through its camera, then automatically generating a course of action for the day,week or month based on your most used tools, like Outlook, Teams, Slack, Trello, etc. But to maintain data privacy and a good user experience, you must have the option of running generative AI locally.
In addition to meeting the challenges of unreliable connections and data privacy, edge AI can help reduce bandwidth demands and enhance application performance. For instance, if a generative AI application is creating data-rich content, like a virtual conference space, via the cloud, the process could lag depending on available (and costly) bandwidth. And certain types of generative AI applications, like security, robotics, or healthcare, require high-performance, low-latency responses that cloud connections can’t handle.
In video security, the ability to re-identify people as they move among many cameras — some placed where networks can't reach — requires data models and AI processing in the actual cameras. In this case, generative AI can be applied to automated descriptions of what the cameras see, through simple queries like, "Find the 8-year-old child with the red T-shirt and baseball cap."
That’s generative AI at the edge.
Developments in edge AI
Through the adoption of a new class of AI processors and the development of leaner, more efficient, though no-less-powerful generative AI data models, edge devices can be designed to operate intelligently where cloud connectivity is impossible or undesirable.
Of course, cloud processing will remain a critical component of generative AI. For example, training AI models will remain in the cloud. But the act of applying user inputs to those models, called inferencing, can — and in many cases should — happen at the edge.
The industry is already developing leaner, smaller, more efficient AI models that can be loaded onto edge devices. Companies like Hailo manufacture AI processors purpose-designed to perform neural network processing. Such neural-network processors not only handle AI models incredibly rapidly, but they also do so with less power, making them energy efficient and apt to a variety of edge devices, from smartphones to cameras.
Utilizing generative AI at the edge enables effective load-balancing of growing workloads, allows applications to scale more stably, relieves cloud data centers of costly processing, and helps reduce environmental impact. Generative AI is on the brink of revolutionizing computing once more. In the future, your laptop’s LLM may auto-update the same way your OS does today — and function in much the same way. However, in order to get there, generative AI processing will need to be enabled at the network’s edge. The outcome promises to be greater performance, energy efficiency, security and privacy. All of which leads to AI applications that reshape the world just as significantly as generative AI itself.
Orr Danon is CEO of Hailo, a maker of AI processors for edge devices used in automotive, security, industrial automation and retail applications, among others.
 
  • Like
  • Fire
  • Love
Reactions: 17 users
Not initially happy, that we have to dance with the Devil again, but it's good to have a bigger cash buffer. ...

Has it been decided if we go 28 or 12 nm (Akida 2.0)? I think 7 million for tape-out would definitely be too high for 28 nm. Even 12 nm is less than 7 (USD, that is). It doesn't change much on your points. I was just curious about this aspect.
 
  • Thinking
Reactions: 1 users

buena suerte :-)

BOB Bank of Brainchip
I think this goes without saying but would just like to add that the company has to show that they have readily available funding going forward when they are potentially inking $M contracts with clients. I don't think anybody is going to sign on the dotted line if they think BRN may have a problem with finance going forward.
Agreed.. We don't want to go looking for finance deals at the 11th hour.. Get the $$$$$$ in place and ready to go 'If needed'!!? I'm taking this as a positive! :)

Cheers
 
  • Like
  • Love
  • Fire
Reactions: 18 users

Bravo

If ARM was an arm, BRN would be its biceps💪!



3 trends for 2024: AI drives more edge intelligence, RISC-V, & chiplets

January 2, 2024 | Nitin Dahad
With the rise of AI, once “simple” devices are becoming increasingly intelligent, leading to more computing devices than ever before.



With CES 2024 set to open its doors in Las Vegas just a week from now, it’s clear that this year is all about evolving consumer electronics products that rely on ever more connected, embedded edge intelligence.
This is nothing new, and we’ve been talking about it for a few years, but after the industry ‘hype’ of 2023 around generative AI, consumers will begin to understand more of what it means for them in their everyday lives.
Almost every industry vertical will see more connected embedded devices with even more smartness or intelligence at the edge.


In his keynote at CES 2024 on Tuesday 9th January 2024, Pat Gelsinger, CEO of Intel, will explore how silicon, amplified by innovative and open software, is enabling AI capabilities for consumers and businesses alike. And on the following day, Qualcomm president and CEO Cristiano Amon will highlight how more devices will be seamlessly integrated into our lives; he'll explain that AI running pervasively and continually on devices will transform user experiences, making them more natural, intuitive, relevant, and personal, with the need for increased immediacy, privacy, and security.

That means more machine learning (ML) in more and more constrained devices, in the sensors, whether it is for the internet of things (IoT), for industrial automation, for autonomous mobility and software-defined vehicles (SDVs), or for health and wearable devices.
In the embedded world here are what I see as trends enabling some of this:


1. Edge intelligence gets better

If it’s any indication of the direction of travel, then the research paper just released by Apple on deploying large language models (LLMs) on resource constrained devices with limited memory is certainly a pointer. It’s paper, entitled “LLM in a flash: Efficient Large Language Model Inference with Limited Memory”, tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters on flash memory but bringing them on demand to DRAM.
In the paper, the Apple team said their method involves constructing an inference cost model that harmonizes with the flash memory behavior, enabling optimization in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. To do this, they have introduced two techniques – one called "windowing" to strategically reduce data transfer by reusing previously activated neurons, and the second being "row-column bundling" to increase the size of data chunks read from flash memory.
The paper states, "These methods collectively enable running models up to twice the size of the available DRAM, with a 4-5x and 20-25x increase in inference speed compared to naive loading approaches in CPU and GPU, respectively. Our integration of sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for effective inference of LLMs on devices with limited memory."
What this points to is that the general direction of travel in the industry is the deployment of more machine learning and inference at the edge.
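
Purely as an illustration of those two ideas, here is a minimal toy sketch (not Apple's actual implementation; the flash-reading helper, its layout assumptions, and all parameters are invented for illustration):

```python
import numpy as np

def load_rows_from_flash(weight_file, row_ids, cols):
    """Hypothetical helper standing in for real flash I/O: read only the
    requested rows of a row-major weight matrix stored on flash.
    "Row-column bundling" amounts to laying the data out so each read is
    one large contiguous chunk rather than many small scattered ones."""
    rows = np.memmap(weight_file, dtype=np.float32, mode="r").reshape(-1, cols)
    return np.ascontiguousarray(rows[row_ids])

class WindowedWeightCache:
    """Toy "windowing": keep the rows activated for the last few tokens
    resident in DRAM and fetch only newly activated rows from flash."""

    def __init__(self, weight_file, cols, window=5):
        self.file, self.cols, self.window = weight_file, cols, window
        self.cache = {}    # row_id -> row currently held in DRAM
        self.history = []  # recent active row-id sets, newest last

    def fetch(self, active_rows):
        # Transfer only the rows the sliding window does not already hold.
        missing = [r for r in active_rows if r not in self.cache]
        if missing:
            fetched = load_rows_from_flash(self.file, missing, self.cols)
            for r, row in zip(missing, fetched):
                self.cache[r] = row
        # Slide the window and evict rows no recent token still uses.
        self.history.append(set(active_rows))
        if len(self.history) > self.window:
            self.history = self.history[-self.window:]
            keep = set().union(*self.history)
            self.cache = {r: v for r, v in self.cache.items() if r in keep}
        return np.stack([self.cache[r] for r in active_rows])
```

The point of the sketch is only the access pattern: rows already resident in DRAM are never re-read, and each flash read is one contiguous batch.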

2. RISC-V adoption becomes more visible

In August 2023, several companies had announced the formation of a new unnamed company that would help accelerate commercialization of RISC-V hardware globally. Well in the last week, that company was named as Quintauris, with Alexander Kocher appointed as CEO. Headquartered in Munich, Germany, the company’s investors are Bosch, Infineon, Nordic Semiconductor, NXP Semiconductors, and Qualcomm Technologies, Inc. While the web site is minimal at the moment, it states:
“The company will be a single source to enable compatible RISC-V based products, provide reference architectures, and help establish solutions widely used in the industry. Initial application focus will be automotive, but with an eventual expansion to include mobile and IoT.”
It would be interesting to see how Quintauris sits alongside RISC-V International, its neighbor headquartered in Zurich, Switzerland. The industry association no doubt looks after maintaining the instruction set architecture’s (ISA’s) specifications, while Quintauris is likely to be a resource for developers needing ready-made reference boards and systems for their own development.
When many commentators talk about RISC-V, the point that is often missed is that developers are designing systems based on heterogenous architectures – so multiple ISAs are likely to be part of the chip with RISC-V being deployed for various functions.
Consulting firm SHD Group presented a report at the November 2023 RISC-V Summit in the U.S. to highlight its own RISC-V market analysis, which it expects to release as a report this year. In a briefing with embedded.com in December 2023, SHD Group’s principal analyst Richard Wawrzyniak told us, “We’re in a heterogeneous world. We’re not saying that RISC-V is taking over the world, but the ecosystem is building out well with all the elements a designer needs to create their own silicon.”
RISC-V market revenue forecast from the SHD Group. (Source: The SHD Group)
His research highlights the number of parts shipping with RISC-V inside, as opposed to the actual number of RISC-V cores. He said, "There are billions of units of SoCs shipping with RISC-V within them in any form or function. The report looks at 54 applications in six different categories, and projects that by 2030, RISC-V based SoCs will see chip revenues of almost $100 billion and will deliver IP revenues in the area of $1.6 billion. And around 2027, the research projects a flip from license-driven revenue to royalty-driven revenues."

3. Chiplet business will start looking a bit like IP business

Chiplets were all the rage last year, and 2024 could be the year when the chiplet business starts to look like the way the intellectual property (IP) business looked about 20 or so years ago. Some of us might recall things like the Virtual Component Exchange, originally established in 2000, whose objective was to act as a portal for both IP buyers and sellers.
Now, chiplets are one of the answers to overcoming the challenges of enabling the massive compute demands from today’s ML-intensive products, without having to build one monolithic chip at the most advanced (and expensive) process technology available. As Chuck Sobey, chair of last year’s Chiplet Summit in California said, “Chiplets can do much to increase chip scalability, modularity, and flexibility. But the idea only works if product developers can integrate them quickly and cheaply. Effective integration platforms require many tools. Vendors in all areas must provide a platform and support an ecosystem and open-source efforts to fill the interface and software gaps.”
Zero ASIC's efabric concept. (Image: Zero ASIC)
Since then, one of the most interesting startups to come up in this area in 2023 was Zero ASIC, who said they are democratizing chip making with a chiplet design and emulation platform. The company came out of stealth in October and offers 3D chiplet composability with fully automated no-code chiplet-based chip design. The company said its platform enables automated design, validation, and assembly of system-in-packages (SiPs) from a catalog of known good chiplets. Web-based design and emulation tools allow users to test out custom designs quickly and accurately before ordering physical devices, using cloud FPGAs to implement the RTL source code of each chiplet in a custom SoC.
The proliferation of chiplets and the ability to use chiplets from various sources will of course depend on standards, but that is already a part of the industry’s thinking, with the evolution of the UCIe (universal chiplet interconnect express) specification.

Not forgetting the AI/ML driving the trends: Nvidia

Ultimately, it’s demand for machine learning (ML) that is driving the trends above. And while not specifically highlighted, it goes without saying that domain specific AI/ML will be the key drivers in 2024 – moving us beyond the generic hype around AI and generative AI. And there will be concerns around accuracy, privacy, data security, and secure connectivity.
The abundance of data will be a key concern, so developers will no doubt be looking more closely at areas such as data encryption, cybersecurity at both hardware and connectivity layers, plus the use of machine learning to stay on top of these issues.
Various companies have offered their AI predictions and trends for 2024. Nvidia executives highlighted their 17 predictions for 2024. Manuvir Das, the company's vice president for enterprise computing, says one size won't fit all, and there will be hundreds of custom large language models (LLMs) to deliver accurate, specific, informed responses when analyzing the masses of data within the enterprise; to enable this, open source software and off-the-shelf AI and microservices will lead the charge.
In healthcare, Nvidia’s vice president of healthcare Kimberly Powell talks about combining instruments, imaging, robotics and real-time patient data with AI to enable better surgeon training, more personalization during surgery and better safety with real-time feedback and guidance even during remote surgery. She said, this will help close the gap on the 150 million surgeries that are needed yet do not occur, particularly in low- and middle-income countries.
In automotive, the company's vice president of automotive, Xinzhou Wu, said generative AI will help modernize the vehicle production lifecycle in smart factories and create digital twins. Beyond the automotive product lifecycle, generative AI will also enable breakthroughs in autonomous vehicle (AV) development, including turning recorded sensor data into fully interactive 3D simulations. These digital twin environments, as well as synthetic data generation, will be used to safely develop, test and validate AVs at scale virtually before they're deployed in the real world.
And Deepu Talla, Nvidia's vice president of embedded and edge computing, said generative AI will develop code for robots and create new simulations to test and train them. He added, "LLMs will accelerate simulation development by automatically building 3D scenes, constructing environments and generating assets from inputs. The resulting simulation assets will be critical for workflows like synthetic data generation, robot skills training and robotics application testing." For the robotics industry to scale, he said robots have to become more generalizable — that is, they need to acquire skills more quickly or bring them to new environments. "Generative AI models — trained and tested in simulation — will be a key enabler in the drive toward more powerful, flexible and easier-to-use robots."

The trends changing the computing landscape – according to Ampere

Continuing on the generic computing trends for 2024, Jeff Wittich, chief product officer at Ampere Computing, offered his perspective on what he sees for 2024 in a blog, and summarized here:
  1. AI inference and large-scale deployment take center stage
  2. Sustainability and energy efficiency become even more important in the context of AI
  3. Computing and data processing is becoming even more distributed and complex
And it’s that third bullet point that summarizes our trends for 2024, citing Wittich: “With the rise of AI, once “simple” devices are becoming increasingly intelligent, leading to more computing devices than ever before. To process data from these devices in real-time, high-performance computing is being deployed all over the place – in public and private clouds but now also at the edge, leading to greater demand for computing in super-local areas compared to centralized regions. The complexity of these environments requires solutions that the industry didn’t have five years ago, and there will be more specialized vendors than ever trying to address the situation.”
We’ll be at the following events over the next few months where we will be happy to chat more about some of these topics and more:

 
  • Like
  • Fire
Reactions: 13 users
Has it been decided if we go 28 or 12 nm (Akida 2.0)? I think 7 million for tape-out would definitely be too high for 28 nm. Even 12 nm is less than 7 (USD, that is). It doesn't change much on your points. I was just curious about this aspect.
No clue, if you're asking me..

But if it's all the same for "proof of concept", you'd go for the cheapest option, otherwise it's just a show of financial muscle, which we haven't got?

I have no idea of current end-to-end costings of 28nm..
 
  • Like
  • Love
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
This is not strictly relevant but if it's true that you are what you eat, then I'm definitely a pavlova.
 
  • Haha
  • Love
  • Like
Reactions: 14 users