BRN Discussion Ongoing

A8kr1

Member
The Merc "announcement" wasn't even an announcement by BrainChip. It came out in an interview with Mercedes about their new concept car.
How conveniently timed.
 

7für7

Top 20
  • Like
Reactions: 2 users

Deadpool

hyper-efficient Ai
Hi Esq
You beat me to it but this is the link to the Frequently Answered Questions where this information can be found on the Brainchip website.


sb182, every ASX-listed company has to maintain a register of shareholders. A company can do it itself or, as most do, appoint an agent; in this case BrainChip has appointed Boardroom. Every company has a different approach to the release of information; however, under the Rules a registered shareholder has the right to personally attend the share registry, in this case Boardroom, and inspect the register without payment of any fee. Over my time as a BrainChip shareholder the registry has by all accounts been a little flexible, if what others have stated is correct. I have never had the need to contact them so cannot speak from personal experience.

On the LDA Capital extension: we know it costs about $20 million per annum to keep the lights on. We know that BrainChip has about this amount, so about four quarters of cash runway.

We know that Brainchip has indicated the desire to produce the AKIDA 2.0 as a reference chip. We know that Brainchip also is hoping to complete the IP design of AKIDA 3.0.

So I simply ask the rhetorical question: if BrainChip only has enough cash to keep the lights on for the next four quarters, i.e. 2024, where were the funds coming from to turn AKIDA 2.0 into a reference chip, and if the AKIDA 3.0 IP is finalised, where will the funds come from to turn it into a reference chip?

It cost the best part of $5 million with Socionext and TSMC to produce AKD1000. I am not sure we had sufficient information to work out what AKD1500 cost at GlobalFoundries. AKIDA 2.0 and AKIDA 3.0 are more complex chips again to produce. If we look at the amount of capital secured via LDA Capital, by coincidence the minimum draw-down amount of $12 million would likely be sufficient to produce at least AKIDA 2.0, and may also cover AKIDA 3.0.
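FF's runway arithmetic can be checked with a quick back-of-the-envelope calculation. A minimal sketch, using the post's own estimates (roughly $20 million annual burn, roughly $20 million cash on hand, $12 million minimum LDA draw-down; these are a shareholder's guesses, not company guidance):

```python
# Back-of-the-envelope cash runway, using the estimates quoted in the post
# (illustrative only, not BrainChip guidance).
annual_burn = 20_000_000       # ~$20M per annum to "keep the lights on"
cash_on_hand = 20_000_000      # roughly the cash said to be available
min_lda_drawdown = 12_000_000  # minimum draw-down under the LDA facility

quarterly_burn = annual_burn / 4
runway_quarters = cash_on_hand / quarterly_burn
print(f"Runway: {runway_quarters:.0f} quarters")       # -> Runway: 4 quarters

extended = (cash_on_hand + min_lda_drawdown) / quarterly_burn
print(f"With LDA minimum: {extended:.1f} quarters")    # -> With LDA minimum: 6.4 quarters
```

On these numbers, the minimum draw-down alone adds about two and a half quarters of runway, which is why it plausibly covers at least one reference-chip tape-out.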

I will leave it to others to say whether the production of AKIDA 2.0 and possibly AKIDA 3.0 as reference chips to demonstrate LLMs to customers worldwide is a good or bad thing from a commercialisation perspective, and a sensible way to accelerate that process.

My opinion only DYOR
FF

AKIDA BALLISTA
I think this goes without saying but would just like to add that the company has to show that it has readily available funding going forward when it is potentially inking $M contracts with clients. I don't think anybody is going to sign on the dotted line if they think BRN may have a problem with finance going forward.
 
  • Like
  • Fire
  • Thinking
Reactions: 45 users

IloveLamp

Top 20
Great point
 
  • Like
Reactions: 14 users

7für7

Top 20
Why should BRN have problems with finance if customers sign contracts with BrainChip? If customers sign contracts, they do it because they see benefits, I guess. Our western capitalist economy runs on debt, otherwise it would collapse. So if everyone worried about the financial situation of other companies, there would be no business at all… just my opinion
 
  • Like
Reactions: 5 users
True. I just wanted to make a point:

If they desired to produce AKD2.0 back then, why not make the cap call at that time? The CEO has watched the SP decline for so, so long, only to amend now. It should be embarrassing for him. I hope it is...
Sorry I should have realised you were simply setting a trap and you did know the source of my statements.

I do hope you get your wish that the Brainchip share price collapses and your desire to embarrass Sean Hehir and prove his incompetence is realised.

Who gives a ...... about retail shareholders anyway as long as your personal grievance is satisfied???

Hang on are you not a real shareholder looking to make a return on your investment???

Then again, if you were, you would well know, just as you did when setting your trap, that a design win that was expected to see an IP licence sold fell over during the second half of 2023 when the company concerned closed down that arm of its operations, laying off thousands of employees.

You would also know that a number of the early access customers for AKIDA 2.0 out of the blue suspended their further testing until the finalised IP was released. Yet knowing these things, you completely ignore that they could have had an effect on BrainChip's planning, causing a rethink around the timing of when revenue would occur.

But no in your balanced way you have laid a trap to try and make a point purely to satisfy some desire to embarrass the CEO.

It is indeed an interesting approach to investment:

1. Ignore the known facts.
2. Draw adverse conclusions without allowing for any alternate view that these facts might support.
3. Publish these adverse conclusions on a shareholder forum.
4. Then engage in setting a trap. For what reason? Because you want the CEO to feel publicly embarrassed?
5. Celebrate your success in misleading those who read your posts.
6. In the end not embarrass the CEO.
7. Feel personally embarrassed by being caught out in the attempt.

I assume that what I have just responded to was another one of your traps. I like traps that work but given the above this one appears not to have triggered.

My opinion only DYOR
Fact Finder
 
  • Like
  • Love
  • Haha
Reactions: 44 users

Quatrojos

Regular
We know that Brainchip has indicated the desire to produce the AKIDA 2.0 as a reference chip.

From what I've seen, there's nothing about producing AKD2.0 as a reference chip in last year's AGM. How do you know this?
 
  • Fire
Reactions: 1 users

Quatrojos

Regular
Ad Hominem
 
  • Like
Reactions: 1 users

IloveLamp

Top 20
Excellent point, So I'll sit back and wait for future positive announcements to roll thru 24

Will you though............?

 
  • Haha
  • Like
Reactions: 13 users
Not initially happy that we have to dance with the Devil again, but it's good to have a bigger cash buffer.

"As we enter 2024 with the momentum to grow the business on multiple vectors with our 2nd generation Akida™ products, the Edge Box initiative and strategic partnerships, we need the ability to rapidly invest for growth and build on our lead," said Sean Hehir, CEO, BrainChip. "While we will continue to be judicious with our use of cash, having access to funding from our well-respected partners at LDA Capital strengthens our business continuity position against well-capitalized and more established competitors in a highly competitive market."


Sean's strategy is for aggressive growth and penetration of the Edge AI market, and we simply cannot do that with the unpredictability of current incoming revenue.

With tape-out costs and production of AKD2000 reference chips likely to be around 7 million dollars or more (my guess), the Company can't rely on piecemeal incoming funds to pursue its strategy of Edge AI domination and keep everything running as well.

It remains to be seen if BrainChip can secure any AKIDA 2 IP deals without a reference chip.
I believe, on the strength of its now multi-year relationships with other companies and their trust in the abilities of the BrainChip team (with its accomplishments in AKIDA technology thus far), that it's possible.

But neither I nor the Company can hold our breath on that one.


Any additional income will also be strongly applied to growth at this stage, and if progress continues and the share price rises, the LDA arrangement will provide much more than the minimum 12 million dollars.

All the better, to aggressively grow the Company.

Shareholders need to understand that while we are technologically superior, we are competing against companies that absolutely dwarf us with their financial and market muscle.

BrainChip is playing the "We will be a future Big Market player" card and we need a bigger bank roll, to back that up.

Of course there is an element of risk in such an aggressive growth strategy, but Sean knows he has "very good cards" (as does everyone else at the table)..

Overall, I'm personally pleased at the financial security this gives the Company going forward.

Remember, BrainChip plans on being around, for a long, long, long time and in a Big way.

  • Like
  • Fire
  • Love
Reactions: 46 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Here's an article written by Orr Danon, CEO of Hailo, on AI processors for edge devices. He mentions Mercedes having introduced ChatGPT and discusses the benefits of generative AI being able to operate without any internet connection, and how that will improve latency, privacy and cost-efficiency.

I find the name-dropping of Mercedes very interesting, considering Hailo doesn't have a known connection to Mercedes, but we do. 🥳




The untapped potential of generative AI with emerging edge applications

By Orr Danon, Jan 2, 2024 11:04am

Processing data at the edge of a network offers a promising path to generative AI’s full potential, according to the author. (Getty Images)
The internet has changed every aspect of our lives from communication, shopping, and working. Now, for reasons of latency, privacy, and cost-efficiency, the “internet of things” has been born as the internet has expanded to the network edge.
Now, with artificial intelligence, everything on the internet is easier, more personalized, and more intelligent. However, AI is currently confined to the cloud due to the large servers and high compute capacity it needs. As a result, companies like Hailo are driven by latency, privacy, and cost efficiency to develop technologies that enable AI on the edge.



Undoubtedly, the next big thing is generative AI. Generative AI presents enormous potential across industries. It can be used to streamline work and increase the efficiency of various creators — lawyers, content writers, graphic designers, musicians, and more. It can help discover new therapeutic drugs or aid in medical procedures. Generative AI can improve industrial automation, develop new software code, and enhance transportation security through the automated synthesis of video, audio, imagery, and more.



However, generative AI as it exists today is limited by the technology that enables it. That's because generative AI happens in the cloud — large data centers of costly, energy-consuming computer processors far removed from actual users. When someone issues a prompt to a generative AI tool like ChatGPT or some new AI-based videoconferencing solution, the request is transmitted via the internet to the cloud, where it's processed by servers before the results are returned over the network. Data centers are major energy consumers, and as AI becomes more popular, global energy consumption will rapidly increase. This is a growing concern for companies trying to balance the need to offer innovative solutions with the requirement to reduce operating costs and environmental impact.
As companies develop new applications for generative AI and deploy them on different types of devices — video cameras and security systems, industrial and personal robots, laptops and even cars — the cloud is a bottleneck in terms of bandwidth, cost, safety, and connectivity.
And for applications like driver assist, personal computer software, videoconferencing and security, constantly moving data over a network can be a privacy risk.



The solution is to enable these devices to process generative AI at the edge. In fact, edge-based generative AI stands to benefit many emerging applications.
Generative AI on the rise
Consider that in June, Mercedes-Benz said it would introduce ChatGPT to its cars. In a ChatGPT-enhanced Mercedes, for example, a driver could ask the car — hands free — for a dinner recipe based on ingredients they already have at home. That is, if the car is connected to the internet. In a parking garage or remote location, all bets are off.
In the last couple of years, videoconferencing has become second nature to most of us. Already, software companies are integrating forms of AI into videoconferencing solutions. Maybe it’s to optimize audio and video quality on the fly, or to “place” people in the same virtual space. Now, generative AI-powered videoconferences can automatically create meeting minutes or pull in relevant information from company sources in real-time as different topics are discussed.

However, if a smart car, videoconferencing system, or any other edge device can’t reach back to the cloud, then the generative AI experience can’t happen. But what if they didn’t have to? It sounds like a daunting task considering the enormous processing of cloud AI, but it is now becoming possible.

Generative AI at the edge
Already, there are generative AI tools, for example, that can automatically create rich, engaging PowerPoint presentations. But the user needs the system to work from anywhere, even without an internet connection.
Similarly, we’re already seeing a new class of generative AI-based “co-pilot” assistants that will fundamentally change how we interact with our computing devices by automating many routine tasks, like creating reports or visualizing data. Imagine flipping open a laptop, the laptop recognizing you through its camera, then automatically generating a course of action for the day,week or month based on your most used tools, like Outlook, Teams, Slack, Trello, etc. But to maintain data privacy and a good user experience, you must have the option of running generative AI locally.
In addition to meeting the challenges of unreliable connections and data privacy, edge AI can help reduce bandwidth demands and enhance application performance. For instance, if a generative AI application is creating data-rich content, like a virtual conference space, via the cloud, the process could lag depending on available (and costly) bandwidth. And certain types of generative AI applications, like security, robotics, or healthcare, require high-performance, low-latency responses that cloud connections can’t handle.
In video security, the ability to re-identify people as they move among many cameras — some placed where networks can't reach — requires data models and AI processing in the actual cameras. In this case, generative AI can be applied to automated descriptions of what the cameras see through simple queries like, "Find the 8-year-old child with the red T-shirt and baseball cap."
That’s generative AI at the edge.
Developments in edge AI
Through the adoption of a new class of AI processors and the development of leaner, more efficient, though no-less-powerful generative AI data models, edge devices can be designed to operate intelligently where cloud connectivity is impossible or undesirable.
Of course, cloud processing will remain a critical component of generative AI. For example, training AI models will remain in the cloud. But the act of applying user inputs to those models, called inferencing, can — and in many cases should — happen at the edge.
The industry is already developing leaner, smaller, more efficient AI models that can be loaded onto edge devices. Companies like Hailo manufacture AI processors purpose-designed to perform neural network processing. Such neural-network processors not only handle AI models incredibly rapidly, but they also do so with less power, making them energy efficient and apt to a variety of edge devices, from smartphones to cameras.
Utilizing generative AI at the edge enables effective load-balancing of growing workloads, allows applications to scale more stably, relieves cloud data centers of costly processing, and helps reduce environmental impact. Generative AI is on the brink of revolutionizing computing once more. In the future, your laptop’s LLM may auto-update the same way your OS does today — and function in much the same way. However, in order to get there, generative AI processing will need to be enabled at the network’s edge. The outcome promises to be greater performance, energy efficiency, security and privacy. All of which leads to AI applications that reshape the world just as significantly as generative AI itself.
Orr Danon is CEO of Hailo, a maker of AI processors for edge devices used in automotive, security, industrial automation and retail applications, among others.
 
  • Like
  • Fire
  • Love
Reactions: 17 users
Not initially happy that we have to dance with the Devil again, but it's good to have a bigger cash buffer.

Has it been decided whether Akida 2.0 goes 28 nm or 12 nm? I think $7 million for tape-out would definitely be too high for 28 nm. Even 12 nm is less than $7 million (USD, that is). It doesn't change much on your points. I was just curious about this aspect.
 
  • Thinking
Reactions: 1 users

buena suerte :-)

BOB Bank of Brainchip
Agreed. We don't want to go looking for finance deals at the 11th hour. Get the $$$$$$ in place and ready to go 'if needed'! I'm taking this as a positive! :)

Cheers
 
  • Like
  • Love
  • Fire
Reactions: 18 users

Bravo

If ARM was an arm, BRN would be its biceps💪!



3 trends for 2024: AI drives more edge intelligence, RISC-V, & chiplets

January 2, 2024 Nitin Dahad
With the rise of AI, once “simple” devices are becoming increasingly intelligent, leading to more computing devices than ever before.



With CES 2024 set to open its doors in Las Vegas just a week from now, it’s clear that this year is all about evolving consumer electronics products that rely on ever more connected, embedded edge intelligence.
This is nothing new, and we’ve been talking about it for a few years, but after the industry ‘hype’ of 2023 around generative AI, consumers will begin to understand more of what it means for them in their everyday lives.
Almost every industry vertical will see more connected embedded devices with even more smartness or intelligence at the edge.


In his keynote at CES 2024 on Tuesday 9th January 2024, Pat Gelsinger, CEO of Intel, will explore how silicon, amplified by innovative and open software, is enabling AI capabilities for consumers and business alike. And on the following day, Qualcomm president and CEO Cristiano Amon will highlight how more devices will be seamlessly integrated into our lives; he’ll explain that AI running pervasively and continually on devices will transform user experiences, making them more natural, intuitive, relevant, and personal, with the need for increased immediacy, privacy, and security.

That means more machine learning (ML) in more and more constrained devices, in the sensors, whether it is for the internet of things (IoT), for industrial automation, for autonomous mobility and software-defined vehicles (SDVs), or for health and wearable devices.
In the embedded world here are what I see as trends enabling some of this:


1. Edge intelligence gets better

If it’s any indication of the direction of travel, then the research paper just released by Apple on deploying large language models (LLMs) on resource constrained devices with limited memory is certainly a pointer. It’s paper, entitled “LLM in a flash: Efficient Large Language Model Inference with Limited Memory”, tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters on flash memory but bringing them on demand to DRAM.
In the paper, the Apple team said their method involves constructing an inference cost model that harmonizes with the flash memory behavior, enabling optimization in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. To do this, they have introduced two techniques: one called "windowing", to strategically reduce data transfer by reusing previously activated neurons, and a second called "row-column bundling", to increase the size of data chunks read from flash memory.
The paper states, "These methods collectively enable running models up to twice the size of the available DRAM, with a 4-5x and 20-25x increase in inference speed compared to naive loading approaches in CPU and GPU, respectively. Our integration of sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for effective inference of LLMs on devices with limited memory."
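The "windowing" idea can be sketched in a few lines. This is a loose illustration of the concept as the article describes it (keep the neurons activated for the last few tokens resident in DRAM and fetch only the newly needed ones from flash); the class and method names are hypothetical, and Apple's actual implementation is far more involved:

```python
# Toy sketch of the "windowing" idea from "LLM in a flash", as described in
# the article: DRAM retains only neurons activated within the last k tokens,
# so each new token fetches just the neurons not already resident.
# Names and structure are illustrative, not Apple's implementation.
from collections import deque

class WindowedNeuronCache:
    def __init__(self, window_size: int):
        self.window = deque(maxlen=window_size)  # per-token active-neuron sets
        self.resident = set()                    # neuron ids currently in DRAM

    def step(self, active_neurons: set[int]) -> set[int]:
        """Return the neuron ids that must be loaded from flash for this token."""
        to_load = active_neurons - self.resident
        self.window.append(active_neurons)       # oldest token's set falls out
        self.resident = set().union(*self.window)
        return to_load

cache = WindowedNeuronCache(window_size=2)
print(cache.step({1, 2, 3}))  # nothing resident yet -> {1, 2, 3}
print(cache.step({2, 3, 4}))  # only neuron 4 is new -> {4}
print(cache.step({1, 5}))     # 1 still in window, 5 fetched -> {5}
```

The flash traffic per token shrinks to the set difference, which is the "strategically reduce data transfer by reusing previously activated neurons" part; row-column bundling would then govern how those missing rows are physically read.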
What this points to is that the general direction of travel in the industry is the deployment of more machine learning and inference at the edge.

2. RISC-V adoption becomes more visible

In August 2023, several companies announced the formation of a new unnamed company that would help accelerate commercialization of RISC-V hardware globally. Well, in the last week that company was named as Quintauris, with Alexander Kocher appointed as CEO. Headquartered in Munich, Germany, the company's investors are Bosch, Infineon, Nordic Semiconductor, NXP Semiconductors, and Qualcomm Technologies, Inc. While the website is minimal at the moment, it states:
“The company will be a single source to enable compatible RISC-V based products, provide reference architectures, and help establish solutions widely used in the industry. Initial application focus will be automotive, but with an eventual expansion to include mobile and IoT.”
It would be interesting to see how Quintauris sits alongside RISC-V International, its neighbor headquartered in Zurich, Switzerland. The industry association no doubt looks after maintaining the instruction set architecture’s (ISA’s) specifications, while Quintauris is likely to be a resource for developers needing ready-made reference boards and systems for their own development.
When many commentators talk about RISC-V, the point that is often missed is that developers are designing systems based on heterogeneous architectures – so multiple ISAs are likely to be part of the chip, with RISC-V being deployed for various functions.
Consulting firm SHD Group presented a report at the November 2023 RISC-V Summit in the U.S. to highlight its own RISC-V market analysis, which it expects to release as a report this year. In a briefing with embedded.com in December 2023, SHD Group’s principal analyst Richard Wawrzyniak told us, “We’re in a heterogeneous world. We’re not saying that RISC-V is taking over the world, but the ecosystem is building out well with all the elements a designer needs to create their own silicon.”
RISC-V market revenue forecast. (Source: The SHD Group)
His research highlights the number of parts shipping with RISC-V inside, as opposed to the actual number of RISC-V cores. He said, “There are billions of units of SoCs shipping with RISC-V within them in any form or function. The report looks at 54 applications in six different categories, and projects that by 2030, RISC-V based SoCs will see chip revenues of almost $100 billion and will deliver IP revenues in the area of $1.6 billion. And around 2027, the research projects a flip from license-driven revenue to royalty-driven revenues.”

3. Chiplet business will start looking a bit like IP business

Chiplets were all the rage last year, and 2024 could be the year the business starts to look like the way the intellectual property (IP) business looked about 20 or so years ago. Some of us might recall things like the Virtual Component Exchange, originally established in 2000, whose objective was to act as a portal for both IP buyers and sellers.
Now, chiplets are one of the answers to overcoming the challenges of enabling the massive compute demands from today’s ML-intensive products, without having to build one monolithic chip at the most advanced (and expensive) process technology available. As Chuck Sobey, chair of last year’s Chiplet Summit in California said, “Chiplets can do much to increase chip scalability, modularity, and flexibility. But the idea only works if product developers can integrate them quickly and cheaply. Effective integration platforms require many tools. Vendors in all areas must provide a platform and support an ecosystem and open-source efforts to fill the interface and software gaps.”
Zero ASIC’s efabric concept. (Image: Zero ASIC)
Since then, one of the most interesting startups to come up in this area in 2023 was Zero ASIC, who said they are democratizing chip making with a chiplet design and emulation platform. The company came out of stealth in October and offers 3D chiplet composability with fully automated no-code chiplet-based chip design. The company said its platform enables automated design, validation, and assembly of system-in-packages (SiPs) from a catalog of known good chiplets. Web-based design and emulation tools allow users to test out custom designs quickly and accurately before ordering physical devices, using cloud FPGAs to implement the RTL source code of each chiplet in a custom SoC.
The proliferation of chiplets and the ability to use chiplets from various sources will of course depend on standards, but that is already a part of the industry’s thinking, with the evolution of the UCIe (universal chiplet interconnect express) specification.

Not forgetting the AI/ML driving the trends: Nvidia

Ultimately, it’s demand for machine learning (ML) that is driving the trends above. And while not specifically highlighted, it goes without saying that domain specific AI/ML will be the key drivers in 2024 – moving us beyond the generic hype around AI and generative AI. And there will be concerns around accuracy, privacy, data security, and secure connectivity.
The abundance of data will be a key concern, so developers will no doubt be looking more closely at areas such as data encryption, cybersecurity at both hardware and connectivity layers, plus the use of machine learning to stay on top of these issues.
Various companies have offered their AI predictions and trends for 2024. Nvidia executives highlighted 17 predictions for 2024. Manuvir Das, the company’s vice president for enterprise computing, says one size won’t fit all, and there will be hundreds of custom large language models (LLMs) delivering accurate, specific, informed responses from analyzing the masses of data within the enterprise; to enable this, open source software and off-the-shelf AI and microservices will lead the charge.
In healthcare, Nvidia’s vice president of healthcare Kimberly Powell talks about combining instruments, imaging, robotics and real-time patient data with AI to enable better surgeon training, more personalization during surgery and better safety with real-time feedback and guidance, even during remote surgery. She said this will help close the gap on the 150 million surgeries that are needed yet do not occur, particularly in low- and middle-income countries.
In automotive, the company’s vice president of automotive, Xinzhou Wu, said generative AI will help modernize the vehicle production lifecycle in smart factories and create digital twins. Beyond the automotive product lifecycle, generative AI will also enable breakthroughs in autonomous vehicle (AV) development, including turning recorded sensor data into fully interactive 3D simulations. These digital twin environments, as well as synthetic data generation, will be used to safely develop, test and validate AVs at scale virtually before they’re deployed in the real world.
And Deepu Talla, Nvidia’s vice president of embedded and edge computing, said generative AI will develop code for robots and create new simulations to test and train them. He added, “LLMs will accelerate simulation development by automatically building 3D scenes, constructing environments and generating assets from inputs. The resulting simulation assets will be critical for workflows like synthetic data generation, robot skills training and robotics application testing.” For the robotics industry to scale, he said robots have to become more generalizable — that is, they need to acquire skills more quickly or bring them to new environments. “Generative AI models — trained and tested in simulation — will be a key enabler in the drive toward more powerful, flexible and easier-to-use robots.”

The trends changing the computing landscape – according to Ampere

Continuing on the generic computing trends for 2024, Jeff Wittich, chief product officer at Ampere Computing, offered his perspective on what he sees for 2024 in a blog, and summarized here:
  1. AI inference and large-scale deployment take center stage
  2. Sustainability and energy efficiency become even more important in the context of AI
  3. Computing and data processing is becoming even more distributed and complex
And it’s that third bullet point that summarizes our trends for 2024, citing Wittich: “With the rise of AI, once “simple” devices are becoming increasingly intelligent, leading to more computing devices than ever before. To process data from these devices in real-time, high-performance computing is being deployed all over the place – in public and private clouds but now also at the edge, leading to greater demand for computing in super-local areas compared to centralized regions. The complexity of these environments requires solutions that the industry didn’t have five years ago, and there will be more specialized vendors than ever trying to address the situation.”
We’ll be at the following events over the next few months where we will be happy to chat more about some of these topics and more:

 
Has it been decided whether we go 28 nm or 12 nm for Akida 2.0? I think 7 million for tape-out would definitely be too high for 28 nm; even 12 nm is less than 7 (USD, that is). It doesn’t change much about your points. I was just curious about this aspect.
 
Has it been decided whether we go 28 nm or 12 nm for Akida 2.0? I think 7 million for tape-out would definitely be too high for 28 nm; even 12 nm is less than 7 (USD, that is). It doesn’t change much about your points. I was just curious about this aspect.
No clue, if you're asking me..

But if it's all the same for "proof of concept" you'd go for the cheapest option, otherwise it's just a show of financial muscle, which we haven't got?

I have no idea, of current end to end costings, of 28nm..
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
This is not strictly relevant but if it's true that you are what you eat, then I'm definitely a pavlova.
 
No clue, if you're asking me..

But if it's all the same for "proof of concept" you'd go for the cheapest option, otherwise it's just a show of financial muscle, which we haven't got?

I have no idea, of current end to end costings, of 28nm..
This post from Sept '23, from a Chinese tech company by the looks of it, gives a pretty good rundown of the process and approximate costs.





Talk about Chip Design, Tape-out, Verification, Manufacturing, and Cost

YM Innovation Technology (Shenzhen) Co., Ltd
Published Sep 22, 2023
Let’s talk about chip design, tape-out, verification, manufacturing, and cost.
Wafer Terminology

1. Chip (die), device, circuit, microchip, or bar: all these terms refer to the microchip patterns that occupy most of the area on the wafer surface;
2. Scribe line (saw line) or street (avenue): these areas separate the individual chips on the wafer. Scribe lines are usually blank, but some companies place alignment marks or test structures in them;
3. Engineering die and test die: these differ from the production circuit die; they contain special devices and circuit modules used for electrical testing of the wafer production process;
4. Edge die: chips at the wafer edge with incomplete patterns represent lost area. Larger individual die sizes increase edge waste, which is offset by using larger-diameter wafers; reducing the area lost to edge die is one driving force behind the industry's move to larger wafer diameters;
5. Wafer crystal plane: the cross-section in the figure marks the lattice structure under the devices; the device edges are oriented relative to this lattice;
6. Wafer flats/notches: the wafer shown has a major flat and a minor flat, indicating a P-type <100> wafer. Both 300 mm and 450 mm wafers use a notch instead as the crystal-orientation mark. These flats and notches also assist wafer alignment in some production steps.
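The edge-die loss described in point 4 can be put in numbers with the standard gross-die-per-wafer approximation. A minimal Python sketch, assuming an arbitrary 100 mm² die purely for illustration:

```python
import math

def gross_die_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Estimate gross die per wafer using the common approximation
    DPW ~ pi*(d/2)^2/S - pi*d/sqrt(2*S): wafer area over die area,
    minus a correction for partial die lost around the circular edge."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

# The fraction lost at the edge shrinks as the diameter grows, which is
# the stated driver toward larger-diameter wafers.
for dia_mm in (200, 300, 450):
    print(dia_mm, gross_die_per_wafer(dia_mm, 100.0))
```

For a fixed die size, moving from 200 mm to 300 mm wafers cuts the edge-loss fraction from roughly 14% to roughly 9% under this approximation.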
Chip Tape-out Method (Full Mask, MPW)
Full Mask and MPW are both tape-out methods (i.e. ways of handing the design over for manufacturing). Full Mask means all masks in the manufacturing process serve a single design; MPW stands for Multi-Project Wafer, meaning multiple projects share the same wafer, so one manufacturing run can carry the masks for multiple IC designs.
1. Full Mask: a full-mask run can produce thousands of die per wafer; once packaged into chips, these can support large-scale bulk customer demand.
2. Multi-Project Wafer: multiple integrated circuit designs are taped out with the same process on the same wafer. After manufacturing, dozens of chip samples can be obtained for each design, which is enough for prototype-stage experimentation and testing. This approach can reduce tape-out fees by 90-95%, greatly reducing the cost of chip development.
The wafer fab offers several fixed MPW runs each year, called shuttles, which leave on schedule whether you are ready or not. Since different companies compete for wafer area, there must be rules: MPW space is booked by seat, a seat generally being an area of about 3 mm × 4 mm, and to ensure different chip companies can all participate, fabs generally limit the number of seats each company may reserve (otherwise seat costs would climb and MPW would lose its point). The advantage of MPW is the small production cost, usually only a few hundred thousand (RMB), which greatly reduces risk. Note that from a production perspective an MPW run is a complete process flow, so it is still time-consuming: one MPW run generally takes 6 to 9 months, which delays chip delivery.
Because wafer area is shared, the number of chips obtained through MPW is very limited. They are mainly used for the chip company’s internal verification and testing, and may also be provided to a very small number of lead customers. As you may have gathered by now, an MPW run is an incomplete flow and cannot be used for mass production.
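As a rough sanity check on that 90-95% saving, here is a tiny Python sketch; the 7% seat share is an assumed midpoint, not a fab quote, and the US$2 million 28nm full-mask figure is taken from the cost list later in this post:

```python
def mpw_seat_cost(full_mask_cost: float, share: float = 0.07) -> float:
    """Approximate MPW seat price as a fraction of a full mask set.
    The post quotes a 90-95% fee reduction, i.e. a project pays
    roughly 5-10% of the full-mask price; 7% is an assumed midpoint."""
    return full_mask_cost * share

full_mask_28nm = 2_000_000  # USD, per the tape-out cost list in this post
print(f"28nm MPW seat (approx.): ${mpw_seat_cost(full_mask_28nm):,.0f}")
```

That lands in the "few hundred thousand" ballpark the post mentions, though actual shuttle pricing varies by fab, node, and seat area.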
Chip ECO Process
ECO stands for Engineering Change Order. An ECO can occur before, during, or after tape-out; for an ECO after tape-out, small changes may require modifying only a few metal layers, while large changes may require more than a dozen metal layers or even a re-tape-out. The ECO implementation flow is shown in the figure.

If an MPW or full-mask chip is found in verification to have functional or performance defects, the circuit and standard-cell placement are adjusted on a small scale through an ECO: localized optimization is performed while keeping the original placement and routing results essentially unchanged, the remaining violations are repaired, and the chip is brought to its sign-off criteria. These violations are not fixed by re-running the back-end place-and-route flow (going through the full flow again is too time-consuming); instead, timing, DRC, DRV, and power consumption are optimized through the ECO flow.
Tape-out Corner
1. Corner

Chip manufacturing is a physical process with inherent deviations (doping concentration, diffusion depth, degree of etching, etc.), so characteristics differ between batches, between wafers in the same batch, and between chips on the same wafer.
Even on a single wafer, the average carrier drift velocity cannot be identical at every point, and characteristics also vary with voltage and temperature. To classify this, the PVT (Process, Voltage, Temperature) space is divided into different process corners:
TT: Typical N Typical P
FF: Fast N Fast P
SS: Slow N Slow P
FS: Fast N Slow P
SF: Slow N Fast P
The first letter represents NMOS and the second PMOS, reflecting the different N-type and P-type doping concentrations. NMOS and PMOS are fabricated by independent process steps and do not affect each other, but in a circuit they operate together: NMOS may be fast or slow at the same time PMOS is fast or slow, so four combined cases arise in addition to TT: FF, SS, FS, and SF. By adjusting the process implants, device speed is simulated, and different grades of FF and SS are set according to the size of the deviation. Under normal circumstances most parts are TT, and the five corners above cover about 99.73% of the range at +/-3 sigma; this randomness follows a normal distribution.
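The 99.73% figure is just the +/-3 sigma mass of a normal distribution, easy to verify with the error function in Python:

```python
import math

def within_sigma(k: float) -> float:
    """Fraction of a normal distribution lying within +/- k standard deviations."""
    return math.erf(k / math.sqrt(2))

print(f"{within_sigma(3) * 100:.2f}%")  # matches the ~99.73% quoted above
```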
2. The significance of corner wafers.
During engineering tape-out, the fab will pilot-run key layers to tune inline variation, and some fabs also run backup wafers to ensure the shipped devices are on target, i.e. near the TT corner. If you simply want some samples from an engineering tape-out, you do not need to verify the corners; but if you are preparing for subsequent mass production, you must consider them. Since the process deviates during production, and the corners are an estimate of the line’s normal fluctuation, the fab will also require corner verification for mass-produced chips. Corners must therefore be met at the design stage: the circuit must be simulated at the various corners and at temperature extremes so that it works correctly at all of them, giving the final chip a high yield.
3. Corner split-table strategy for products.
Corners are usually specified against the spec; normally the spec spans 6 sigma. For example, FF2 (or 2FF) means 2 sigma in the fast direction, and SS3 (or 3SS) means 3 sigma in the slow direction. Sigma here mainly represents the fluctuation of Vt: the larger the fluctuation, the larger the sigma. These sigmas sit on the spec line of the process device; slightly exceeding it can be allowed, because the actual fluctuation on the line cannot land exactly on the spec.
The following is an example of a proposed corner split table for a 55nm logic process chip:
① #1 & #2: two pilot wafers, one for blind packaging and one for CP measurement;
② #3 & #4: two wafers held at Contact as reserved engineering wafers for subsequent revisions, which saves ECO tape-out time;
③ #5~#12: eight wafers held at Poly; wait for the pilot result to see whether the device speed needs adjusting, and verify the corners;
④ besides leaving enough chips for testing and verification, the Metal Fix stage should also reserve as many wafers as possible for mass-production shipment, according to project requirements.
4. Confirming the corner result.
First, most parts should fall within the window determined by the four corners; a large deviation may indicate a process shift. If the yield at each corner is unaffected and meets expectations, the process window is sufficient. If yield is low in individual cases, the process window needs adjusting. The purpose of corner wafers is to verify the design margin and check whether there is any yield loss. In general, chips that fall outside the performance range bounded by the corners are scrapped.
Corner verification is benchmarked against WAT test results and is generally led by the fab, but the cost of corner wafers is borne by the design company. With a mature, stable process, the parameters of chips on the same wafer, in the same batch, and even across batches are very close, with a relatively small deviation range. Process-corner (PVT: Process, Voltage, Temperature) errors matter because, unlike bipolar transistors, MOSFET parameters vary greatly between wafers and between batches.
To ease the circuit-design task somewhat, process engineers must ensure device performance stays within a certain range; in general, they strictly control the expected parameter variation by scrapping chips that fall outside it.
① MOS transistor speed refers to the threshold voltage: fast corresponds to a low threshold, slow to a high threshold. GBW = gm/Cc; other things being equal, a lower Vth gives a higher gm, hence a larger GBW and faster operation. (Analyze the specific situation in detail.)
② Resistor speed: fast corresponds to a small sheet resistance, slow to a large sheet resistance.
③ Capacitor speed: fast corresponds to the smallest capacitance, slow to the largest capacitance.
Tape-out Cost and Wafer Price
A 40nm tape-out mask set costs about US$800,000-900,000, and wafers cost about US$3,000-4,000 each. Including IP merge, it comes to at least seven to eight million yuan.
A 28nm tape-out costs US$2 million;
a 14nm tape-out costs US$5 million;
a 7nm tape-out costs US$15 million;
a 5nm tape-out costs US$47.25 million per run;
a 3nm tape-out may cost hundreds of millions of dollars.
Of the two main tape-out costs, masks and wafers, masks are the more expensive.
The more advanced the process node, the more mask layers are required, because each mask layer corresponds to one pass of photoresist coating, exposure, development, etching and other operations, involving material costs and equipment depreciation; these costs are paid by the fabless customer!
The 28nm process requires about 40 mask layers;
the 14nm process requires about 60;
the 7nm process requires 80 or even hundreds.
A single mask layer costs about US$80,000, so chips must be mass-produced to bring the cost down!
Take the 40nm MCU process as an example: with 10 wafers, the cost per wafer is (900,000 + 4,000 × 10) / 10 = US$94,000; with 10,000 wafers, it is (900,000 + 4,000 × 10,000) / 10,000 = US$4,090. The larger the wafer volume, the cheaper each wafer becomes, and different manufacturers quote differently.
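The arithmetic in that example is simply a one-off mask NRE amortized over the wafer run, plus a recurring per-wafer cost; as a minimal Python sketch:

```python
def cost_per_wafer(mask_nre: float, wafer_cost: float, volume: int) -> float:
    """Amortized cost per wafer: one-off mask NRE spread across the run,
    plus the recurring per-wafer fabrication cost."""
    return (mask_nre + wafer_cost * volume) / volume

# Reproduces the 40nm MCU example: $900k mask set, $4k per wafer
print(cost_per_wafer(900_000, 4_000, 10))      # 94000.0
print(cost_per_wafer(900_000, 4_000, 10_000))  # 4090.0
```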
TSMC’s latest quotation this year: the most advanced 3nm process at US$19,865 per wafer, equivalent to roughly RMB 142,000.

 