BRN Discussion Ongoing

Diogenese

Top 20
Slowly working my way through this 756-page report. I think it provides a good overview, if somewhat skewed with an American bias, of just where AI is, was, and may be headed. It is such a broad subject, and it is so easy to get lost in the detail of what we are doing and trying to do. They freely acknowledge that they cannot predict exactly where this road will lead us nor what developments may come, but I did like their use of an analogy with the way Edison described the ongoing possibilities of the then recently harnessed uses of electricity...........

'However, what Thomas Edison said of electricity encapsulates the AI future: “It is a field of fields ... it holds the secrets which will reorganize the life of the world.” Edison’s astounding assessment came from humility. All that he discovered was “very little in comparison with the possibilities that appear.”'

A few further snippets that could have been printed straight from the Brainchip website................

'Edge Computing. Breaking size, weight, and power barriers also increases the ubiquity of AI and aids privacy protection. Companies are working to pack more computational power into tighter, specialized chips that use less energy to train and run the same models. Such chips allow consumer devices to run complex models locally, rather than transmit data externally and wait for models to run remotely. Retaining data entirely on the device where a model is being trained or run is an advancement that could potentially enhance individual privacy in AI-powered systems.'

And also...............

"The Department must act now to integrate AI into critical functions, existing systems, exercises and wargames to become an AI-ready force by 2025."


They are in a hurry, and advocating spending big ($8 billion p.a., ongoing) to try to catch up and close a perceived gap behind China and, in some areas, Russia.

Also check out the table below (page 85), particularly the Intelligent Edge Devices, Computing and Networking tab.............:)

In their long-term time horizon they are asking for.............. autonomous edge devices that dynamically learn, share, and team with other devices, while exercising intelligent data collection, exploitation, and retention, and mastery of domain-specific physical manipulation.

Not quite sure what all that means but a good deal of it sounds familiar to me.
Will keep reading.....:)

View attachment 3195
"The Department must act now to integrate AI into critical functions, existing systems, exercises and wargames to become an AI-ready force by 2025.”

One good thing is they can start breadboarding implementations using ADE/MetaTF (the siliconless Akida) yesterday.
 
  • Like
  • Fire
Reactions: 13 users

equanimous

Norse clairvoyant shapeshifter goddess
Just your thoughts on Dell? Me personally, yes; why else would you have Dell on a podcast?
Well, imagine having BrainChip incorporated in their Alienware product lineup.
1648282376934.png
 
  • Like
  • Fire
Reactions: 8 users

HUSS

Regular
These are the top 10 most valuable electronics and appliance brands in 2022. Let's hope that BRN will be engaging and associated, as per the research and dot-joining here, with at least one or two of these brands, like Samsung, Dell or LG!!
0C1F11BB-1D41-49B2-8AEB-9CE27A6CF219.jpeg
 
  • Like
  • Fire
Reactions: 17 users

Dozzaman1977

Regular
My thoughts... woo hoo, AKIDA Ballista! Who coined that phrase?
As if long-term investors that bought in sub-20c didn't cash in. Seriously, if you had a decent-sized parcel and your account went to millions of dollars, you'd have rocks in your head not to sell some and reap the rewards of your belief in BRN.
I remember a post, possibly 2018/2019, where some guy said he talked his wife into investing their house-deposit savings in BrainChip instead.... da da! House paid for now..... 🤗
Some might have borrowed to buy.. ta da! 🤗
Retail selling has nothing to do with fundamentals; people are just rewarding themselves, and I'm sure they still have a slice of the pie while living life somewhat more comfortably.
Instos gobbled up a lot of those shares imo; who else could afford to buy up in the tens of millions?
Now they're playing with those marbles. Lol, when you own most of the marbles you can bring out the Tom Thumbs and smash shit up however you want. It's a game now imo.
Fundamentals haven't changed; they never have. It's just got better for BRN.
Now if we bring in world politics, that's a different story. How does that affect rollout and $$$ coming in?
It all depends on consumers, yes? Remember this is not an 👁 phone.. yet.
🤔🧐🤔🧐🤔🧐🤔🧐🤔
Hi alfie
I've got quite a few shares with an average price just under 20 cents, which ticked over the magical number you mentioned, and I have not sold a single share........
I don't think I have rocks in my head.
I believe that BrainChip's Akida will be EXTREMELY BENEFICIAL TO MAKING EVERYBODY'S LIFE EASIER in the future due to the multitude (and magnitude) of applications it can be incorporated and used in....
Kung Fu Wtf GIF by A24
 
  • Like
  • Fire
  • Love
Reactions: 29 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I dance like this even though my podiatrist doesn't recommend it on account of the possibility that it could make my bunions worse.

 
Last edited:
  • Like
  • Love
Reactions: 9 users

dippY22

Regular
I have been in the Brainchip stock now a few years. I have become comfortable with the investment. I feel like I understand the tech well enough thanks to my time here and formerly on HC, and their educational web site. I get the huge opportunities ahead for Brainchip and for us as investors.

That said, yesterday I gave another person my elevator speech on this exciting opportunity. I was crisp, clear, confident and succinct. I explained that intellectual property sales are what BrainChip really desires. Then the person I was talking to asked, "How does the intellectual property thing work?" So I explained, "Well, if a customer is going to want more than a million-plus chips, they are recommended to enter into an IP contract with BrainChip." I talked about revenue expected from such deals: engineering support fees, the per-chip revenue, the upfront IP costs, and so on.

But then I added that one of our contracts is with Megachips, a fabless chip manufacturer in Japan who designs chips for others. The other person seemed confused, and said, "but how does the IP work with them? Does Brainchip make their design? Or does Brainchip license the technology to Megachips and how does that work....?" I suddenly realized I was in over my head and couldn't actually describe what and how the IP arrangement actually worked. I thought I understood IP sales, but realized that I couldn't explain it to this person with enough clarity that she would understand.

I realized that I didn't understand it well enough either. And so I will reach out to you folks and hope one or more of you can explain the IP contract sale and how it actually works. Without details of the Renesas or Megachips contract specifics, what is Brainchip doing when they enter one of these deals?

Is Brainchip (or Taiwan Semi) actually making the chips? Are they selling consulting service(s)? Have they licensed their secret sauce?

I would like an intellectual property sale "basics" explanation for dummies, if you will. To quote my friend, "how does the intellectual property thing work?", answered in a way that next time I will have the confidence and understanding to field such a question. In plain English, please.
 
  • Like
Reactions: 14 users

Neuromorphia

fact collector
dippY22 said: "...how does the intellectual property thing work?"
Megachips License Agreement
Additional Information on MegaChips Agreement
megachips-license-agreement 2.JPG


IP license agreement with Renesas
 
Last edited:
  • Like
  • Fire
Reactions: 30 users

Slade

Top 20
Akida IP = initial $$ plus future $$$$$$$$$$$$$$$$$. It's a great thing.
 
  • Like
  • Fire
Reactions: 16 users
dippY22 said: "...how does the intellectual property thing work?"

Bit of a read, but the attached handbook on the semiconductor IP business might give additional insight.

Also, a bit of background on the ARM model as another example; it's a bit more straightforward to understand. The model seems familiar ;)



ARM Holdings develops intellectual property (IP) used in silicon chips. It was founded in 1990 as a spinoff of British computer manufacturer Acorn Computers. The first time ARM designs were used in a cell phone was in 1994 for the Nokia 6110.
Semiconductor manufacturers combine ARM IP with their own IP to create complete chip designs. Chips containing ARM IP power most of today’s mobile devices, due to their low power consumption. In 2014, 60% of the world’s population used a device with an ARM chip on a daily basis. In 2012, 95% of the chips found in smartphones and tablets were ARM designs.
ARM licenses IP to over 1,000 global partners (including Samsung, Apple, Microsoft). The company doesn’t manufacture or sell chips, unlike semiconductor manufacturers such as Intel or AMD.
SoftBank purchased ARM in 2016 for £24.3 billion.
ARM Business Model

1. Detect and Solve Difficult Problems
ARM recognized that tablets, laptops, and smartphones were the next wave of technology. To create attractive chips and intellectual property for portable devices, ARM focused on faster processing speeds, lower power consumption, and lower costs.

2. Invest Heavily in R&D
In 2018, ARM invested $773 million in R&D (42% of 2018 revenues). ARM is able to incur R&D costs many years before revenue starts (eight years on average). In 2008, ARM’s R&D expenditure was £87 million, or 29% of revenues. Expenditures continue to grow over time.

3. License Intelligently
ARM earns fixed upfront license fees when it delivers IP to partners, plus variable royalties for each chip a partner ships that contains ARM IP. The licensing fees vary from an estimated $1 million to $10 million. The royalty is usually 1 to 2% of the selling price of the chip.

4. Scale without Manufacturing
Licensing enables ARM to scale the business efficiently. Designs can be sold multiple times and reused across multiple applications (e.g., mobile, consumer devices, networking equipment, etc.). ARM has no manufacturing costs.
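The fee-plus-royalty arithmetic above is simple enough to sketch. Here is a minimal Python illustration; all figures are hypothetical examples I've picked from the ranges quoted above (a one-off license fee plus a 1-2% royalty per chip shipped), not actual contract terms of ARM or anyone else:

```python
# Sketch of the fee-plus-royalty IP licensing model described above.
# All numbers are hypothetical examples, not real contract terms.

def ip_revenue(upfront_fee: float, chip_price: float,
               royalty_rate: float, units_shipped: int) -> float:
    """IP-vendor revenue: one-off license fee plus a per-chip royalty."""
    return upfront_fee + chip_price * royalty_rate * units_shipped

# e.g. a $5M upfront license, a $10 chip, 1.5% royalty, 50M units shipped
total = ip_revenue(5_000_000, 10.0, 0.015, 50_000_000)
print(f"${total:,.0f}")  # $5M upfront + $7.5M royalties = $12,500,000
```

The upfront fee covers delivering the IP regardless of volume; it's the royalty stream that scales with the partner's shipments, which is why the model needs no manufacturing on the IP vendor's side.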

1648305595655.png
 

Attachments

  • Handbook_Understanding_SemiIP_BusinessProcess.pdf
    214.5 KB · Views: 185
Last edited:
  • Like
  • Fire
Reactions: 25 users
(quoting the previous post on the semiconductor IP handbook and the ARM licensing model)

A recent article worth a read, explaining various areas of the process and market.




Machine Learning Showing Up As Silicon IP


It won’t replace ML chips, but it could broaden the market.
MARCH 3RD, 2022 - BY: BRYON MOYER

New machine-learning (ML) architectures continue to appear. Up to now, each new offering has been implemented in a chip for sale, to be placed alongside host processors, memory, and other chips on an accelerator board. But over time, more of this technology could be sold as IP that can be integrated into a system-on-chip (SoC).
That trend is evident at recent conferences, where an increasing number of announcements involve IP for which there may or may not be physical chips available.
“For customers who want to jump straight into machine learning and make their products smarter, it’s easier just to buy an AI chip with an existing tooling process and add that to your existing architectural platform,” said Danny Watson, principal engineer for the ICW business line at Infineon. “Going forward, when they’re going to make platform decisions across the whole portfolio, that’s when they’re going to have the architecture integrated into an SoC directly.”
This may accelerate the adoption of ML in dedicated applications, and it may be a faster route to market for ML hardware providers. While this may add options for system designers, IP comes with its own set of challenges.
Something completely new
While new technology appears every day in some form or another, most of what’s introduced is evolutionary. Typically, it involves a faster version of something, a new communications protocol, or a different way of storing data. It’s far less common that something completely new appears, adding a capability that was not possible before.
This is why ML’s impact has been so significant. While the ideas underlying the technology have been around for a while, it’s been only recently that silicon technology has made it possible for ML to be deployed at scales that previously were considered unfeasible. ML is a wholly new concept, not just a different way of doing something or an integration play. It has made possible solutions to problems that were not tractable before, and it has allowed system designers to conceive of equipment that would have been unthinkable until recently.
But as a new concept, there also is no best way to do it yet. The industry has been in the early stages of figuring out how to make it work, and there are lots of moving parts that can be tuned. So there are numerous proposals and offerings on different ways to solve problems.
It’s not a one-solution-fits-all situation either. What works best for one problem may be sub-optimal for another. The challenge is figuring out how much to specialize and how much to stay general-purpose.
The last time the chip industry was in this situation was decades ago — perhaps the availability of SSI logic or the microprocessor. Everything since then has been an improvement and better integration. Decades later, these functions are widely available as IP.
The new “new kid in town”
Now that the technology necessary for ML is available, the industry seems to be charting a similar trajectory to logic, but on an extremely compressed timescale. The first big challenge is how to best implement ML capabilities. As with logic in the beginning, that has meant the availability of individual chips performing the ML functions. System designers can include them as accelerators either in parallel with their main CPUs (if the chip includes a host), or by using their CPU as a host to control the ML tasks.
These chips originally were designed onto boards or modules dedicated to ML. With an original focus on the cloud, an entire board with a PCIe interface might be dedicated to ML training or inference. But with ML being offered as IP instead of individual chips, ML functions can be wrapped into SoCs, which in turn reduces the overall footprint on a board.
But as the industry increasingly moves to chiplets and disaggregates large SoCs, this business model may work for some startups. “If you don’t have hardware out by now, then you may strategically say, ‘Well, I’m gonna sell it as IP,’” said Dana McCarty, vice president of sales and marketing for inference products at Flex Logix.
Fig. 1: New neural architectures have largely been implemented as their own chip for inclusion on a board (left). Now offerings are starting to include IP for inclusion on a chip (right). Source: Bryon Moyer/Semiconductor Engineering

Cloud vs. edge

The impetus for ML IP will partially depend on where the ML will be instantiated. The original ML focus was on the cloud or in other data centers, where accelerator boards have proven to be a good solution. That becomes even more the case when considering a disaggregated data center of the future, where resources can be pulled as necessary. In the cloud, it makes more sense for ML functionality to stand alone so that only as much as is needed can be roped into a particular project. If it were built into every server, then it might sit idle while the server worked on projects not needing ML.
Data center form factors thus provide a constraint. “Cloud and data centers have much more fixed infrastructure,” said Nick Ni, director of product marketing for AI and software at AMD. “You can’t just change the PCIe form factor. There’s a full rack of data-center servers that’s already been deployed, so you have to live within the constraints.”
As inference moves to the edge, that’s not necessary. “The edge is completely different,” said Ni. “There’s more flexibility there. But it’s also much less adopted today, because there’s so much randomness in the hardware. In automotive, drones, and medical applications, it’s like 2% adoption. The hardware market still has a huge untapped potential, and nobody’s a winner so far.”
For the edge, integration into an SoC could make sense. Unlike server-based applications, these tend to focus on specific problems, so the solution can be tailored. Edge devices tend to be small, so having fewer packages also makes sense. And an SoC designer can manage the power and performance of the ML IP block more directly as compared to what would be possible with a dedicated chip.
There are typically two types of ML chips — those intended for training, with facilities for back-propagation, and those intended only for inference. Training is likely to be a cloud-based activity for the foreseeable future, so it’s not obvious that an IP version of a training-oriented ML architecture would make sense.
Embedded ML functions are much more likely to be focused on inference. To the extent that such a device might want to improve its model over time, data would be sent back to the cloud for further training rather than attempting training in the edge device itself (with the exception of limited incremental learning). That could change if new training techniques arise, but for the current mainstream, IP will likely be limited to inference.
The impact of standards
The move to IP has often been spurred by standards that limit how much creativity can be brought to some functions. When implementing PCIe, for example, differentiation cannot rely on adding novel features because the features themselves are specified in the standards. So differentiation occurs based on how those features are realized. Speed is often captured in a standard, so just being faster often has to do with how much headroom is available. That leaves power and cost as the major silicon-based characteristics for differentiation.
But the other big opportunity lies in ease of use. Standards-based IP, in particular, provides a well-defined set of implementation options such as bus widths, security choices, or optional features. A company that makes it easy for designers to implement its IP will have less pressure on things like price when competing.
But it hasn’t been only standards that have benefited from IP. Infineon’s Watson points out that the audio market, for instance, has seen chip-based implementations gradually move into IP for greater integration, even though that doesn’t come as a result of standards circumscribing the available options.
Somewhere between standards-based and completely ad-hoc IP are IP blocks that have become de-facto standards, typically by virtue of market power. Arm processors, for example, aren’t industry standards for how to implement a processor, as the growing popularity of RISC-V shows, but they are prevalent enough to have set an expectation that a certain class of processor be available as IP.
Processors are at the heart of most SoCs, and so by definition an SoC must use IP for a processor rather than a dedicated processor chip. SoCs attempt to integrate as much as possible into a platform architecture that can be leveraged over enough designs to achieve the sales volume sufficient to pay back the enormous cost of developing the chip.
Like CPUs, ML processors in embedded systems are also logical candidates for inclusion in an SoC. Most of them are built out of pure logic, so there is no obvious technology barrier. As a result, it’s natural that architects would look to pulling them inside the SoC for better performance and lower power.
This makes ML IP different from standards-based IP. “If you take Bluetooth, for example, you’ve got specific companies providing that IP,” said Watson. “And it’s a handful, because the spec is defined, and there’s a bound to the innovation that you can provide. With machine learning, we don’t have that, and everybody thinks they can do it better.”
That’s not to say that standards will not eventually enter the ML arena. “This is an area where standards haven’t been driving the enablement,” Watson said. “This is something that’s hit the ground, and now standards are trying to catch up. Because it’s a disrupter, everybody wants to make sure that when they create IP and standards eventually do get defined, that they are the big player providing either the IP or the dedicated chip.”
SDKs complicate matters
While new hardware ideas for implementing ML are still being churned out at a high rate, the role of the software stack has become increasingly important — some would say even more important than the hardware itself.
“Software is so complex that the hardware is a small piece of the overall value,” said McCarty.
AMD’s Ni agreed. “This space is all about software tools, and it’s very complex,” he said. “Folks tend to focus too much on the hardware to create IP that’s 10% more efficient. If you ask any AI customer, the biggest reason they’re using Nvidia today is software. Their software is mature and older. Software investments are very often underestimated by new entrants.”
But those investments also are increasingly required by companies buying chips. “Building a chip is never enough,” said Anoop Saha, senior manager for strategy and growth at Siemens EDA. “You need the software stack on top of it. It’s expensive, and you have to put in a lot of capital before you even see the chip and get revenue.”
As the number of ML chips increases, much of the competition rides on how easy an ML function is to implement for a given piece of hardware. The simpler it is, the more everyday designers can use it. The more one has to rely on specialist data scientists, the less reach the hardware will have. So while the software development kit (SDK) has become an important part of any ML offering, it’s easier to handle when offering a chip. The chip becomes its own standalone world, and the tools can operate independently of other parts of the system it goes into.
It’s not so cut-and-dried with ML as IP, though. A chip provider sells to the system builder, but an IP provider sells to a chip builder, which in turn sells to the system builder. “If you are an IP developer, you don’t know how your customer is going to use your IP,” said Saha. “And your customer probably does not know how it will be used in the market.”
That goes for the SDK, as well. The SoC builder won’t be using the SDK, but the system builder will. That means that the tools must pass through the chip designer to the system builder. Chip-level SDKs can assume a configuration of the silicon, but assuming ML IP is sold with options, an IP-oriented SDK must have an extra layer of flexibility to account for the different possible implementations.
In addition, the system builder will get those tools from the SoC vendor, not directly from the IP provider. That SoC will have its own SDK already, and so it’s only natural that the ML portions of the SDK be included with the rest of the SoC SDK. So the IP provider will need to ensure that its ML SDK has the proper hooks for integration into a larger set of tools. The SoC provider also will need to make the effort to wrap the IP tools into its own set to make it look as seamless as practical. That makes having an SDK that’s easy to integrate almost as important as having silicon IP that’s easy to integrate.
“Let’s create the wrappers so that it looks like it’s from us,” said Watson. “Under the hood, it’s utilizing an SDK from one of these IP providers, but that’s abstracted from the users.”
There are further complications with ML IP. SoC verification means that debug must be thought through. “You have to be sure that if something goes wrong at the customer site, you are able to trace that error into your IP,” said Saha. “Your physical design will become more complex, and you might get an issue with design closure.”
This issue persists even after a chip has been deployed into a system. “Let’s say you see a problem in the field,” he added. “How do you take that error back to your IP? And once you have that error, how do you reproduce that error and fix it?”
Effort, value, and margin
Integrating ML functions can be particularly challenging. SoC builders likely will want to customize bits and pieces of the IP. A shrink-wrapped solution is less likely today simply because there is still so much new that’s appearing. So IP vendors may have a high-touch process that involves working with customers to alter the basic IP.
“If you’re building IP and you have to customize it for every customer, then it’s a losing value proposition and becomes very painful,” noted Saha. “It takes a lot of manual effort to customize it for specific use cases.”
The easier the IP vendor makes it to customize the IP, the less work will be required on each sale and the more scalable the business will be. “You have to build something that is easily customizable and easily split into different architectures, different performance and bandwidth levels so you don’t have to customize it a lot,” he said. “Most importantly, you should be able to build it across different technologies.”
On the plus side, IP can provide more market access at lower risk. “You are targeting different markets,” said Saha. “It’s an easier way to monetize things, it’s less complex, and there is less chance of things going wrong.”
But the issue of margin is more complicated. “The company building and producing a chip will always be able to capture more value purely because they are at the forefront and they can set the prices,” he continued. “For the IP company, it becomes more difficult unless you have something so differentiated that you can charge a premium.”
The size and profile of an ML-solution provider also may impact the chip vs. IP decision. A large company will be able to sell a chip to a broad range of customers, whereas a small company may get better traction by selling IP to a large company, thereby indirectly getting access to its better-established customer base.
“It’s better to try and provide IP into SoC silicon, because it has the biggest reach,” said Watson. “You’re not going to get the same margin, but is it better to get $2 of margin on 10 things, or 50 cents on 100,000 things?”
Opportunities for both chips and IP
Some of the companies that are offering IP are also offering a chip – covering both sides of the opportunity. “I have seen multiple companies doing edge-inference where they were funded to design chips,” said Saha. “But then they decided to sell it as an IP for different SoC vendors.”
And some offerings will be IP only, although those companies will usually have built a test chip in order to verify their design. “Almost everybody who is designing IP has a test chip,” noted Saha.
The economics of chips and IP are different, of course. With new memories, for instance, DRAM and flash raise an almost impenetrable barrier against new entrants for dedicated memory chips. Embedded memory IP, on the other hand, can provide lots of value that DRAM and flash can’t provide.
In a similar manner, ML chips and IP need to be able to justify themselves economically. Of course, in this case, there’s not a highly optimized incumbent being threatened, so the new entrants will be competing with each other rather than a well-entrenched foe. Price-points have not yet been rationalized, and that process may get messy for both chips and IP.
So the industry definitely isn’t going into an all-IP mode. “I see both chips and IP,” said Sam Fuller, senior director of marketing at Flex Logix. “I see lots of both.”
But the appearance of IP as an option also is an indication that ML is being integrated into more functions than ever before, and that trend shows no sign of slowing down.
 
Reactions: 8 users

FKE

Regular
Hello,

Since I mostly can't follow the technical analysis, I focus on interviews and videos for my DD.
The reason: when you are talking and thinking at the same time, you can't do both perfectly. This leads to revealing information that would never have been revealed in a written statement.

Example:
Valeo Q&A after the latest presentation (LIDAR).
Q: Is Brainchip part of the product?
A: We are not talking about the source of the IP.

It's like the interrogation in a murder case:
Q: Did you kill the man?
A: No, I couldn't have; I was on the golf course at the time.
Q: We never mentioned that he was killed on the golf course.


I would like to comment on one point that I personally find striking. Our CEO has been asked questions in the last two investor events (Q&A) that were answered with the same statement (wording from my memory):
1.) Is Brainchip working with the Department of Defense?
2.) Does Brainchip work with smartphone vendors?

Answer from our CEO: Not our focus.

Smart answer. It would have been easy to simply say "no", but he did not. Why? Because it would be a lie. "Not our focus" sounds like a no, so no further questions get asked. Sean was simply very well prepared with this answer, so that no slip like Valeo's happens to him.

Now, in my opinion, we have connections to defense, the clearest one being with ISL.
I believe we also have connections to smartphone vendors. Why is this not being voiced? In my opinion: because of the huge impact and competitive advantage that using Akida creates for the customer.

Conversation.jpg


So what we can see are NDAs that don't reveal customer names but allow products to be talked about: drones, vibration analysis for rails, self-driving cars, etc.

And then there seem to be NDAs (or other agreements with customers) that are meant to keep topics a complete black box --> defense, smartphones, ...

But now to my point. In the Robohub podcast with Rob (Mimicking the Five Senses, on a Chip | Ep 348), Rob makes an interesting statement starting at around minute 10:30. Here he first talks about "vehicles to go over a thousand miles on a charge" (okay okay, Merc - got it). And then he makes the statement: "phones that would be able to last 3 to 5 days on a charge. Those are the type of things you're gonna see new technologies such as what we've designed with Akida start to change the way devices are architectured and which will allow us to have a lot more freedom and flexibility from wearables all the way through to new devices that will be introduced."

Okay okay stop.

Why is he mentioning smartphones? --> Wearables are a market.
Okay got it.
But why is he saying that these will last 3-5 days on one battery charge? Not 2-4, or 4-6, or "twice as long as before", or "significantly longer than today". No, he says 3-5 days, right after saying that the savings in the car are a factor of 5-10 compared to their current solution. To me the statement sounds like results from a customer working with Akida. It came out directly and without hesitation; it does not sound like an idea but like a fact.

I know my reasoning is on shaky ground, maybe because I am not a native speaker, but this sentence from Rob + the statement from Sean convince me.


Akida in Smartphones.jpg


All the best
FKE
 
Reactions: 92 users

uiux

Regular
I have been in the Brainchip stock now a few years. I have become comfortable with the investment. I feel like I understand the tech well enough thanks to my time here and formerly on HC, and their educational web site. I get the huge opportunities ahead for Brainchip and for us as investors.

That said, yesterday I gave another person my elevator speech on this exciting opportunity. I was crisp, clear, confident and succinct. I explained that intellectual property sales were what Brainchip really desired. Then the person I was talking to asked, "How does the intellectual property thing work?" So I explained, "Well, if a customer is going to want more than a million chips, they are recommended to enter into an IP contract with Brainchip." I talked about the revenue expected from such deals: engineering support fees, the per-chip royalty, the up-front IP licensing costs, and so on.

But then I added that one of our contracts is with MegaChips, a fabless chip company in Japan that designs chips for others. The other person seemed confused and said, "But how does the IP work with them? Does Brainchip make their design? Or does Brainchip license the technology to MegaChips, and how does that work?" I suddenly realized I was in over my head and couldn't actually describe how the IP arrangement worked. I thought I understood IP sales, but realized that I couldn't explain it to this person with enough clarity that she would understand.

I realized that I didn't understand it well enough either. And so I will reach out to you folks and hope one or more of you can explain the IP contract sale and how it actually works. Without details of the Renesas or Megachips contract specifics, what is Brainchip doing when they enter one of these deals?

Is Brainchip (or Taiwan Semi) actually making the chips? Are they selling consulting service(s)? Have they licensed their secret sauce?

I would like an intellectual-property-sale "basics" explanation for dummies, if you will, so that next time I will have the confidence and understanding to answer my friend's question, "How does the intellectual property thing work?" In plain English, please.



From what I understand, when a company licenses the IP it receives the RTL (the register-transfer-level design files), which allows the company to "drop" the IP into its own designs.

A way to understand it would be like a box of Lego with a special non-generic piece (this being the Akida IP). The customer builds their own thing out of the usual Lego pieces (a system-on-chip) and uses the special non-generic piece (the Akida IP) in the build, utilising it however they want.
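To make the economics of such a deal concrete, here is a toy sketch of how the revenue streams mentioned above (up-front license fee, ongoing engineering support, per-chip royalty) add up for the IP licensor. All figures and the fee structure itself are invented for illustration; they are not BrainChip's or anyone's actual terms.

```python
# Toy model of a silicon-IP licensing deal: the licensor is typically paid
# an upfront license fee, recurring engineering-support fees, and a
# per-chip royalty once the licensee ships silicon.
# Every number below is invented purely for illustration.

def ip_deal_revenue(upfront_fee, support_fee_per_year, years,
                    royalty_per_chip, chips_shipped):
    """Total revenue to the IP licensor over the life of the deal."""
    return (upfront_fee
            + support_fee_per_year * years
            + royalty_per_chip * chips_shipped)

# Hypothetical deal: $2M upfront, $250k/yr support for 4 years,
# and a $0.50 royalty on 10 million chips shipped.
total = ip_deal_revenue(2_000_000, 250_000, 4, 0.50, 10_000_000)
print(f"Licensor revenue over the deal: ${total:,.0f}")  # $8,000,000
```

Note how the royalty line dominates once volumes are large, which is why the licensor's fortunes end up tied to the licensee's shipments rather than to the licensor's own sales effort.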



 
Reactions: 28 users

TheFunkMachine

seeds have the potential to become trees.
I was about to fall asleep and this came to me, so I thought I would write it down.

We see increase in publicity
We see increase in employment
We see increase in partnerships
We see increase in job advertisement
We see increase in patent portfolio
We see increase in sales deals
We see increase in Meta TF users
We see increase in media coverage
We see increase in office space
We see increase in revenue
We see increase in world wide reach
We see increase in podcasts
We see increase in Ex Arm employees lol
We see increase in …..
The only thing we are not seeing an increase in atm is SP

Guess what comes next. 🕵

All my opinion, DYOR
 
Reactions: 47 users
FKE said:
(post quoted in full above)
I don’t think it would be reasonable to attempt to change your mind.

The former CEO, Mr. Dinardo, spoke of mobile phones as a potential target market. He also spoke specifically about a direct approach from a Chinese mobile phone maker, and he said that the only issue with mobile phones was that to target this opportunity you had to hit the phone maker at the right point in the design cycle, as new phones take four years to develop.

So for many years now the mobile phone market has been Brainchip’s focus. Why? Because there is a lot of money to be made by Brainchip in this industry.

At the moment we have the CEO, Mr. Hehir, stating that mobile phones are not their focus.

In the next breath we have had the VP of Worldwide Sales, Rob Telson, say off the top of his head that using AKIDA technology would allow phones to last 3 to 5 days on a single charge, at a time when, according to the CEO, mobile phones are not their focus. So why does Rob Telson have this example at the front of his thoughts???

Is it there at the front of his thinking because like the 1,000 kilometre EQXX there is a prototype mobile phone sitting on a boardroom table somewhere in Silicon Valley being discussed???

I have no idea but I do not think your theory is so outlandish that we need to make you think otherwise.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 43 users

Terroni2105

Founding Member
FKE said:
(post quoted in full above)
I 100 percent agree. I had exactly the same thought about phones when I heard him say that. We know Mercedes can do 1,000 km per charge and we are working with them; now, who is the phone company?
 
Reactions: 25 users

dippY22

Regular
A recent good article explaining various areas of the process and the market; worth a read.




Machine Learning Showing Up As Silicon IP


It won’t replace ML chips, but it could broaden the market.
MARCH 3RD, 2022 - BY: BRYON MOYER

New machine-learning (ML) architectures continue to appear. Up to now, each new offering has been implemented in a chip for sale, to be placed alongside host processors, memory, and other chips on an accelerator board. But over time, more of this technology could be sold as IP that can be integrated into a system-on-chip (SoC).
That trend is evident at recent conferences, where an increasing number of announcements involve IP for which there may or may not be physical chips available.
“For customers who want to jump straight into machine learning and make their products smarter, it’s easier just to buy an AI chip with an existing tooling process and add that to your existing architectural platform,” said Danny Watson, principal engineer for the ICW business line at Infineon. “Going forward, when they’re going to make platform decisions across the whole portfolio, that’s when they’re going to have the architecture integrated into an SoC directly.”
This may accelerate the adoption of ML in dedicated applications, and it may be a faster route to market for ML hardware providers. While this may add options for system designers, IP comes with its own set of challenges.
Something completely new
While new technology appears every day in some form or another, most of what’s introduced is evolutionary. Typically, it involves a faster version of something, a new communications protocol, or a different way of storing data. It’s far less common that something completely new appears, adding a capability that was not possible before.
This is why ML’s impact has been so significant. While the ideas underlying the technology have been around for a while, it’s been only recently that silicon technology has made it possible for ML to be deployed at scales that previously were considered unfeasible. ML is a wholly new concept, not just a different way of doing something or an integration play. It has made possible solutions to problems that were not tractable before, and it has allowed system designers to conceive of equipment that would have been unthinkable until recently.
But as a new concept, there also is no best way to do it yet. The industry has been in the early stages of figuring out how to make it work, and there are lots of moving parts that can be tuned. So there are numerous proposals and offerings on different ways to solve problems.
It’s not a one-solution-fits-all situation either. What works best for one problem may be sub-optimal for another. The challenge is figuring out how much to specialize and how much to stay general-purpose.
The last time the chip industry was in this situation was decades ago — perhaps the availability of SSI logic or the microprocessor. Everything since then has been an improvement and better integration. Decades later, these functions are widely available as IP.
The new “new kid in town”
Now that the technology necessary for ML is available, the industry seems to be charting a similar trajectory to logic, but on an extremely compressed timescale. The first big challenge is how to best implement ML capabilities. As with logic in the beginning, that has meant the availability of individual chips performing the ML functions. System designers can include them as accelerators either in parallel with their main CPUs (if the chip includes a host), or by using their CPU as a host to control the ML tasks.
These chips originally were designed onto boards or modules dedicated to ML. With an original focus on the cloud, an entire board with a PCIe interface might be dedicated to ML training or inference. But with ML being offered as IP instead of individual chips, ML functions can be wrapped into SoCs, which in turn reduces the overall footprint on a board.
But as the industry increasingly moves to chiplets and disaggregates large SoCs, this business model may work for some startups. “If you don’t have hardware out by now, then you may strategically say, ‘Well, I’m gonna sell it as IP,’” said Dana McCarty, vice president of sales and marketing for inference products at Flex Logix.
Fig. 1: New neural architectures have largely been implemented as their own chip for inclusion on a board (left). Now offerings are starting to include IP for inclusion on a chip (right). Source: Bryon Moyer/Semiconductor Engineering

Cloud vs. edge

The impetus for ML IP will partially depend on where the ML will be instantiated. The original ML focus was on the cloud or in other data centers, where accelerator boards have proven to be a good solution. That becomes even more the case when considering a disaggregated data center of the future, where resources can be pulled as necessary. In the cloud, it makes more sense for ML functionality to stand alone so that only as much as is needed can be roped into a particular project. If it were built into every server, then it might sit idle while the server worked on projects not needing ML.
Data center form factors thus provide a constraint. “Cloud and data centers have much more fixed infrastructure,” said Nick Ni, director of product marketing for AI and software at AMD. “You can’t just change the PCIe form factor. There’s a full rack of data-center servers that’s already been deployed, so you have to live within the constraints.”
As inference moves to the edge, that’s not necessary. “The edge is completely different,” said Ni. “There’s more flexibility there. But it’s also much less adopted today, because there’s so much randomness in the hardware. In automotive, drones, and medical applications, it’s like 2% adoption. The hardware market still has a huge untapped potential, and nobody’s a winner so far.”
For the edge, integration into an SoC could make sense. Unlike server-based applications, these tend to focus on specific problems, so the solution can be tailored. Edge devices tend to be small, so having fewer packages also makes sense. And an SoC designer can manage the power and performance of the ML IP block more directly as compared to what would be possible with a dedicated chip.
There are typically two types of ML chips — those intended for training, with facilities for back-propagation, and those intended only for inference. Training is likely to be a cloud-based activity for the foreseeable future, so it’s not obvious that an IP version of a training-oriented ML architecture would make sense.
Embedded ML functions are much more likely to be focused on inference. To the extent that such a device might want to improve its model over time, data would be sent back to the cloud for further training rather than attempting training in the edge device itself (with the exception of limited incremental learning). That could change if new training techniques arise, but for the current mainstream, IP will likely be limited to inference.
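The edge/cloud split described above can be sketched in a few lines: the device runs inference only, and any sample the local model is unsure about is queued for upload so that (re)training can happen in the cloud. The model, confidence threshold, and queue below are hypothetical stand-ins, not any vendor's API.

```python
# Sketch of inference-only edge behavior: no back-propagation on the
# device; low-confidence samples go back to the cloud for retraining.

CONFIDENCE_THRESHOLD = 0.80  # assumed cutoff for "model is unsure"

def run_edge_inference(model, sample, upload_queue):
    """Run a local prediction and queue uncertain samples for the cloud."""
    label, confidence = model(sample)
    if confidence < CONFIDENCE_THRESHOLD:
        upload_queue.append(sample)  # training happens off-device
    return label

# Dummy "model": classifies numbers as big/small with a fake confidence
# that drops for values near the decision boundary at 10.
def toy_model(x):
    return ("big" if x > 10 else "small",
            0.95 if abs(x - 10) > 5 else 0.60)

queue = []
labels = [run_edge_inference(toy_model, x, queue) for x in [2, 9, 30]]
print(labels, "queued for cloud retraining:", queue)
```

The point of the sketch is structural: the edge device never needs training hardware, only a channel for shipping hard cases back upstream.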
The impact of standards
The move to IP has often been spurred by standards that limit how much creativity can be brought to some functions. When implementing PCIe, for example, differentiation cannot rely on adding novel features because the features themselves are specified in the standards. So differentiation occurs based on how those features are realized. Speed is often captured in a standard, so just being faster often has to do with how much headroom is available. That leaves power and cost as the major silicon-based characteristics for differentiation.
But the other big opportunity lies in ease of use. Standards-based IP, in particular, provides a well-defined set of implementation options such as bus widths, security choices, or optional features. A company that makes it easy for designers to implement its IP will have less pressure on things like price when competing.
But it hasn’t been only standards that have benefited from IP. Infineon’s Watson points out that the audio market, for instance, has seen chip-based implementations gradually move into IP for greater integration, even though that doesn’t come as a result of standards circumscribing the available options.
Somewhere between standards-based and completely ad-hoc IP are IP blocks that have become de-facto standards, typically by virtue of market power. Arm processors, for example, aren’t industry standards for how to implement a processor, as the growing popularity of RISC-V shows, but they are prevalent enough to have set an expectation that a certain class of processor be available as IP.
Processors are at the heart of most SoCs, and so by definition an SoC must use IP for a processor rather than a dedicated processor chip. SoCs attempt to integrate as much as possible into a platform architecture that can be leveraged over enough designs to achieve the sales volume sufficient to pay back the enormous cost of developing the chip.
Like CPUs, ML processors in embedded systems are also logical candidates for inclusion in an SoC. Most of them are built out of pure logic, so there is no obvious technology barrier. As a result, it’s natural that architects would look to pulling them inside the SoC for better performance and lower power.
This makes ML IP different from standards-based IP. “If you take Bluetooth, for example, you’ve got specific companies providing that IP,” said Watson. “And it’s a handful, because the spec is defined, and there’s a bound to the innovation that you can provide. With machine learning, we don’t have that, and everybody thinks they can do it better.”
That’s not to say that standards will not eventually enter the ML arena. “This is an area where standards haven’t been driving the enablement,” Watson said. “This is something that’s hit the ground, and now standards are trying to catch up. Because it’s a disrupter, everybody wants to make sure that when they create IP and standards eventually do get defined, that they are the big player providing either the IP or the dedicated chip.”
SDKs complicate matters
While new hardware ideas for implementing ML are still being churned out at a high rate, the role of the software stack has become increasingly important — some would say even more important than the hardware itself.
“Software is so complex that the hardware is a small piece of the overall value,” said McCarty.
AMD’s Ni agreed. “This space is all about software tools, and it’s very complex,” he said. “Folks tend to focus too much on the hardware to create IP that’s 10% more efficient. If you ask any AI customer, the biggest reason they’re using Nvidia today is software. Their software is mature and older. Software investments are very often underestimated by new entrants.”
But those investments also are increasingly required by companies buying chips. “Building a chip is never enough,” said Anoop Saha, senior manager for strategy and growth at Siemens EDA. “You need the software stack on top of it. It’s expensive, and you have to put in a lot of capital before you even see the chip and get revenue.”
As the number of ML chips increases, much of the competition rides on how easy an ML function is to implement for a given piece of hardware. The simpler it is, the more everyday designers can use it. The more one has to rely on specialist data scientists, the less reach the hardware will have. So while the software development kit (SDK) has become an important part of any ML offering, it’s easier to handle when offering a chip. The chip becomes its own standalone world, and the tools can operate independently of other parts of the system it goes into.
It’s not so cut-and-dried with ML as IP, though. A chip provider sells to the system builder, but an IP provider sells to a chip builder, which in turn sells to the system builder. “If you are an IP developer, you don’t know how your customer is going to use your IP,” said Saha. “And your customer probably does not know how it will be used in the market.”
That goes for the SDK, as well. The SoC builder won’t be using the SDK, but the system builder will. That means that the tools must pass through the chip designer to the system builder. Chip-level SDKs can assume a configuration of the silicon, but assuming ML IP is sold with options, an IP-oriented SDK must have an extra layer of flexibility to account for the different possible implementations.
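One way to picture that extra layer of flexibility: where a chip SDK can hard-code the silicon configuration, an IP-oriented SDK must take the licensed implementation options as parameters. The sketch below is hypothetical; all names, fields, and the cycle-count formula are invented to illustrate the idea, not drawn from any real SDK.

```python
# Hypothetical IP-oriented SDK: the "compiler" is parameterized by the
# implementation options each SoC licensee actually chose, because no
# single silicon configuration can be assumed.

from dataclasses import dataclass

@dataclass
class IPConfig:            # options chosen by the SoC team at license time
    mac_units: int         # number of multiply-accumulate units
    bus_width_bits: int    # width of the bus the IP is integrated on
    sram_kb: int           # on-chip buffer memory

def compile_model(layer_macs, cfg: IPConfig):
    """Crude cycles-per-inference estimate for one specific instantiation.

    A chip SDK could bake mac_units in; an IP SDK has to accept it as
    input, since every licensee's SoC differs.
    """
    compute_cycles = sum(m // cfg.mac_units for m in layer_macs)
    return {"config": cfg, "cycles": compute_cycles}

small_soc = IPConfig(mac_units=64, bus_width_bits=64, sram_kb=256)
big_soc   = IPConfig(mac_units=512, bus_width_bits=128, sram_kb=2048)
layers = [1_000_000, 500_000]   # MACs per layer of a toy network

print(compile_model(layers, small_soc)["cycles"])  # smaller instantiation, more cycles
print(compile_model(layers, big_soc)["cycles"])    # larger instantiation, fewer cycles
```

The same compiled model yields very different performance estimates per configuration, which is exactly the variability the chip-level SDK never has to expose.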
In addition, the system builder will get those tools from the SoC vendor, not directly from the IP provider. That SoC will have its own SDK already, and so it’s only natural that the ML portions of the SDK be included with the rest of the SoC SDK. So the IP provider will need to ensure that its ML SDK has the proper hooks for integration into a larger set of tools. The SoC provider also will need to make the effort to wrap the IP tools into its own set to make it look as seamless as practical. That makes having an SDK that’s easy to integrate almost as important as having silicon IP that’s easy to integrate.
“Let’s create the wrappers so that it looks like it’s from us,” said Watson. “Under the hood, it’s utilizing an SDK from one of these IP providers, but that’s abstracted from the users.”
There are further complications with ML IP. SoC verification means that debug must be thought through. “You have to be sure that if something goes wrong at the customer site, you are able to trace that error into your IP,” said Saha. “Your physical design will become more complex, and you might get an issue with design closure.”
This issue persists even after a chip has been deployed into a system. “Let’s say you see a problem in the field,” he added. “How do you take that error back to your IP? And once you have that error, how do you reproduce that error and fix it?”
Effort, value, and margin
Integrating ML functions can be particularly challenging. SoC builders likely will want to customize bits and pieces of the IP. A shrink-wrapped solution is less likely today simply because there is still so much new that’s appearing. So IP vendors may have a high-touch process that involves working with customers to alter the basic IP.
“If you’re building IP and you have to customize it for every customer, then it’s a losing value proposition and becomes very painful,” noted Saha. “It takes a lot of manual effort to customize it for specific use cases.”
The easier the IP vendor makes it to customize the IP, the less work will be required on each sale and the more scalable the business will be. “You have to build something that is easily customizable and easily split into different architectures, different performance and bandwidth levels so you don’t have to customize it a lot,” he said. “Most importantly, you should be able to build it across different technologies.”
On the plus side, IP can provide more market access at lower risk. “You are targeting different markets,” said Saha. “It’s an easier way to monetize things, it’s less complex, and there is less chance of things going wrong.”
But the issue of margin is more complicated. “The company building and producing a chip will always be able to capture more value purely because they are at the forefront and they can set the prices,” he continued. “For the IP company, it becomes more difficult unless you have something so differentiated that you can charge a premium.”
The size and profile of an ML-solution provider also may impact the chip vs. IP decision. A large company will be able to sell a chip to a broad range of customers, whereas a small company may get better traction by selling IP to a large company, thereby indirectly getting access to that company's better-established customer base.
“It’s better to try and provide IP into SoC silicon, because it has the biggest reach,” said Watson. “You’re not going to get the same margin, but is it better to get $2 of margin on 10 things, or 50 cents on 100,000 things?”
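Watson's trade-off is easy to check with the numbers he quotes. A quick sketch (the unit counts and margins are exactly the ones in the quote, not real pricing data):

```python
# Comparing the two routes Watson describes: $2 of margin on 10
# units sold directly, versus 50 cents of margin on 100,000 units
# shipped by licensees of the IP.
chip_margin = 2.00 * 10          # total margin, chip route
ip_margin = 0.50 * 100_000       # total margin, IP route

print(f"chip route: ${chip_margin:,.2f}")  # chip route: $20.00
print(f"IP route:   ${ip_margin:,.2f}")    # IP route:   $50,000.00
```

The per-unit margin is 4x lower on the IP route, but the 10,000x larger volume dominates.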
Opportunities for both chips and IP
Some of the companies that are offering IP are also offering a chip – covering both sides of the opportunity. “I have seen multiple companies doing edge-inference where they were funded to design chips,” said Saha. “But then they decided to sell it as an IP for different SoC vendors.”
And some offerings will be IP only, although those companies will usually have built a test chip in order to verify their design. “Almost everybody who is designing IP has a test chip,” noted Saha.
The economics of chips and IP are different, of course. With new memories, for instance, DRAM and flash raise an almost impenetrable barrier against new entrants for dedicated memory chips. Embedded memory IP, on the other hand, can provide lots of value that DRAM and flash can’t provide.
In a similar manner, ML chips and IP need to be able to justify themselves economically. Of course, in this case, there’s not a highly optimized incumbent being threatened, so the new entrants will be competing with each other rather than a well-entrenched foe. Price-points have not yet been rationalized, and that process may get messy for both chips and IP.
So the industry definitely isn’t going into an all-IP mode. “I see both chips and IP,” said Sam Fuller, senior director of marketing at Flex Logix. “I see lots of both.”
But the appearance of IP as an option also is an indication…
From what I understand, when a company licenses the IP it gets the RTL (a register-transfer-level description of the design), which allows it to "drop" the IP into its own chip designs.

A way to understand it would be like a box of Lego with one special non-generic piece (the Akida IP). The customer builds their own thing out of the usual Lego pieces (a system-on-chip) and uses the special non-generic piece (the Akida IP) in the build, utilising it however they want.



Thanks to all who replied to my I.P. confusion. I must say, UIUX, that Wikipedia article, specifically the first paragraph, really helped. Thanks again.
 
  • Like
Reactions: 15 users

Learning

Learning to the Top 🕵‍♂️
I have been in Brainchip stock for a few years now and have become comfortable with the investment. I feel like I understand the tech well enough, thanks to my time here and formerly on HC, and to the company's educational website. I get the huge opportunities ahead for Brainchip and for us as investors.

That said, yesterday I gave another person my elevator speech on this exciting opportunity. I was crisp, clear, confident and succinct. I explained that intellectual property licensing was what Brainchip really desired. Then the person I was talking to asked, "How does the intellectual property thing work?" So I explained: "Well, if a customer is going to want more than a million-plus chips, they are recommended to enter into an IP contract with Brainchip." I talked about the revenue expected from such deals: engineering support fees, the per-chip royalties, the up-front IP licence costs, and so on.

But then I added that one of our contracts is with Megachips, a fabless chip company in Japan that designs chips for others. The other person seemed confused and asked, "But how does the IP work with them? Does Brainchip make their design? Or does Brainchip license the technology to Megachips, and how does that work?" I suddenly realized I was in over my head and couldn't actually describe how the IP arrangement worked. I thought I understood IP sales, but realized I couldn't explain it to this person with enough clarity that she would understand.

I realized that I didn't understand it well enough either. And so I will reach out to you folks and hope one or more of you can explain the IP contract sale and how it actually works. Without details of the Renesas or Megachips contract specifics, what is Brainchip doing when they enter one of these deals?

Is Brainchip (or Taiwan Semi) actually making the chips? Are they selling consulting service(s)? Have they licensed their secret sauce?

I would like an intellectual property sale "basics" explanation for dummies, if you will. To quote my friend: "How does the intellectual property thing work?" Explain it in a way that gives me the confidence and understanding to answer such a question next time. In plain English, please.
Hi dippY22,

Some of the above posts are great resources, but here's my easy take on it.

My most basic understanding of how the IP licensing contract works:

Think of Akida as a sauce (the five senses, low power, etc.).
When you use that sauce in your cooking it tastes great; without the sauce it doesn't (i.e. it incorporates into current tech to make it better).

So Brainchip has created the sauce, and that sauce is IP belonging to Brainchip as the creator (we investors don't know the formula for the sauce). Then Megachips comes and buys the sauce in bulk: they pay $2 million for the recipe. Now they can use that sauce in any dish they have. However, each time they use the sauce to sell a dish, they pay Brainchip a royalty per dish.
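In the analogy's terms, the cash flow of such a deal is an upfront licence fee plus a running royalty. A minimal sketch (the $2M figure is from the post; the per-unit royalty rate and shipment volume are invented purely for illustration):

```python
# Hypothetical IP licensing revenue: an upfront licence fee plus a
# per-unit royalty on every chip the licensee ships. The royalty
# rate and volume below are made up for illustration only.
def license_revenue(upfront: float, royalty_per_unit: float, units_shipped: int) -> float:
    return upfront + royalty_per_unit * units_shipped

# $2M upfront, hypothetical $0.10/unit royalty, 5M units shipped
total = license_revenue(2_000_000, 0.10, 5_000_000)
print(f"${total:,.0f}")  # $2,500,000
```

The upfront fee is fixed, so once volumes get large the royalty stream dominates the economics of the deal.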

Hope this is easy to understand.

It's great to be a shareholder.
 
  • Like
  • Fire
  • Love
Reactions: 29 users

dippY22

Regular
Yes, and it tastes good, too..... ha ha
 
  • Like
Reactions: 5 users

Quatrojos

Regular
  • Like
  • Fire
  • Love
Reactions: 30 users