BRN Discussion Ongoing

Who else has been throwing around the word "UBIQUITOUS" lately?
Haha, must have come out of my gob at least 50 times this week.
 
  • Like
  • Love
  • Haha
Reactions: 6 users

Violin1

Regular
Hey V,

Point four, IMO, should be taken in context with the recent statement made by team BRN (PVDM, I think) that customers requiring volumes over 1M are encouraged to work with IP rather than BRN silicon.

I take this to mean that BRN will no longer get out of bed for less than this volume...

Q
Hi Q - don't think there was any specific conversation of this nature around the AGM. I got the sense that the company has moved on from this - that we won't be producing chips for sale of that nature. Anything at all produced will be for marketing and sales purposes. So IP or we can't do it. So your summary about the bed seems entirely apt! My take only.....
V
 
  • Love
  • Like
Reactions: 2 users

Violin1

Regular
Look at these class names and we are still only just looking at the tip of our berg.
So many arenas that we can improve have already been mentioned or hinted at on this site, just with our first iteration, and to my mind, we are still just getting started. Ubiquitous is such a lovely word, and it seems to me that probably the vast majority of our use cases haven't even been thought of, or considered properly yet, except perhaps by Peter and his fellow visionaries.
The market will eventually catch on to what we already know or suspect, and will be chasing our shares even harder than it already is.
When this juggernaut takes off again, and the adrenalin and excitement has us in thrall, it would be wise to be prepared with a plan that takes into account our individual circumstances and objectives, so we are each set up as we would prefer, rather than as buck fever dictates.
Unicorns don't come around too often, or may go unrecognised till too late in the day, so don't trade 'em away too cheap, guys 'n' gals.
One day, she may just take off and re-rate, leaving some weeping.
Not financial advice, yaddah yaddah.....
AKIDA BALLISTA
AKIDA EVERYWHERE.................isn't that just another way to say ubiquitous???:ROFLMAO:
GLTAH
And particularly us long-termers and old-timers .......we deserve it. :ROFLMAO::ROFLMAO::ROFLMAO:

No matter what our individual decisions - let's all hang on to a parcel so we can attend the AGM and crawl the pubs for strategic and thoughtful investment decisions (not just the deductions lol). Great to meet you and see you next year!
 
  • Like
  • Fire
  • Love
Reactions: 10 users

Boab

I wish I could paint like Vincent
An Apple senior executive who was working on their electric vehicle project, and who was previously at Tesla working on their semi-autonomous drive system, has reportedly jumped ship to work for Luminar.
Luminar......Mercedes......Akida... dot dot dot🤞🤞🤞

 
  • Like
  • Love
  • Fire
Reactions: 26 users
Hey everyone
Just seen on the TV doorbell/camera advertising from Google.
Does this ring a bell with anyone?
 
  • Like
  • Thinking
  • Fire
Reactions: 8 users

Boab

I wish I could paint like Vincent
An Apple senior executive who was working on their electric vehicle project, and who was previously at Tesla working on their semi-autonomous drive system, has reportedly jumped ship to work for Luminar.
Luminar......Mercedes......Akida... dot dot dot🤞🤞🤞

This article has a bit more info.
 
  • Like
  • Thinking
  • Fire
Reactions: 4 users

Tuliptrader

Regular
Things that make me go mmmmmm. NPU, or is it the co-processor that is mentioned? Any thoughts about the potential


An article along the same lines about Microsoft, Arm, NPUs and co-processors.

www.pcworld.com/article/704538/microsoft-thinks-cloud-powered-arm-devices-are-the-pcs-future.html/amp

Microsoft’s vision for the PC’s future: AI-infused ‘NPUs’ and the cloud

Will the PC transition from powerful processors to AI-powered, cloud-connected devices? Microsoft seems to hope so.
By Mark Hachman
Senior Editor, PCWorld MAY 24, 2022 3:10 PM PDT

We’ve become used to sharing files between the cloud and our local PCs, so much that we often don’t think about where a file physically resides. It sounds like Microsoft wants us to start thinking about our computing resources in the same way — and start considering AI when buying a new PC or processor, too.

In the future, an app you use will decide whether to use the power of your local CPU; a local “neural processing unit,” or NPU; or the cloud to get your task done fast.


“We’re entering a world where every Windows computer will draw on the combined power of CPUs, GPUs, NPUs, and even a new coprocessor, Azure Compute, in this hybrid cloud-to-edge world,” said Microsoft chief executive Satya Nadella during his keynote address at the Microsoft Build conference, held virtually. “You will be able to do large scale training in the cloud and do inference at the edge and have the fabric work as one.”

What does this mean? Today, we buy beefy X86 processors to run games, Microsoft Excel, and other intensive applications on our PCs. What Microsoft is trying to accomplish is to bring new classes of applications to the PC, “magical experiences” that increasingly depend on artificial intelligence. These apps will use the CPUs and GPUs that are already in your PC, but also tap into resources like the cloud and an AI processor like an NPU to help out, too.


What Nadella is proposing is that developers — Nadella mentioned Adobe by name — build more intelligent capabilities into their own apps. (Think Photoshop’s “Magic Select” tool, which can, well, magically select an object by guessing that’s what you’re highlighting.) Those apps would be trained and improved in the cloud, teaching the apps through machine learning how to get smarter. They would then be tested — “inferencing,” in AI-speak — on your PC.

In other words, Microsoft sees the evolution of apps as one where the cloud will interact with the local PC in two ways. You’ll save files on your PC (or on a backup drive or USB key) and on OneDrive; and you’ll run apps on your local CPU and GPU, as well as Azure. The wild card will be the NPU, an AI coprocessor that has been somewhat ignored by X86 chipmakers like AMD and Intel, but prioritized by Arm. Deciding between what to use — CPU, GPU, NPU, or the cloud — is what Microsoft calls “the hybrid loop,” and it will be an important decision your Windows PC will have to make. From your perspective, however, it should just work.
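
The “hybrid loop” can be pictured as a tiny scheduler that picks a target per workload. The sketch below is purely illustrative; the `dispatch` heuristic, the 50 MB cut-off and the `Target` names are invented here for the example and are not Microsoft's API:

```python
from enum import Enum, auto

class Target(Enum):
    CPU = auto()
    GPU = auto()
    NPU = auto()
    CLOUD = auto()

def dispatch(task_is_ml: bool, model_mb: float, online: bool, npu_present: bool) -> Target:
    """Pick an execution target for a workload.

    Toy heuristic only: a real scheduler would also weigh latency,
    battery state, privacy policy and current load.
    """
    if not task_is_ml:
        return Target.CPU                 # ordinary code stays on the CPU
    if npu_present and model_mb <= 50:
        return Target.NPU                 # small models fit the local NPU
    if online and model_mb > 50:
        return Target.CLOUD               # big models go to an Azure-style cloud
    return Target.GPU                     # otherwise fall back to the local GPU

print(dispatch(task_is_ml=True, model_mb=10, online=True, npu_present=True))  # Target.NPU
```

The article's point is that this decision should eventually be invisible to the user, the way a phone's Wi-Fi/cellular handoff is today.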

More to come at Microsoft Build
At its Build developer conference, executive vice president and chief product officer Panos Panay will talk about a “vision for a world of intelligent hybrid compute, bringing together local compute on the CPU, GPU, and NPU [neural processing unit] and cloud compute with Azure,” the company’s cloud technology.

“In the future, moving compute workloads between client and cloud will be as dynamic and seamless as moving between Wi-Fi and cellular on your phone today,” Panay wrote in a blog post titled “Create Next Generation Experiences at Scale with Windows,” that accompanied the opening of the Build conference.


“We’re building on the GPU, the CPU, the NPU and, in essence, we’re introducing a fourth processor to Windows with Azure Compute — using Azure, one of the world’s most powerful computers, to enable rich local experiences on Windows,” Panay said at the Build conference.

We don’t know exactly what AI-powered apps these will be, but Microsoft itself has provided a few past examples.

So far, all of these applications depend on your local PC’s processor, such as automatic captioning of local video. Others, such as automatic scheduling of meetings in Outlook, can certainly use your own local processing power, but they could also use the Azure cloud that powers Outlook.com. The point is: you don’t know, and you don’t care. It just gets done.

As to what those applications might be? We don’t know, though Microsoft is clearly trying to rally developers to build those applications using Build as inspiration. We know Microsoft would really love for you to start incorporating its Azure cloud into your computing experiences, even if you don’t think of “Azure” as something you’d sign up for. Microsoft’s Xbox cloud gaming? The “Windows in a cloud,” or Windows 365? Outlook on the Web? All of these heavily depend on Microsoft Azure, and they simply can’t be replicated without a cloud subscription backing them up.


A strong endorsement for Arm
What’s somewhat surprising, however, is how strongly Microsoft seems to believe that Arm PCs will be necessary to enable this future. “Increasingly, magical experiences powered by AI will require enormous levels of processing power beyond the capabilities of traditional CPU and GPU alone. But new silicon like neural processing units (NPUs) will add expanded capacity for key AI workloads,” Panay wrote.

Put another way, “in the future, the speed of your computer will be measured by the power of its neural processor,” said Microsoft technical fellow Steven Bathiche, at Build.

Both AMD and Intel have made noises about AI capabilities within their processors, beginning with Intel’s 10th-gen “Ice Lake” chip. There, Intel showed off how AI could be used to filter out background noises in conference calls, accelerate photo-editing tools, and more. But AI hasn’t really been a focus in subsequent presentations. For their part, AMD executives mentioned that Ryzen 7000 would have specific AI instructions that they would talk about later, and that was that.

Arm and its licensee Qualcomm, however, have made AI an enormous priority, and its recent Snapdragon 8+ Gen 1 chip contains what Qualcomm calls its 7th-gen AI engine, with a 3rd-gen Sensing Hub that works on a low level to filter out audible noise along with other features. In smartphones, the impact of AI is more immediately felt, with “portrait” images and video using AI to set off the subject from the background, and apply filters. Qualcomm may refer to it as an “AI engine,” but it’s an NPU by any other name. And Microsoft seems to want them on the PC.


In a demonstration, Bathiche showed off how a face tracker algorithm would require more than 20 watts on a traditional X86 CPU, while on an NPU it took only 137 milliwatts.
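
Taking the demo's two figures at face value, the gap is easy to quantify (a back-of-the-envelope calculation; real numbers depend on duty cycle and workload):

```python
cpu_watts = 20.0      # face tracker on a traditional x86 CPU, per the demo
npu_watts = 0.137     # the same algorithm on an NPU: 137 milliwatts

print(f"power ratio: {cpu_watts / npu_watts:.0f}x")        # ~146x
# Energy for one hour of continuous tracking, in watt-hours:
print(f"CPU: {cpu_watts:.1f} Wh  NPU: {npu_watts:.3f} Wh")
```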

And how is it going to do all this? It’s not exactly clear. Surprisingly, one name that we haven’t heard anything about is Windows ML, an AI API that we first heard about in 2018 to bring AI to Windows…then sort of disappeared.

On one hand, Microsoft hasn’t called out Arm specifically as a preferred NPU provider. But we’ve seen a continued push to support Arm over the past few years, dating from the first Windows on Arm implementations to 64-bit app support. Unfortunately, in a world where Microsoft (and customers) didn’t care so much about AI, Qualcomm’s Snapdragon chips were forced to try and differentiate themselves on battery life, an advantage that X86 chips cut into. AMD claimed that the longest-lasting laptop, as measured by MobileMark, now runs on a Ryzen, for example.

Microsoft hasn’t given up. At Build, the company announced Project Volterra, a new device powered by Snapdragon chips. (Microsoft representatives declined to comment on the exact specifications.) Volterra, pictured in the image at the top of this story, will be used as a way for developers to create these “magical apps” that use NPUs — on Arm.


“With Project Volterra you will be able to explore many AI scenarios,” Panay wrote. “And because we expect to see NPUs being built into most, if not all future computing devices, we’re going to make it easy for developers to leverage these new capabilities, by baking support for NPUs into the end-to-end Windows platform.”

Right now, you’re probably working on a PC that uses an X86 processor of some type, either from AMD or Intel. And you probably rely on the power of that PC to accomplish whatever task you wish to complete. Microsoft certainly doesn’t dictate the future of the personal computer, as we’ve learned from the failures of Windows 10X, the Surface Neo, and so on. But it does have significant influence, and the power of the cloud has already touched your life in subtle but meaningful ways. Microsoft appears to be trying to tug the PC into a direction that puts cloud connectivity plus local AI at the head of the table, with Azure as the main dish.

Will the PC industry follow suit? PC makers have generally been willing to experiment with Cortana buttons and Surface-like tablets and Windows 10 in S Mode and the like. But they’re quick to steer back towards what makes them money, too, ruthlessly cutting experiments that don’t pan out. Still, there’s no denying that Microsoft has staked a claim with a fresh vision for the future of the PC, and that’s worth watching.

TT
 
  • Like
  • Fire
  • Love
Reactions: 33 users
On a scale of 1 to 10, where does the partnership with ARM sit in regard to revenue?
 
  • Like
Reactions: 2 users
Thanks TT,

Great article: right place, right time!

:)
 
  • Like
  • Love
Reactions: 11 users
ARM: over 29 billion semiconductors produced in the last 12 months.

Mercedes-Benz: around 3 million passenger vehicles. If each one had 10 x AKD1000, that is 30 million semiconductors.

30 million opportunities -v- 29 billion opportunities.
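
As a quick sanity check of the arithmetic above (the 10-chips-per-car figure is an assumption in this post, not a confirmed design):

```python
arm_chips = 29e9             # Arm-based chips shipped in the last 12 months
mb_vehicles = 3e6            # approximate Mercedes-Benz passenger-vehicle volume
akida_per_vehicle = 10       # assumed 10 x AKD1000 per car

mb_chips = mb_vehicles * akida_per_vehicle
print(f"{mb_chips:,.0f} vs {arm_chips:,.0f}")      # 30,000,000 vs 29,000,000,000
print(f"ARM channel ~{arm_chips / mb_chips:,.0f}x bigger")  # ~967x
```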

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 44 users

Evermont

Stealth Mode
With significant “dry powder” now at our disposal, we are more committed than ever to supporting the most visionary founders committed to solving the world’s hardest frontier technology problems for the largest markets. 🔥




Update on Our Inaugural Fund

We are delighted to share that, following our most recent closing, GFT Fund I is now officially oversubscribed for our original $100M target fundraise! Notwithstanding, due to strong investor demand, we’ve decided to increase our inaugural fund size, and are still accepting additional commitments from both new and existing Limited Partners. Although our final closing will likely occur in the coming months, now would be a good time to connect with us if you still have interest in being part of (or increasing your commitment to) GFT Fund I. As always, we very much appreciate the strong support we’ve received from our investors, partners, colleagues and friends during the launch of the GFT Ventures franchise. We are ecosystem builders and supporters at our core, and we couldn’t be more excited by the industry reception we’ve received.

Given the much more challenging macroeconomic environment in which we now find ourselves, a number of you have asked us to explain the exceptionally strong investor interest in our GFT Fund I. We feel the answer is quite simple, and speaks to the fundamental premises upon which we created the GFT franchise. That is — every industry, every vertical and every business model is likely to be influenced (or, disrupted) by modern AI, Data Science and/or Blockchain technologies and platforms. We are just now at the beginning of a multi-decade period, driven by some of the most disruptive technologies we have seen in our lifetimes. And we fully expect these deep technology driven trends to transcend multiple economic cycles.

In fact, history suggests that times like these are ideal for launching new venture funds, especially those driven by experienced managers with discipline and focus. With significant “dry powder” now at our disposal, we are more committed than ever to supporting the most visionary founders committed to solving the world’s hardest frontier technology problems for the largest markets. A more challenging environment presents us with the opportunity to deploy our capital even more carefully, prudently, and at much more attractive valuations, than would have been encountered just a few short months ago. Our timing could not be better!


In the prior quarter (and since our last newsletter), GFT Ventures has closed our investment in Mars Auto (more details below), an autonomous trucking company operating in South Korea and beyond. Our pipeline remains full, and we continue to welcome introductions to, and inquiries from, the best early stage AI, Data Science and Blockchain-related companies seeking capital at reasonable valuations.

Thank you all again for your support and friendship. It means the world to us.

Best regards,

Jeff Herbst and Jay Eum

Founding Managing Partners, GFT Ventures

 

  • Like
  • Fire
  • Wow
Reactions: 24 users

Diogenese

Top 20

Hi TT,

Love this bit:

Put another way, “in the future, the speed of your computer will be measured by the power of its neural processor,” said Microsoft technical fellow Steven Bathiche, at Build.

This is why our cozying up with SiFive triggered our newfound friendship with ARM.


Well the bit about the cloud reads on this:

Microsoft patent application:

US2021312257A1 DISTRIBUTED NEUROMORPHIC INFRASTRUCTURE



In non-limiting examples of the present disclosure, systems, methods and devices for synchronizing neuromorphic models are presented. A sensor input may be received by a first neuromorphic model implemented on a neuromorphic architecture of a first computing device. The neuromorphic model may comprise a plurality of neurons, with each of the plurality of neurons associated with a threshold value, a weight value, and a refractory period value. The first sensor input may be processed by the first model. A first output value may be determined based on the processing. The model may be modified via modification of one or more threshold values, weight values, and/or refractory period values. A modified version of the first neuromorphic model may be saved to the first computing device based on the modification. An update comprising the modification may be sent to a second computing device hosting the model.
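
As described, the synchronization scheme amounts to: modify local neuron parameters, record a new version, and ship the delta to a peer device. Below is a minimal sketch of that flow; the class names and fields are inferred from the abstract, not taken from the patent's claims:

```python
from dataclasses import dataclass

@dataclass
class Neuron:
    threshold: float
    weight: float
    refractory_period: float

@dataclass
class NeuromorphicModel:
    neurons: list[Neuron]
    version: int = 0

    def adjust(self, idx: int, **params) -> dict:
        """Modify one neuron's parameters; return the delta to broadcast."""
        neuron = self.neurons[idx]
        for name, value in params.items():
            setattr(neuron, name, value)   # threshold, weight or refractory_period
        self.version += 1
        return {"idx": idx, "version": self.version, **params}

    def apply_update(self, update: dict) -> None:
        """Apply a delta received from a peer device."""
        neuron = self.neurons[update["idx"]]
        for name in ("threshold", "weight", "refractory_period"):
            if name in update:
                setattr(neuron, name, update[name])
        self.version = update["version"]

# Device A tweaks a neuron after processing a sensor input; device B syncs.
a = NeuromorphicModel([Neuron(threshold=1.0, weight=0.5, refractory_period=2.0)])
b = NeuromorphicModel([Neuron(threshold=1.0, weight=0.5, refractory_period=2.0)])
b.apply_update(a.adjust(0, weight=0.7))
assert b.neurons[0].weight == 0.7 and b.version == 1
```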

#######################################################################

Arm and its licensee Qualcomm, however, have made AI an enormous priority

As for the CPU, GPU, NPU, here's one Qualcomm baked earlier:


WO2017136104A1 SPIKING MULTI-LAYER PERCEPTRON





1. A method of training a neural network with back propagation, comprising: generating error events representing a gradient of a cost function for the neural network based on a forward pass through the neural network resulting from input events, weights of the neural network and events from a target signal; and updating the weights of the neural network based on the error events.

This application appears to have fizzled out. Appears they may have pre-published themselves.

Oh dear, Qualcomm are still messin' about in boats with analog neurons:
WO2022076067A1 COMPUTE-IN-MEMORY (CIM) CELL CIRCUITS EMPLOYING CAPACITIVE STORAGE CIRCUITS FOR REDUCED AREA AND CIM BIT CELL ARRAY CIRCUITS




A CIM bit cell circuit employing a capacitive storage circuit to store a binary weight data as a voltage occupies half or less of the area of a 6T SRAM CIM bit cell circuit, reducing the increase in area incurred in the addition of a CIM bit cell array circuit to an IC. The CIM bit cell circuit includes a capacitive storage circuit that stores binary weight data in a capacitor and generates a product voltage indicating a binary product resulting from a logical AND-based operation of the stored binary weight data and an activation signal. The capacitive storage circuit may include a capacitor and a read access switch or a transistor. The CIM bit cell circuit includes a write access switch to couple a write bit voltage to the capacitive storage circuit. In a CIM bit cell array circuit, the product voltages are summed in a MAC operation.

So, while Qualcomm's SNN tech looks a bit clunky IMO, ARM has a new friend ... Threesome with MS?
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 34 users

Diogenese

Top 20
The partnership with ARM on a scale of 1 to 10 in regards to revenue is what ?
"Put another way, “in the future, the speed of your computer will be measured by the power of its neural processor,” said Microsoft technical fellow Steven Bathiche, at Build."
 
  • Like
  • Fire
  • Love
Reactions: 26 users

TECH

Regular
These points I'm about to make may have already been raised, as I don't read as many posts as I used to a number
of years ago. Please don't take offence, as there are many great posters on this site and all have merit.

Sean, Antonio and Jerome are all driving our company towards a higher plane. This professional combination
will open up many doors over the next 12 months, build business relationships, renew old connections, and at the same
time keep Peter and Anil on their toes; good job they both have extremely disciplined work ethics!

Sean raised a number of key points that all shareholders need to absorb, they are:

Keeping in close contact with clients so that we are ready to pounce when we are aware that they are about
to enter a new "design cycle"; that's when the proposals are ready to be presented.
Lou always mentioned this fact, that is, intersecting a company at the right time in their "design cycle", which could
mean a number of years, not months.

The initial license fee is a large investment for any company, but the cream on top is really the royalty
stream that follows; once a product is in the marketplace, well, we sit back and let the good times roll!
The basic example: a license fee of $1.00, with royalties that could be $3.00, $4.00, maybe even $5.00.

BUT....it can take a company 1, 2 or even 3 years to bring their product/s to market.

So the point of all this information is that REVENUE won't magically appear in 6-12 months; it's all linked to
the size of the companies concerned, the complexity of the design, and THEIR design cycles. Yes, revenue is
on the immediate horizon, but try to keep things realistic as each quarter unfolds.
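
To see how the design-cycle lag shapes the numbers, here is a toy cumulative-revenue model built around the $1.00 license example above. The $1.50-per-year royalty, the 2-year design cycle and the function name are illustrative assumptions, not company figures:

```python
def cumulative_revenue(license_fee: float, annual_royalty: float,
                       design_cycle_years: int, horizon_years: int) -> list[float]:
    """Toy model: the license fee lands in year 1, but royalties only
    start once the customer's product ships, design_cycle_years later."""
    total, totals_by_year = 0.0, []
    for year in range(1, horizon_years + 1):
        if year == 1:
            total += license_fee          # one-off license payment
        if year > design_cycle_years:
            total += annual_royalty       # royalties flow after product launch
        totals_by_year.append(total)
    return totals_by_year

# $1.00 license, $1.50/year royalties, 2-year design cycle, 5-year view:
print(cumulative_revenue(1.0, 1.5, 2, 5))   # [1.0, 1.0, 2.5, 4.0, 5.5]
```

On those made-up inputs, royalties overtake the license fee by year 4, which is the shape of the "cream on top" argument: slow to start, then compounding.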

Despite what Antonio said at the AGM with regard to a potential listing on the Nasdaq, the Board did hold meetings
prior to his inclusion on said Board....the day will come, and I'm sticking with 2025; purely my own view.

Good evening....Tech 😴
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 49 users

Hrdwk

Regular
Great article, and hopefully Microsoft chooses BrainChip for their NPU.
“And because we expect to see NPUs being built into most, if not all future computing devices, we’re going to make it easy for developers to leverage these new capabilities, by baking support for NPUs into the end-to-end Windows platform.”

The only thing it didn’t say was ubiquitous!

And yes potentially the right time and the right place.
 
  • Like
  • Love
  • Fire
Reactions: 17 users
Hi JK,

Akida's forte is identifying (classifying) input signals from sensors. In simple applications this may be sufficient to trigger a direct response or action.

However, in some cases the Akida output is used as an input to another CPU/GPU (von Neumann processor) to form part of that computer's program variables.

In the first case, the entire process gets the full power saving/speed improvement from Akida.

In the second case, the benefit is the reduction in power/time which Akida brings to the classification task, while the CPU performs the remaining processes under the control of its software program. This is important because the classification task carried out on a software-controlled CPU uses very large amounts of power and takes a relatively long time.

Classification of an image on a CPU uses CNN (convolutional neural network) processes which involve multiplying multi-bit (8, 16, 32, 64) bytes representing each pixel on the sensor. Multiplication involves a number of computer operations determined by the square of the number of bits in the byte, so an 8-bit byte multiplication would involve 64 computer operations. For 32-bit bytes, 1024 operations are required to process the output from a single pixel, whether its value has changed or not.

On the other hand, Akida ignores pixels whose output value does not change, and only performs a computer operation for the pixels whose output changes (an event). This is "sparsity". In addition, in 1-bit mode there is only a single computer operation for each pixel event.

For example, the sparsity may reduce the number of events by, say, 40%.

Even in 4-bit mode, Akida only needs 16 computer operations, and that only for pixels whose output has changed.

Hence there are large savings in power and time in using Akida to do the classification task compared to using, e.g., a 32-bit ARM Cortex microprocessor.

While the rest of the program may be carried out on the microprocessor, this uses comparatively little power compared to what the microprocessor would have used performing the CNN task. So there are still large power savings to be made by using Akida in "accelerator" mode as an input device for a von Neumann processor.

The other point is that Akida performs its classification independent of any processor with which it is associated. For example, Akida 1000 includes an ARM Cortex processor, but this is only used to configure the arrangement of the NPU nodes to optimize performance for the particular task; the ARM Cortex plays no part in the actual classification task. The ARM Cortex does not form part of the Akida IP. Akida is "processor-agnostic" and can operate with any CPU.
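
To put rough numbers on the explanation above, using its own rule of thumb (roughly bits-squared operations per multiply, and only changed pixels processed in the event-based case), here is a toy comparison for a hypothetical 640x480 sensor. This models the post's arithmetic only, not Akida's actual microarchitecture:

```python
def cnn_ops(pixels: int, bits: int) -> int:
    """CPU-style CNN cost under the rule of thumb above:
    every pixel is multiplied, at ~bits**2 operations per multiply."""
    return pixels * bits ** 2

def event_ops(pixels: int, bits: int, event_rate: float) -> int:
    """Event-based cost: only pixels whose value changed are processed."""
    return int(pixels * event_rate) * bits ** 2

PIXELS = 640 * 480                        # 307,200 pixels
print(cnn_ops(PIXELS, 32))                # 32-bit CPU path: 314,572,800 ops
print(event_ops(PIXELS, 4, 0.6))          # 4-bit, 40% sparsity: 2,949,120 ops
print(event_ops(PIXELS, 1, 0.6))          # 1-bit mode: 184,320 ops
```

Even this crude model puts two orders of magnitude between the 32-bit dense path and the 4-bit event-based path, which is where the claimed power and latency savings come from.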
Thanks so much for this explanation @Diogenese, it is, I think, the clearest explanation I have read of how Akida operates and how it achieves power savings in different circumstances. I mean, I am non-technical and I could understand it haha. Thank you sir 🙏👍
 
  • Like
  • Love
Reactions: 15 users
ARM: over 29 billion semiconductors produced in the last 12 months.

Mercedes-Benz: around 3 million passenger vehicles. If each one had 10 x AKD1000, that is 30 million semiconductors.

30 million opportunities -v- 29 billion opportunities.

My opinion only DYOR
FF

AKIDA BALLISTA
Institutional investors understand the significance.

Retail like to stand at the Mercedes Showroom and drool over the amazing EQXX.

I personally drool over both.

If you have properly researched ARM you will too.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 36 users

Xhosa12345

Regular
Institutional investors understand the significance.

Retail like to stand at the Mercedes Showroom and drool over the amazing EQXX.

I personally drool over both.

If you have properly researched ARM you will too.

My opinion only DYOR
FF

AKIDA BALLISTA
2000th message FF, I'm looking forward to your next 2000 also... we love your work!!
 
  • Like
  • Love
Reactions: 31 users

Evermont

Stealth Mode

MegaChips Forms Strategic Partnership with BrainChip

Thu, May 26, 2022, 11:00 PM

Partnership Provides Ultra-Low Power and High-Performance Edge AI Solutions for ASIC Customers

SAN JOSE, Calif. and LAGUNA HILLS, Calif., May 26, 2022 /PRNewswire/ -- MegaChips, the leading custom ASIC company in Japan, and BrainChip Holdings Ltd, the world's first commercial producer of ultra-low power neuromorphic AI IP, formally announced a strategic partnership signed late last year. MegaChips, which recently announced its emergence into the U.S. custom ASIC market, will now leverage BrainChip Akida™ IP to provide U.S. customers with innovative solutions across a wide variety of applications, such as consumer tech, telecom/network, industrial and automotive.

"By MegaChips partnering with BrainChip we can deliver customers highly sought-after edge AI solutions and expedite time to market," said Douglas Fairbairn, Director of Business Development for MegaChips LSI USA. "Together we plan to develop innovative LSIs as the silicon partner of BrainChip, helping take customers from ideation to silicon. This collaboration helps us achieve the goal of becoming the leading U.S. ASIC supplier for any customer looking to implement AI solutions at volume."

BrainChip deploys AI at the edge in a way that existing technologies cannot. The company's tech is both high-performance and ultra-low power, enabling a range of capabilities including on-chip, in-device one-shot learning. BrainChip's IP can be used in a wide range of applications from consumer electronics to industrial IoT to electric vehicles, and smart sensors that can detect and act on visual features, odors, taste, touch, and sound.

"The MegaChips and BrainChip partnership furthers both companies' mission to offer unprecedented products," said Rob Telson, BrainChip Vice President of Worldwide Sales and Marketing. "By providing Akida's on-chip learning and ultra-low power Edge AI capabilities as an integrated technology in MegaChips' ASIC solutions, we bring practical, cutting-edge capabilities to the edge that ensure power efficiency without compromising accuracy. This is an exciting collaboration from both a business and industry-propelling perspective."

About MegaChips LSI USA Corporation

MegaChips LSI USA in Campbell, California is a wholly owned subsidiary of MegaChips Corporation headquartered in Osaka, Japan. It is one of the world's leading custom ASIC providers for consumer, telecom/network, industrial and automotive applications. MegaChips has over 30 years in business and has successfully completed more than 1,500 ASIC projects. MegaChips operates as an extension of our customers' design teams, to provide a whole solution from concept-to-silicon and has recently expanded to address the growing global demand for embedded AI solutions. With a strong emphasis on cost effectiveness, delivery schedule, and product quality, MegaChips is ISO9001 certified and ensures the highest levels of intellectual property security.

Follow MegaChips LSI USA on their website, LinkedIn, Twitter, and Facebook for more information.

About BrainChip Holdings Ltd (ASX: BRN) (OTCQX: BRCHF) (ADR: BCHPY)

BrainChip is the worldwide leader in edge AI on-chip processing and learning. The company's first-to-market neuromorphic processor, Akida™, mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Keeping machine learning local to the chip, without the need to access the cloud, dramatically reduces latency while improving privacy and data security. In enabling effective edge compute to be universally deployable across real-world applications, such as connected cars, consumer electronics and industrial IoT, BrainChip is proving that on-chip AI close to the sensor is the future for customers' products as well as the planet. Explore the benefits of Essential AI at www.brainchip.com.

Follow BrainChip on Twitter: https://www.twitter.com/BrainChip_inc

Follow BrainChip on LinkedIn: https://www.linkedin.com/company/7792006

All registered or unregistered trademarks are the sole property of their respective owners.

MegaChips Media Contact
Lauren Chouinard
FortyThree, Inc.
lauren@43pr.com
831.621.5661
 
  • Like
  • Fire
  • Love
Reactions: 82 users