BRN Discussion Ongoing

Cardpro

Regular
Missed the point?

The point was that Brainchips Management/Directors were earning more than their counterparts in CBA and BHP.

That point has been proven false by @SERA2g above.

You're now trying to spin it to sound like something else 😂 I've got 2 words for ya mate, but I'll probably get a warning from Dreadd for it. So I'll leave it up to your imagination to figure it out.
The point was that the management/directors get paid a shitload, and to prove that point he compared it with executives from large corps, which was incorrect. But I think the point is still valid IMO.
 
  • Like
Reactions: 5 users

Dr E Brown

Regular
I wouldn't say MVP.

Whilst Roger's speech was spirited, it was not accurate, which is a bit of a shame given it was long-winded, soaked up valuable Q&A time and possibly had an impact on the way some shareholders voted with respect to the remuneration report.

Out of curiosity, given the statements Roger made, I have reviewed CBA's remuneration report and compared it to BRN's.

Below is the outcome of that comparison.

In summary, CBA KMPs are paid SUBSTANTIALLY more in both cash and equity than Brainchip's KMPs.

It's not even close.

On average across the board, CBA KMPs get 4x the cash, 2x the equity and 2.7x the total remuneration.
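For anyone wanting to sanity-check figures like these themselves, here is a minimal Python sketch of the arithmetic. The dollar amounts below are placeholders I made up purely to show how the multiples are derived; they are NOT the actual numbers from either remuneration report:

```python
def pay_multiple(cba_amount, brn_amount):
    """How many times CBA's figure exceeds BRN's for one pay component."""
    return cba_amount / brn_amount

# Placeholder figures only (not from either annual report),
# to illustrate how the cash / equity / total multiples fall out.
cba = {"cash": 4_000_000, "equity": 2_000_000}
brn = {"cash": 1_000_000, "equity": 1_000_000}
cba["total"] = cba["cash"] + cba["equity"]
brn["total"] = brn["cash"] + brn["equity"]

for component in ("cash", "equity", "total"):
    print(f"{component}: {pay_multiple(cba[component], brn[component]):.1f}x")
```

With these placeholder inputs it prints 4.0x cash, 2.0x equity and 3.0x total; plugging in the per-KMP averages from the two reports is how a comparison like the one above is built.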

Roger also said the BRN directors were paid more than BHP's, but I haven't bothered to check that given how wrong he was with CBA.

Over to you for some clarification @Realinfo

Perhaps I'm missing something?
View attachment 37458



View attachment 37453

https://www.commbank.com.au/content...out-us/2022-08/2022-annual-report_spreads.pdf

View attachment 37454
Thank you!
 
  • Like
Reactions: 5 users

buena suerte :-)

BOB Bank of Brainchip
[Quoting Dr E Brown's CBA vs BRN remuneration comparison above]
Wow ... very nice work mate ... 👍👍👍
 
  • Like
Reactions: 6 users
Is Qualcomm really doing this without us????? What's their magic sauce????




Qualcomm exec: AI adoption becoming 'seminal moment' in tech

Yahoo Finance
Wed, 24 May 2023 at 7:14 am GMT+10


Qualcomm SVP of Product Management Ziad Asghar joins Yahoo Finance Live to discuss Qualcomm's partnership with Microsoft to develop a new AI chip, competition in the AI technology sector, privacy, and data security.

Video transcript​

- Well, Qualcomm is building on its partnership with Microsoft today, unveiling new features to scale on-device AI. That announcement at Microsoft's Build Developers Conference means further integration of Qualcomm's Snapdragon in Windows devices. Qualcomm saying in a statement, quote, "On-device generative AI solutions allow developers and cloud service providers to make generative AI more affordable, reliable, and private by moving queries and inferences to edge devices, including PCs and phones."
Joining us to discuss is Ziad Asghar, Qualcomm's senior vice president of product management. We've also got our very own Ali Garfinkel joining in on the conversation. Ziad, good to talk to you. The announcement today really does build on an existing partnership you've already had with Microsoft. Talk to me about the scale of the opportunity you see in this partnership centered around AI with Microsoft.
ZIAD ASGHAR: I think this generative AI opportunity is just amazing for Qualcomm. You have to understand that this is how we're able to bring all of these experiences onto the devices that are in people's hands. Right now, many of these generative experiences are sitting in the cloud, so people, at large, are not able to access these experiences.
But now whether you think about productivity applications on PC-like devices or other entertainment-like applications on smartphones, with this change that's coming right now, with the technology that we have been building, we can actually bring all these experiences to the edge devices. To your PCs, to XR products, to smartphones, to IoT products, and that's really the amazing opportunity for Qualcomm. And we believe we have some unique differentiated technology that sets us apart from everybody else in this space.

ALEXANDRA GARFINKLE: Ziad, Ali here. Investors aren't necessarily thinking of Qualcomm as an AI stock, but should they? How big of an opportunity is Qualcomm looking at in AI?
ZIAD ASGHAR: Oh, absolutely, right. So think about it. If you think from the perspective of privacy, if you think from the perspective of the contextual data that's available on a device, to be able to run these generative AI use cases, large language models on the device, it completely changes the story. It allows people to be able to run these devices while keeping those queries private.
To be able to generate that experience even when you are not connected. To be able to use the contextual information in a way that, basically, you can give a far better experience to the user than is possible today. So let's take an example, right. So if you are talking about, you're creating a virtual world around you. If you already know the kind of textures, the colors, the themes that you like, we can actually optimize that experience for you.
So the experience is better, the privacy is basically preserved, you have the experience, whether you're connected or not, and essentially, the experience is far better. So we really think this brings the generative AI opportunity to the edge and really move the center of gravity towards edge devices. So it's a massive opportunity for us at Qualcomm.
- Yeah, I've heard you say over and over about the experience here. It feels like we're just really at the tip of experiencing generative AI. What does that future look like? When you talk about user experience on your smartphone, what are we talking about?
ZIAD ASGHAR: Let's take an example, right. So on the smartphone right now, you take a picture, already, we are using AI to be able to enhance the quality of that image. But what we can do now is, you can literally talk into your smartphone and you can say, well, put me in front of the Eiffel Tower at sunset with the beautiful clouds in the distance. And we can go and run a text-to-image model that actually generates that background and puts you right in front of it, right. That sort of an experience. That sort of an interactive experience on devices is really truly only possible if you can do this on the device.
And what makes this possible is our special hardware. Our pedigree is about doing more processing at the lowest power possible, and that's precisely what we're able to do with AI too. And we can essentially do that on a device that's not plugged, whereas some of our other competitors run it on graphics engines that takes hundreds of Watts of power, we literally do the experiences with milliwatts of power.
And that's what takes it to the masses. That's what takes it to every user that uses a smartphone at this point in time.
ALEXANDRA GARFINKLE: Ziad, we've heard a lot about AI risks in the context, for instance, of ChatGPT, which kicked all this off. But I'd love to hear a little bit about how AI risks look from your vantage point.
ZIAD ASGHAR: Yeah, so I think one of the main concerns that people have with some of the generative AI examples is privacy. You have seen some of the news come out recently about certain information getting out and staying and residing on these cloud servers.
Well, with AI-- generative AI running on the device, that query, that question that you ask from a text or text-like example, stays precisely on the device. It never leaves it. So actually, we are able to address some of the concerns that people have. Of course, additionally, the models have continued to get better. That will actually help with doing a lot more, in terms of improving the experience and taking away some of the concerns that people have with these generative AI use cases.
And of course, what is also going to happen is that people will actually get to use these experiences. I think this is a seminal moment in our industry because it really changes the way we interact with our products, with our devices, and this is not one of those things that kind of follow the hype curve. This is actually an example where we have customers coming up to us today and saying, I want to run these use cases on the device. So this is very true, and this is where we believe that we'll be able to actually address a lot of the concerns that people have.
Another big, very quick, concern that people have is the cost, right. If you can imagine that these models are getting larger, the applications are growing, at the same time, a lot more people are using them, you will get to a point where you will not have enough resources in the cloud. So you actually need the edge to be able to do generative AI. And that's the unique opportunity for us.
ALEXANDRA GARFINKLE: So I'd love to take this a step further then and say, is it possible that the technology you're talking about developing at Qualcomm will make ChatGPT and apps like it more accessible and less costly?
ZIAD ASGHAR: Absolutely. Right, so what we can do today, what we showed at some of our recent conferences, was a 1-billion-parameter model, what we call Stable Diffusion, running entirely on the device. We showed it today at Build, where that same model is now running on a PC.
Now what we have a roadmap for is to be able to do models of up to 10 billion parameters on the device. And you can envision some of the models that are out there today, text-to-text models. And we see, on a daily basis, new modalities coming into it.
Text to music, text to audio, image to video. All of those-- many of those, actually, fit very well in this range that I'm talking about, which we believe can run comfortably on the device in a sustained fashion. And that's why it comes to everybody. It comes to anybody and everybody that has a smartphone, or a PC, or an XR device not just connected to the cloud at this point in time.
- Qualcomm's Ziad Asghar, good to talk to you today. Fascinating to think about all the applications for AI. Our thanks to Ali Garfinkel as well for joining in on the conversation.

Is that SynSense through Prophesee or something?

Sounds like the big cos will do anything to try to hijack first-mover status, regardless of best tech..

If not, looking forward to BRN's first $100mill in revenue through a civil action plus royalty payments for IP theft.. 🫣🫣🤔🤔
 
  • Like
  • Fire
  • Thinking
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps 💪!
SoftBank's shares rose by 8.2% after Arm's news about its new technology for mobile devices. The jump in the stock price comes off the back of yesterday's announcement of the partnership with MediaTek for its next-generation smartphone. Quite understandable really, given MediaTek enables nearly 2 billion devices a year.

Now if we could only confirm our involvement in this new technology, we'd have all the shorters clambering for their incontinence pads. 🩲

Some more from Arm on their new smartphone tech, this time from their website.


New Arm Total Compute Solutions enable a mobile future built on Arm​


May 29, 2023


By Chris Bergey, senior vice president and general manager, Client Line of Business, Arm

News Highlights:
  • New 5th Generation GPU architecture provides a foundation for the next several generations of visual computing on Arm GPUs, including new Arm Immortalis-G720
  • The most powerful Armv9 Cortex compute cluster delivers double-digit performance gains for the third consecutive year
  • TCS23 is the platform for premium mobile computing and will power immersive games, real-time 3D experiences and next-gen AI applications

Mobile devices touch every aspect of our digital lives. In the palm of your hand is the ability to both create and consume increasingly immersive, AI-accelerated experiences that continue to drive the need for more compute. Arm is at the heart of many of these, bringing unlimited delight, productivity and success to more people than ever. Every year we build foundational platforms designed to meet these increasing compute demands, with a relentless focus on high performance and efficiency. Working closely with our broader ecosystem, we're delivering the performance, efficiency and intelligence needed on every generation of consumer device to expand our digital lifestyles.

Today we are announcing Arm Total Compute Solutions 2023 (TCS23), which will be the platform for mobile computing, offering our best ever premium solution for smartphones. TCS23 delivers a complete package of the latest IP designed and optimized for specific workloads to work seamlessly together as a complete system. This includes a new world-class Arm Immortalis GPU based on our brand-new 5th Generation GPU architecture for ultimate visual experiences, a new cluster of Armv9 CPUs that continue our performance leadership for next-gen artificial intelligence (AI), and new enhancements to deliver more accessible software for the millions of Arm developers.

For more details on TCS23, visit our blog.

The best visual experiences run on Arm

Last year, we promised a supercharged visual experience with the all-new flagship Immortalis-G715 GPU. We are delivering on this promise in partnership with MediaTek through the TCS22-based Dimensity 9200, which is now powering benchmark-topping flagship smartphones from OPPO and vivo.

This year, we are excited to announce that our latest GPUs are built on the brand-new 5th Gen GPU architecture. Designed as the most efficient GPU architecture that Arm has ever created, the 5th Gen architecture redefines parts of the graphics pipeline to reduce memory bandwidth, enabling the next generation of high geometry games and real-time 3D applications, while also bringing smoother gameplay and complex PC and console-like experiences to mobile. Deferred Vertex Shading (DVS) is a new graphics feature introduced in the 5th Gen GPU architecture that redefines the dataflow, which enables partners to scale for larger core counts and higher performance points. Already, we're seeing the benefits of DVS across many popular games from Genshin Impact to Fortnite.

The new Immortalis-G720 is Arm's most performant and efficient GPU ever, as we continue to push the boundaries of visual computing. It delivers 15 percent performance and efficiency improvements over the previous generation, as well as a 40 percent uplift in system-level efficiency, leading to higher quality graphics for more immersive visual experiences.

Alongside the Immortalis-G720, we add to our world-class portfolio of GPUs with the new Arm Mali-G720 and Mali-G620. Through these new Mali GPUs, we are committed to bringing premium graphics features to a wider market of consumer devices quicker.

For more details on Immortalis-G720 and the new Mali GPUs, visit our blog.

CPU performance leadership for intelligent AI

As part of TCS23, we are demonstrating our ongoing commitment to CPU performance leadership. We are announcing a new Armv9 Cortex CPU compute cluster, which for the third consecutive year delivers double-digit performance gains alongside significant efficiency improvements.

A vital part of this high-performance cluster is the new Arm Cortex-X4, our fourth-generation Cortex-X core, which pushes the limit of performance on flagship smartphones. It is the fastest CPU that we have ever built, bringing 15 percent more performance compared to the Cortex-X3. Meanwhile, the new power efficient microarchitecture consumes 40 percent less power than Cortex-X3 on the same process. These performance and efficiency gains bring the on-device experiences, like UI responsiveness and application launch time, to the next level and enable next-gen AI and ML-based applications.

This CPU performance leadership extends to our new big and LITTLE cores ā€“ the Arm Cortex-A720 and Cortex-A520. Cortex-A720 is industry leading CPU IP that boosts sustained performance as the workhorse of the cluster. Meanwhile, Cortex-A520 is Armā€™s most performant high-efficiency CPU core ever. Use cases like AAA gaming, all-day productivity and background tasks all benefit from the 20 percent power efficiency improvements of these new CPU designs over previous generations.

Key to delivering the highest performance and most efficient designs is tighter coupling on process nodes and compute capabilities. In this new generation of CPU designs, we are taking our long-standing partnership with TSMC a step further through taping out the Cortex-X4 on the TSMC N3E process – an industry first. This ensures that our ecosystem is ready to maximize the PPA benefits of our processor technologies once they are taped out.

Rounding off the 2023 CPU cluster is our new DynamIQ Shared Unit, DSU-120, which is designed for demanding multi-thread use cases and enables a broad range of devices from wearables to smartphones and laptops. Our new CPU cluster provides performance when you want it and efficiency when you need it.

For more details on the new Armv9 Cortex CPU Compute Cluster and Cortex-X4 CPU, visit our blog.

For more details on the new Cortex-A720 CPU, Cortex-A520 CPU and DSU-120, visit our blog.

Delivering the world's software and security

Through TCS23, we are ensuring that the millions of mobile developers developing on Arm, for Arm, have the capabilities and tools to write easier, simpler, faster and more secure software.

During the past year, new intelligent experiences, like generative AI, have amazed the world, with AI processing capabilities doubling every two years on smartphones. Arm is leading the way in supporting developers to take advantage of AI and machine learning (ML) workloads by enabling our hardware with increased ML capabilities via our open-source software libraries. Arm NN and Arm Compute Library are being used by Google apps on Android with over 100 million active users already, enabling developers to optimize the execution of their ML workloads on Armv9 Cortex-A CPUs and Arm GPUs.

All our new CPUs deliver 64-bit computing and Armv9 security innovation to protect against more advanced digital threats. We continue to successfully deploy Arm Memory Tagging Extension (MTE), which eliminates memory safety bugs that make up 70 percent of all software vulnerabilities, across the mobile ecosystem through our Armv9 generation of CPUs.

Redefining the future of mobile computing

The products we are launching today will be powering the next generation of flagship smartphones, but we are also looking further out as well. We've never been more committed to our CPU and GPU roadmap and over the next few years we'll invest heavily in key IP, such as the Krake GPU and the Blackhawk CPU, to deliver the compute and graphics performance our partners demand.

Arm is delivering mobile innovation from silicon to software, to support the increasingly immersive digital experiences being brought to life by our vast global ecosystem, and it's clear that the mobile future will be built on Arm.

Supporting Partner Quotes

Asus

"Mobile Gaming has increasingly become the world's preferred form of gaming, and the smartphone is now the most popular way to game, with users demanding ever more performance and features to improve their gaming experience.

Since the launch of the first ROG phone, ASUS' innovation has been continuous as we optimize the gaming experience. In addition, we also focus on the gaming ecosystem by collaborating with world leading global games studios, leading Silicon providers and Arm. In search of incredible, we are creating the world's best gaming phones that will inspire the next generation of mobile gamers." Bryan Chang, General Manager, ASUS Phone Business Unit

Google (Android)

"Together with the developer community, Android is committed to bringing the power of computing to as many people as possible. We're excited to see how Arm's new hardware advancements are adopted by vendors, with security and performance improvements that can benefit the Android ecosystem as a whole." Dave Burke, VP of Engineering, Android

HONOR

"HONOR has a proven track record of bringing the most immersive and powerful camera experiences to its customers, delivering a premium experience, whilst showing leadership with its philosophy of all-round protection, its discrete security chip and innovative and groundbreaking use of Arm's MTE technology. With 70% of vulnerabilities being due to memory bugs, close collaboration with Arm on MTE will define the direction for the industry, help ensure customer trust and deliver amazing mobile experiences for this generation and the next." Samul Deng, President of Research & Development Mgmt Dept, HONOR

Intel Foundry Services

"The combination of leading-edge Intel 18A technology with Arm's newest and most powerful CPU core, the Cortex-X4, will create opportunities for companies looking to design the next generation of innovative mobile SoCs. Arm is a critical partner as we work to build a comprehensive foundry ecosystem for our customers around the world." Stuart Pann, Senior Vice President and General Manager, Intel Foundry Services (IFS)

MediaTek

"Arm's innovative 2023 IP, the Cortex-X4 and Cortex-A720, and Immortalis-G720 have provided an excellent foundation for our next-generation Dimensity flagship 5G smartphone chip, which will deliver impressive performance and efficiency through groundbreaking chip architecture and technical innovations. Using Arm's industry-leading technologies, MediaTek Dimensity will enable users to do more at once than ever before and unlock incredible new experiences, longer gaming sessions, and excellent battery life." Dr. JC Hsu, Corporate Senior Vice President and General Manager of Wireless Communications Business Unit, MediaTek

OPPO

"OPPO has been working with industry leaders to deliver breakthrough smart mobile devices and new experiences into our customer's hands and one of those industry leaders is Arm. Using their latest and greatest CPU and GPU IP guarantees the smooth multitasking of apps and flow needed to successfully empower new multi-screen use cases and with support from the wider Arm ecosystem we will continue to create more fantastic mobile smartphones on Arm's latest IP." Henry Duan, Vice President, President of Smartphones Product, OPPO

Samsung Electronics

"Arm has been an invaluable partner in delivering new levels of mobile performance and efficiency, enabled through their latest architectures like Armv9.2. We look forward to continuing this long-term strategic collaboration with Arm to further advance CPU technologies." Seogjun Lee, Executive Vice President of Application Processor (AP) Development, Samsung Electronics

Tencent Games

"Tencent Games and Arm have a solid long-term partnership. Every time Arm introduces new products to the mobile gaming ecosystem, it means players around the world will enjoy the next generation of computing power, and is also a precious opportunity for us to take the mobile gaming experience to a whole new level.

Recently, we have increased our cooperation with Arm to push the frontiers of real-time lighting, from the ultimate ray tracing acceleration, based on Arm Immortalis GPUs, to highly optimal mobile renderers and hybrid global illumination solutions such as SmartGI.

We look forward to the continuing growth of our partnership around the next-gen Arm Total Compute Solutions, and will never stop delivering unprecedented gaming experiences to players worldwide." Congbing Li, Vice General Manager of CROS, Tencent Games

TSMC

"Our latest collaboration with Arm is an excellent showcase of how we can enable our customers to reach new levels of performance and efficiency with TSMC's most advanced process technology and the powerful Armv9 architecture. We will continue to work closely with our Open Innovation Platform® (OIP) ecosystem partners like Arm to push the envelope on CPU innovations to accelerate AI, 5G, and HPC technology advances." Dan Kochpatcharin, Head of the Design Infrastructure Management Division, TSMC

Unity

"We at Unity recognize that producing world-class 3D graphical content, with rich AI, in real-time, must be balanced against power consumption. We are excited to see Arm continuing to innovate in this space, and are looking forward to seeing how Arm's flagship Immortalis GPU will further that effort.

Arm and Unity are committed to working together to ensure developers can unlock their potential on Arm devices. Our work on Adaptive Performance demonstrates the impact today, and for the future, by enabling developers to optimize their user experience over a broader range of devices." Nick Rapp, Senior Director, Platform, Unity

vivo

"vivo has been focusing on value co-creation and innovation collaboration with industry partners and so we are delighted to see the launch of the new Arm Total Compute Solutions. vivo will continue to work with Arm to drive the technology breakthrough in intelligent products, enabling broader global users to enjoy the digital world with outstanding performance." Yujian Shi, Senior Vice President & CTO, vivo

Xiaomi

"Xiaomi provides the best-in-class experience to our users powered by leading-industry platform capabilities. Through close collaboration with Arm and chipset partners, smartphones across all price segments are equipped with the ideal CPUs and GPUs to meet their ultimate needs. We are very pleased to see that the new generation of TCS23 can offer flexible levels of performance and imaging quality, helping us better serve the majority of users and bring them their ultimate experience." Xuezhong Zeng, Senior Vice President, Xiaomi Group







About Arm​

Arm technology is defining the future of computing. Our energy-efficient processor designs and software platforms have enabled advanced computing in more than 250 billion chips and our technologies securely power products from the sensor to the smartphone and the supercomputer. Together with 1,000+ technology partners, we are enabling artificial intelligence to work everywhere, and in cybersecurity, we are delivering the foundation for trust in the digital world – from chip to cloud. The future is being built on Arm.

All information is provided "as is" and without warranty or representation. This document may be shared freely, attributed and unmodified. Arm is a registered trademark of Arm Limited (or its subsidiaries). All brands or product names are the property of their respective holders. © 1995-2023 Arm Group.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 34 users
They might be doing what Sean mentioned in the Noah interview.. Just "MacGyver'ing" the crap out of current tech to squeeze out products that still require cloud training and compute.
 
  • Like
Reactions: 10 users

Deleted member 118

Guest
[Quoting the Qualcomm / Yahoo Finance interview above]
Another big, very quick, concern that people have is the cost, right. If you can imagine that these models are getting larger, the applications are growing, at the same time, a lot more people are using them, you will get to a point where you will not have enough resources in the cloud. So you actually need the edge to be able to do generative AI. And that's the unique opportunity for us.
ALEXANDRA GARFINKLE: So I'd love to take this a step further then and say, is it possible that the technology you're talking about developing at Qualcomm will make ChatGPT and apps like it more accessible and less costly?
ZIAD ASGHAR: Absolutely. Right, so what we can do today, what we showed at some of our recent conferences, was a $1 billion parameter model, what we call stable diffusion running entirely on the device. We showed it today at BuildWare, that same model is now running on a PC.
Now what we have a roadmap for is to be able to do up to $10 billion parameters models on the device. And you can envision some of the models that are out there today, text to text-like models. And we see on a daily basis, new modalities coming into it.
Text to music, text to audio, image to video. All of those-- many of those, actually, fit very well in this range that I'm talking about, which we believe can run comfortably on the device in a sustained fashion. And that's why it comes to everybody. It comes to anybody and everybody that has a smartphone, or a PC, or an XR device not just connected to the cloud at this point in time.
- Qualcomm's Ziad Asghar, good to talk to you today. Fascinating to think about all the applications for AI. Our thanks to Ali Garfinkel as well for joining in on the conversation.

 
  • Haha
  • Like
Reactions: 3 users

Diogenese

Top 20
Is Qualcomm really doing this without us????? What's their magic sauce????




Qualcomm exec: AI adoption becoming 'seminal moment' in tech

Yahoo Finance
Wed, 24 May 2023 at 7:14 am GMT+10


Qualcomm SVP of Product Management Ziad Asghar joins Yahoo Finance Live to discuss Qualcomm's partnership with Microsoft to develop a new AI chip, competition in the AI technology sector, privacy, and data security.

Video transcript

- Well, Qualcomm is building on its partnership with Microsoft today, unveiling new features to scale on-device AI. That announcement at Microsoft's Build Developers Conference means further integration of Qualcomm's Snapdragon in Windows devices. Qualcomm saying in a statement, quote, "On-device generative AI solutions allow developers and cloud service providers to make generative AI more affordable, reliable, and private by moving queries and inferences to edge devices, including PCs and phones."
Joining us to discuss is Ziad Asghar, Qualcomm's senior vice president of product management. We've also got our very own Ali Garfinkel joining in on the conversation. Ziad good to talk to you. The announcement today really does build on an existing partnership you've already had with Microsoft. Talk to me about the scale of the opportunity you see in this partnership centered around AI with Microsoft.
ZIAD ASGHAR: I think this generative AI opportunity is just amazing for Qualcomm. You have to understand that this is how we're able to bring all of these experiences onto the devices that are in people's hands. Right now, many of these generative experiences are sitting in the cloud, so people, at large, are not able to access these experiences.
But now whether you think about productivity applications on PC-like devices or other entertainment-like applications on smartphones, with this change that's coming right now, with the technology that we have been building, we can actually bring all these experiences to the edge devices. To your PCs, to XR products, to smartphones, to IoT products, and that's really the amazing opportunity for Qualcomm. And we believe we have some unique differentiated technology that sets us apart from everybody else in this space.

ALEXANDRA GARFINKLE: Ziad, Ali here. Investors aren't necessarily thinking of Qualcomm as an AI stock, but should they? How big of an opportunity is Qualcomm looking at in AI?
ZIAD ASGHAR: Oh, absolutely, right. So think about it. If you think from the perspective of privacy, if you think from the perspective of the contextual data that's available on a device, to be able to run these generative AI use cases, large language models on the device, it completely changes the story. It allows people to be able to run these devices while keeping those queries private.
To be able to generate that experience even when you are not connected. To be able to use the contextual information in a way that, basically, you can give a far better experience to the user than is possible today. So let's take an example, right. So if you are talking about, you're creating a virtual world around you. If you already know the kind of textures, the colors, the themes that you like, we can actually optimize that experience for you.
So the experience is better, the privacy is basically preserved, you have the experience, whether you're connected or not, and essentially, the experience is far better. So we really think this brings the generative AI opportunity to the edge and really move the center of gravity towards edge devices. So it's a massive opportunity for us at Qualcomm.
- Yeah, I've heard you say over and over about the experience here. It feels like we're just really at the tip of experiencing generative AI. What does that future look like? When you talk about user experience on your smartphone, what are we talking about?
ZIAD ASGHAR: Let's take an example, right. So on the smartphone right now, you take a picture, already, we are using AI to be able to enhance the quality of that image. But what we can do now is, you can literally talk into your smartphone and you can say, well, put me in front of the Eiffel Tower at sunset with the beautiful clouds in the distance. And we can go and run a text-to-image model that actually generates that background and puts you right in front of it, right. That sort of an experience. That sort of an interactive experience on devices is really truly only possible if you can do this on the device.
And what makes this possible is our special hardware. Our pedigree is about doing more processing at the lowest power possible, and that's precisely what we're able to do with AI too. And we can essentially do that on a device that's not plugged, whereas some of our other competitors run it on graphics engines that takes hundreds of Watts of power, we literally do the experiences with milliwatts of power.
And that's what takes it to the masses. That's what takes it to every user that uses a smartphone at this point in time.
ALEXANDRA GARFINKLE: Ziad, we've heard a lot about AI risks in the context, for instance, of ChatGPT, which kicked all this off. But I'd love to hear a little bit about how AI risks look from your vantage point.
ZIAD ASGHAR: Yeah, so I think one of the main concerns that people have with some of the generative AI examples is privacy. You have seen some of the news come out recently about certain information getting out and staying and residing on these cloud servers.
Well, with AI-- generative AI running on the device, that query, that question that you ask from a text or text-like example, stays precisely on the device. It never leaves it. So actually, we are able to address some of the concerns that people have. Of course, additionally, the models have continued to get better. That will actually help with doing a lot more, in terms of improving the experience and taking away some of the concerns that people have with these generative AI use cases.
And of course, what is also going to happen is that people will actually get to use these experiences. I think this is a seminal moment in our industry because it really changes the way we interact with our products, with our devices, and this is not one of those things that kind of follow the hype curve. This is actually an example where we have customers coming up to us today and saying, I want to run these use cases on the device. So this is very true, and this is where we believe that we'll be able to actually address a lot of the concerns that people have.
Another big, very quick, concern that people have is the cost, right. If you can imagine that these models are getting larger, the applications are growing, at the same time, a lot more people are using them, you will get to a point where you will not have enough resources in the cloud. So you actually need the edge to be able to do generative AI. And that's the unique opportunity for us.
ALEXANDRA GARFINKLE: So I'd love to take this a step further then and say, is it possible that the technology you're talking about developing at Qualcomm will make ChatGPT and apps like it more accessible and less costly?
ZIAD ASGHAR: Absolutely. Right, so what we can do today, what we showed at some of our recent conferences, was a 1 billion parameter model, what we call Stable Diffusion, running entirely on the device. We showed it today at Build, where that same model is now running on a PC.
Now what we have a roadmap for is to be able to do up to 10 billion parameter models on the device. And you can envision some of the models that are out there today, text to text-like models. And we see on a daily basis, new modalities coming into it.
Text to music, text to audio, image to video. All of those-- many of those, actually, fit very well in this range that I'm talking about, which we believe can run comfortably on the device in a sustained fashion. And that's why it comes to everybody. It comes to anybody and everybody that has a smartphone, or a PC, or an XR device not just connected to the cloud at this point in time.
- Qualcomm's Ziad Asghar, good to talk to you today. Fascinating to think about all the applications for AI. Our thanks to Ali Garfinkel as well for joining in on the conversation.
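The model sizes Asghar quotes (1 billion parameters today, up to 10 billion on the roadmap) can be sanity-checked with a quick back-of-envelope on weight storage. This is my own rough sketch, not anything from the interview — the quantization levels are assumed, and activations/KV cache are ignored:

```python
# Rough weight-storage footprint for on-device LLMs at various
# quantization levels. Illustrative only; ignores activations and cache.

def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB (10^9 bytes)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params in (1, 10):
    for bits in (16, 8, 4):
        gb = weight_memory_gb(params, bits)
        print(f"{params}B params @ {bits}-bit: ~{gb:.1f} GB")
```

At 4-bit quantization a 10B-parameter model needs roughly 5 GB just for weights, which is about the ceiling of what a current flagship phone can comfortably hold in memory — consistent with where Qualcomm draws the on-device line.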


As recently as September 2021, Qualcomm thought NPUs were software:
WO2023049655A1 TRANSFORMER-BASED ARCHITECTURE FOR TRANSFORM CODING OF MEDIA 2021-09-27

1685439411751.png


Systems and techniques are described herein for processing media data using a neural network system. For instance, a process can include obtaining a latent representation of a frame of encoded image data and generating, by a plurality of decoder transformer layers of a decoder sub-network using the latent representation of the frame of encoded image data as input, a frame of decoded image data. At least one decoder transformer layer of the plurality of decoder transformer layers includes: one or more transformer blocks for generating one or more patches of features and determine self-attention locally within one or more window partitions and shifted window partitions applied over the one or more patches; and a patch un-merging engine for decreasing a respective size of each patch of the one or more patches.

[0075] The SOC 100 may also include additional processing blocks tailored to specific functions, such as a GPU 104, a DSP 106, a connectivity block 110, which may include fifth generation (5G) connectivity, fourth generation long term evolution (4G LTE) connectivity, Wi-Fi connectivity, USB connectivity, Bluetooth connectivity, and the like, and a multimedia processor 112 that may, for example, detect and recognize gestures. In one implementation, the NPU is implemented in the CPU 102, DSP 106, and/or GPU 104. The SOC 100 may also include a sensor processor 114, image signal processors (ISPs) 116, and/or navigation module 120, which may include a global positioning system
.
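For context on what the patent is describing: "self-attention locally within one or more window partitions and shifted window partitions" is the Swin-transformer pattern — attention is computed inside small tiles of the feature map, with the tiling shifted on alternate layers so information flows between windows. A minimal numpy sketch of the idea (identity Q/K/V projections, no learned weights — purely illustrative, not Qualcomm's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_partition(x, ws):
    """Split an (H, W, C) feature map into (num_windows, ws*ws, C) tiles."""
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

def window_unpartition(wins, ws, H, W):
    """Inverse of window_partition: tiles back to an (H, W, C) map."""
    C = wins.shape[-1]
    x = wins.reshape(H // ws, W // ws, ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

def window_attention(x, ws, shift=0):
    """Self-attention computed independently inside each (shifted) window."""
    H, W, C = x.shape
    if shift:
        x = np.roll(x, (-shift, -shift), axis=(0, 1))  # cyclic shift
    wins = window_partition(x, ws)                      # (n, ws*ws, C)
    attn = softmax(wins @ wins.transpose(0, 2, 1) * C ** -0.5)
    out = window_unpartition(attn @ wins, ws, H, W)
    if shift:
        out = np.roll(out, (shift, shift), axis=(0, 1))  # undo the shift
    return out

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 16))
y1 = window_attention(feat, ws=4)            # regular window partitions
y2 = window_attention(feat, ws=4, shift=2)   # shifted window partitions
print(y1.shape, y2.shape)
```

The point of the windowing is cost: full self-attention over an H×W map is O((HW)²), while windowed attention is linear in the number of windows — which is what makes transformer codecs plausible on a phone SoC in the first place.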
 
  • Like
  • Fire
Reactions: 17 users

Earlyrelease

Regular
I wouldn't say MVP.

Whilst Roger's speech was spirited, it was not accurate which is a bit of shame given it was long-winded, soaked up valuable Q&A time and possibly had an impact on the way some shareholders voted with respect to the remuneration report.

Out of curiosity, given the statements Roger made, I have reviewed CBA's remuneration report and have then compared it to BRN's.

Below is the outcome of that comparison.

In summary, CBA KMP's are paid SUBSTANTIALLY more in both cash and equity than Brainchip's KMP's.

It's not even close.

On an average across the board, CBA KMP's get 4x the cash, 2x the equity and 2.7x in total remuneration.

Roger also said the BRN directors were paid more than BHP's, but I haven't bothered to check that given how wrong he was with CBA.

Over to you for some clarification @Realinfo

Perhaps I'm missing something?
View attachment 37458



View attachment 37453

https://www.commbank.com.au/content...out-us/2022-08/2022-annual-report_spreads.pdf

View attachment 37454
Nice work Sera2g, you must be good with numbers! :love:
 
  • Like
  • Haha
Reactions: 7 users

alwaysgreen

Top 20
As recently as September 2021, Qualcomm thought NPUs were software:
WO2023049655A1 TRANSFORMER-BASED ARCHITECTURE FOR TRANSFORM CODING OF MEDIA 2021-09-27

View attachment 37469

Systems and techniques are described herein for processing media data using a neural network system. For instance, a process can include obtaining a latent representation of a frame of encoded image data and generating, by a plurality of decoder transformer layers of a decoder sub-network using the latent representation of the frame of encoded image data as input, a frame of decoded image data. At least one decoder transformer layer of the plurality of decoder transformer layers includes: one or more transformer blocks for generating one or more patches of features and determine self-attention locally within one or more window partitions and shifted window partitions applied over the one or more patches; and a patch un-merging engine for decreasing a respective size of each patch of the one or more patches.

[0075] The SOC 100 may also include additional processing blocks tailored to specific functions, such as a GPU 104, a DSP 106, a connectivity block 110, which may include fifth generation (5G) connectivity, fourth generation long term evolution (4G LTE) connectivity, Wi-Fi connectivity, USB connectivity, Bluetooth connectivity, and the like, and a multimedia processor 112 that may, for example, detect and recognize gestures. In one implementation, the NPU is implemented in the CPU 102, DSP 106, and/or GPU 104. The SOC 100 may also include a sensor processor 114, image signal processors (ISPs) 116, and/or navigation module 120, which may include a global positioning system
.

So does Qualcomm fall into the category that Sean mentioned where customers are using existing solutions that are considered "good enough"?

Jump on board the Brainchip train Qualcomm. We need some passengers!
 
Last edited:
  • Like
Reactions: 8 users

wilzy123

Founding Member
The point was that the management / directors gets paid shitload and to prove that point he compared it with executives from large corps which was incorrect. But I think point is still valid IMO.
NO.

The point is... if the company wants a certain calibre of expertise, they have to be willing to pay the market rate. Either Antonio or Sean (I forget who) made that pretty clear. Anyone that's not a six pack short of a six pack would know that the market is pretty tough right now, adding to the cost and time attached to bringing in AND retaining staff.

Very bad card player. No question.
 
  • Like
  • Haha
  • Fire
Reactions: 15 users
As recently as September 2021, Qualcomm thought NPUs were software:
WO2023049655A1 TRANSFORMER-BASED ARCHITECTURE FOR TRANSFORM CODING OF MEDIA 2021-09-27

View attachment 37469

Systems and techniques are described herein for processing media data using a neural network system. For instance, a process can include obtaining a latent representation of a frame of encoded image data and generating, by a plurality of decoder transformer layers of a decoder sub-network using the latent representation of the frame of encoded image data as input, a frame of decoded image data. At least one decoder transformer layer of the plurality of decoder transformer layers includes: one or more transformer blocks for generating one or more patches of features and determine self-attention locally within one or more window partitions and shifted window partitions applied over the one or more patches; and a patch un-merging engine for decreasing a respective size of each patch of the one or more patches.

[0075] The SOC 100 may also include additional processing blocks tailored to specific functions, such as a GPU 104, a DSP 106, a connectivity block 110, which may include fifth generation (5G) connectivity, fourth generation long term evolution (4G LTE) connectivity, Wi-Fi connectivity, USB connectivity, Bluetooth connectivity, and the like, and a multimedia processor 112 that may, for example, detect and recognize gestures. In one implementation, the NPU is implemented in the CPU 102, DSP 106, and/or GPU 104. The SOC 100 may also include a sensor processor 114, image signal processors (ISPs) 116, and/or navigation module 120, which may include a global positioning system
.


Not sure if this is useful to you @Diogenese. It's from their principal engineer explaining how they've increased their AI speed.



I've read it and to me it reads like the shell game; but I know nothing!

1685446569342.png



:)
 
  • Like
  • Haha
  • Fire
Reactions: 14 users

Vanman1100

Regular
Mercedes still aiming for the EQXX
NICE!
 
  • Like
  • Love
  • Fire
Reactions: 21 users

Labsy

Regular
Qualcomms "Zeroth" is back by the looks of it... miraculous! Given a leader in his field and pure genius PVDM insists we are 3 yrs in front with patents coming out of our arses.
Or....... Sean has been down playing the uptake of our teck, and has been taking kicks in the nuts from an angry mob for the team, in order to desperately keep the cat in the bag until.......
 
  • Like
  • Fire
  • Haha
Reactions: 14 users

alwaysgreen

Top 20
NO.

The point is... if the company wants a certain calibre of expertise, they have to be willing to pay the market rate. Either Antonio or Sean (I forget who) made that pretty clear. Anyone that's not a six pack short of a six pack would know that the market is pretty tough right now, adding to the cost and time attached to bringing in AND retaining staff.

Very bad card player. No question.

Is that how you talk to people on the street? Or does your partner hurl abuse at you?

Is that why you feel the need to abuse others for their opinion?
 
  • Like
Reactions: 11 users

goodvibes

Regular

BrainChip and CVEDIA to advance Edge AI and Neuromorphic computing



BrainChip and CVEDIA, a provider of AI-based video analytics solutions, have entered into a partnership to further develop Edge AI and neuromorphic computing.

edgeai.jpg

The partnership will focus on the integration of the CVEDIA-RT platform for video analytics running on BrainChip's Akida neuromorphic IP. The CVEDIA-RT platform has been developed to enable rapid creation and development for video analytics for security and surveillance, transportation, ITS, and retail applications.
By combining the two technologies, the companies will be able to create and train AI models that are optimised for event-based processing and inference at the edge without relying on cloud connectivity or high-power consumption and, in the process, addressing some of the most challenging problems in edge AI, such as perception, cognition, security and privacy.
"We're keen to partner with BrainChip to bring the benefits of CVEDIA-RT's analytics to Akida enabled Edge AI devices enabling developers to scale much faster without needing the cloud," said Arjan Wijnveen, CEO of CVEDIA. "Analysing and tuning the complete solution at the edge, on the target platform that optimises the applications and accelerates time to market, propels value added devices and services in AIoT."
"From object detection to analytics to inference, CVEDIA-RT is a powerful application that simplifies deploying vision AI on hardware," added Rob Telson, BrainChip Vice President of Ecosystem and Partnerships. "With CVEDIA, we progress on our mission to expand the deployment of AI solutions by enabling developers to rapidly and cost-effectively build and tune to new use cases."
Artificial Intelligence
Edge AI
Electronics
http://www.brainchip.com
http://www.cvedia.com

 
  • Like
  • Fire
  • Love
Reactions: 44 users
D

Deleted member 118

Guest
  • Like
Reactions: 3 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers,

News article....

GlobalFoundries.
US Government accredits GlobalFoundries to manufacture trusted semiconductors at their New York facility.

šŸš€.

Source: May 30th, 2023, GLOBAL NEWSWIRE.

Regards,
Esq.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 56 users

Quercuskid

Regular
Good Morning Chippers,

News article....

GlobalFoundries.
US Government accredits GlobalFoundries to manufacture trusted semiconductors at their New York facility.

Source: May 30th, 2023, GLOBAL NEWSWIRE.

Regards,
Esq.
So that could be brilliant for BRN.

BrainChip, a leading provider of ultra-low power edge AI technology, has successfully taped out its AKD1500 chip on GlobalFoundries' 22nm FD-SOI process. This marks an important milestone for BrainChip as it proves the portability of its technology and enables the company to take advantage of the benefits of GlobalFoundries' process.
 
  • Like
  • Love
  • Fire
Reactions: 42 users

Tothemoon24

Top 20

Know Labs' Non-Invasive Glucose Monitoring Technology Shows Improved Accuracy


Latest study demonstrates a machine learning model improved Bio-RFID™ sensor's accuracy for predicting blood glucose, using the Dexcom G6® as reference device





SEATTLE – May 30, 2023 –
Know Labs, Inc. (NYSE American: KNW) today announced the results of a new study titled, "Algorithm Refinement in the Non-Invasive Detection of Blood Glucose Using Know Labs' Bio-RFID Technology." The study demonstrates that algorithm optimization using a light gradient-boosting machine (lightGBM) machine learning model improved the accuracy of Know Labs' Bio-RFID™ sensor technology at quantifying blood glucose, demonstrating an overall Mean Absolute Relative Difference (MARD) of 12.9% – which is within the range of FDA-cleared blood glucose monitoring devices. Bio-RFID is a novel technology platform that uses electromagnetic energy in the form of radio waves to non-invasively capture molecular signatures and convert them into meaningful information.



Like all previous Know Labs clinical studies, this study was designed to assess the ability of the Bio-RFID sensor to non-invasively and continuously quantify blood glucose, using the Dexcom G6® continuous glucose monitor (CGM) as a proxy for the measurement of blood glucose. Unique from previous studies, Know Labs tested new data science techniques and trained a lightGBM model to predict blood glucose using 1,555 observations – or reference device values – from over 130 hours of data collection across five healthy participants. Using this model, Know Labs was able to predict blood glucose in the test set – the dataset that provides a blind evaluation of model performance – with a MARD of 12.7% in the normoglycemic range and 14.0% in the hyperglycemic range.



"This is a transformational time for Know Labs. We are constantly uncovering new learnings in our research, and in this case found that the lightGBM model is well-suited for these early datasets given the amount of data available," said Steve Kent, Chief Product Officer at Know Labs. "In our previous technical feasibility study we utilized a neural network, and as is best practice when developing algorithms, our data science team is constantly refining our machine learning models to understand and optimize system performance and accuracy. This positive development is another critical step in our data collection, algorithm refinement, and technical development."



This study, which was peer-reviewed by Know Labs' Scientific Advisory Board, builds upon recently released peer-reviewed research. In February, Know Labs published a proof-of-concept study that examined the efficacy of the Bio-RFID sensor using one participant, resulting in a MARD of 19.3%. Earlier this month, Know Labs also released study results validating the technical feasibility of Bio-RFID using a neural network (NN) model to predict readings of the Dexcom G6® as a proxy for blood glucose, which resulted in a MARD of 20.6%. The techniques used to analyze the data differed from previous analyses among the same (N=5) participant population, including: approach to feature reduction, stratification of the data by glycemic range and only from the arm corresponding to the reference device, and a different machine learning model. The improved accuracy as measured by a MARD of 12.9% achieved in this study is comparable to other independently validated MARD values reported for today's FDA-cleared, commercially available CGM devices.



"A MARD of 12.9% at this stage in our development is a truly remarkable feat. Our whole team is thrilled by these findings and the improved accuracy of our Bio-RFID technology as we continue to refine our approach," said Ron Erickson, CEO and Chairman at Know Labs. "Our goal with these ongoing clinical studies is to develop large volumes of data to enable further model development, which is a critical step in our goal to bring the first FDA-cleared non-invasive glucose monitoring device to the market so that millions of people can manage their diabetes more efficiently."



The full manuscript of this study will be submitted to a peer-review journal as Know Labs continues to prioritize external validation of the Bio-RFID technology. To view Know Labs' growing body of peer-reviewed research, visit
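For anyone wondering what the headline MARD figure actually measures: it's the mean absolute relative difference between the sensor's predictions and the reference device's readings, expressed as a percentage (lower is better). A quick sketch with made-up readings — these numbers are illustrative only, not data from the study:

```python
import numpy as np

def mard(pred, ref):
    """Mean Absolute Relative Difference (%): the accuracy metric
    commonly reported for continuous glucose monitors."""
    pred = np.asarray(pred, dtype=float)
    ref = np.asarray(ref, dtype=float)
    return float(np.mean(np.abs(pred - ref) / ref) * 100)

# Hypothetical readings in mg/dL: sensor predictions vs. reference CGM.
reference = [100, 120, 150, 180, 90]
predicted = [110, 115, 160, 170, 95]
print(f"MARD: {mard(predicted, reference):.1f}%")  # → MARD: 6.4%
```

So a reported MARD of 12.9% means the predictions were, on average, about 13% away from the reference values — which is why it sits in the same band as commercial CGMs, whose published MARDs are typically around 9–14%.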
 
  • Like
  • Fire
Reactions: 28 users
Top Bottom