“If you can run a model, inference-wise, on the device…that’s a huge advantage for us…We have the opportunity to expand the capacity of this low-power engine [developed for smartphones] and apply it to large language models.” - Akash Palkhiwala, Qualcomm
Qualcomm views on-device AI as key to reducing cloud costs associated with AI processing
As bullishness, or at least perceived bullishness, around artificial intelligence (AI) drives up stock prices and market caps for companies like NVIDIA, AMD and Marvell, Qualcomm Technologies CFO Akash Palkhiwala this week offered a look at what these movements mean for the San Diego-based firm. With existing expertise and products for on-device AI, Palkhiwala told investors at the JP Morgan Global Technology, Media and Communications Conference that Qualcomm’s technology roadmap puts it in a “unique place.”
To set the stage, large language model-based solutions like ChatGPT and Midjourney are capturing hearts, minds and dollars given the broad applicability of AI-assisted workflows. Two things to note: running large language models (LLMs) like those behind ChatGPT and Midjourney consumes a lot of cloud computing resources, which, in turn, cost a lot of money; and these types of LLMs, which ingest enormous amounts of data, are not really optimized for specific enterprise use cases. To borrow an anecdote Michael Dell shared earlier this week at Dell Tech World: if a bank wants to use AI for fraud detection, it doesn’t need an AI solution built on an LLM that has consumed hundreds of years of literature.
What does that mean for Qualcomm? “This is a rapidly evolving industry…over the last several months and it’s going to continue to be that over the next year or two,” Palkhiwala said. “If you think about the hyperscalers, there’s a tremendous focus and effort on having large language models running in the cloud, but also reducing the size so you can run it on the device.” Given that divergence in LLM sizes and Qualcomm’s proven capabilities around on-device AI, that matters, he said, “because you could run the smaller models on the device with very good accuracy and performance which we can take across our ecosystem.”
The key point here is that, for Qualcomm, on-device AI maps to the company’s ongoing strategy to diversify beyond smartphones. As the company looks to grow its consumer and industrial IoT businesses, as well as its booming automotive business, on-device AI technology developed for handsets can be tweaked and ported across all lines of business.
Palkhiwala explained: “From an AI perspective, our view is as large language models come into play, a lot of the inference is going to happen on the device rather than in the cloud…The cost is definitely way cheaper on the device side,” not to mention considerations around data privacy and security, and application-specific latency needs. “If you can run a model, inference-wise, on the device…that’s a huge advantage for us…We have the opportunity to expand the capacity of this low-power engine [developed for smartphones] and apply it to large language models…That’s what creates an advantage for us going forward…It’s something that creates a competitive advantage for us across all edge devices.”
In terms of how that drives new revenue for Qualcomm, Palkhiwala said the AI engine currently integrated into the company’s mobile platforms needs to evolve, both in the size of the engine and in memory bandwidth, to better support LLMs. As that evolution makes its way into different products, it increases the per-device content opportunity. Another driver relates to the diversification strategy: Palkhiwala gave the example of Qualcomm’s push into PCs; as that business ramps, having on-device AI content “improves the probability of success in those areas.” He said the company is also exploring additional avenues for monetization.