Hi Bravo, I don't see why not, unless Qualcomm is already working on a new iteration of Snapdragon that incorporates features similar to AKIDA. How feasible that is, I'm not entirely sure.
As far as I know, the current Snapdragon product isn't neuromorphic, isn't event-based, doesn't operate at ultra-low power, and doesn’t support real-time on-device learning.
Judd Heape, VP of Product Management for Camera, Computer Vision, and Video at Qualcomm Technologies, was quoted in a June 2023 EE Times article saying, “These event-based sensors are much more efficient because they can be programmed to easily detect motion at very low power. When there’s no movement or change in the scene, the sensor consumes almost no power. So that’s really interesting to us.”
In that context, he was referring to Prophesee and image-based sensors, but I would have thought the underlying principle still applies: event-based sensors just make sense. And AKIDA is event-based.
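To make Heape's point concrete, here is a toy sketch of the event-based idea: instead of streaming full frames, the sensor emits an event only for pixels whose brightness changes beyond a threshold, so a static scene produces no output at all. This is purely illustrative (my own function and parameter names, nothing to do with Prophesee's or BrainChip's actual APIs):

```python
def events(prev, curr, thresh=15):
    """Toy event-camera model: emit (row, col, polarity) tuples only for
    pixels whose brightness changed by more than `thresh`. A static
    scene yields an empty list, which is why such sensors can idle at
    near-zero power (illustrative sketch only)."""
    out = []
    for y, (p_row, c_row) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(p_row, c_row)):
            d = c - p
            if abs(d) > thresh:
                out.append((y, x, 1 if d > 0 else -1))
    return out

frame = [[10] * 4 for _ in range(4)]       # static 4x4 scene
moved = [row[:] for row in frame]
moved[2][1] = 200                          # one pixel brightens

print(events(frame, frame))  # [] -> no change, no events, no work
print(events(frame, moved))  # [(2, 1, 1)] -> a single positive event
```

No movement means no events and therefore almost no downstream computation, which is exactly the efficiency argument in the quote.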
We also know that a drone company is exploring a combination of AKIDA and Prophesee’s camera. Not Snapdragon. That says something.
So why wouldn’t Qualcomm want to integrate our technology to gain a foothold in new markets—especially ones that are battery-powered and highly power-constrained? Surely paying for a licence would be chicken feed for the likes of Qualcomm, so I honestly don't know why they wouldn't be considering it.
Maybe someone with deeper technical insight can weigh in here.
EETimes article about the Prophesee-Qualcomm deal
Full article here: https://www.eetimes.com/experts-weigh-impact-of-prophesee-qualcomm-deal/
Qualcomm sees NPUs as best suited to low-power devices, preferring the CPU where latency is the issue (and models are small), or the GPU where LLMs are concerned:
Qualcomm Hexagon AI Hybrid processor selection
https://www.qualcomm.com/content/da...I-with-an-NPU-and-heterogeneous-computing.pdf
From the whitepaper: "[M]ost generative AI use cases can be categorized into on-demand, sustained, or pervasive. For on-demand applications, latency is the KPI since users do not want to wait. When these applications use small models, the CPU is usually the right choice. When models get bigger (e.g., billions of parameters), the GPU and NPU tend to be more appropriate. For sustained and pervasive use cases, in which battery life is vital and power efficiency is the critical factor, the NPU is the best option."
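That selection logic can be paraphrased as a simple decision rule. This is my own toy restatement of the whitepaper's categories (the function name, the 1-billion-parameter cutoff, and the return strings are all assumptions for illustration, not Qualcomm's):

```python
def pick_processor(use_case: str, model_params: int) -> str:
    """Toy restatement of the whitepaper's heuristic for matching a
    generative-AI use case to a compute unit (illustrative only)."""
    if use_case == "on-demand":
        # Latency is the KPI: small models suit the CPU; models with
        # billions of parameters are better served by the GPU or NPU.
        return "CPU" if model_params < 1_000_000_000 else "GPU/NPU"
    if use_case in ("sustained", "pervasive"):
        # Battery life and power efficiency dominate, so the NPU wins.
        return "NPU"
    raise ValueError(f"unknown use case: {use_case!r}")

print(pick_processor("on-demand", 100_000_000))    # CPU
print(pick_processor("pervasive", 7_000_000_000))  # NPU
```

The notable point for us is the bottom branch: whenever power efficiency is the critical factor, the answer is always the NPU, and that is exactly the territory AKIDA plays in.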
Qualcomm has several analog-type (compute-in-memory) NPU patents:
US2023025068A1 HYBRID MACHINE LEARNING ARCHITECTURE WITH NEURAL PROCESSING UNIT AND COMPUTE-IN-MEMORY PROCESSING ELEMENTS 20210721
WO2023019104A1 SPARSITY-AWARE COMPUTE-IN-MEMORY 20210809
...
PS: Of course, as well as low power, Akida can handle latency-sensitive cases, and it can implement LLMs in smaller bites with the aid of RAG (Retrieval Augmented Generation).
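For anyone unfamiliar with RAG, the gist is simple: rather than asking a huge model to memorise everything, you retrieve only the few most relevant snippets from a document store and feed just those to a small on-device model. A minimal, library-free sketch of the retrieval step (real systems use vector embeddings rather than keyword overlap; all names here are my own):

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Naive RAG retrieval: rank docs by how many words they share
    with the query and return the top k (illustrative sketch only)."""
    q = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Hand the model only the retrieved context plus the question,
    keeping the on-device workload small."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = ["akida supports on-device learning",
        "snapdragon targets mobile handsets",
        "event cameras emit sparse events"]
print(retrieve("does akida do on-device learning", docs, k=1))
```

Because only the retrieved "bite" goes to the model, the model itself can stay small, which is what makes the approach attractive for power-constrained edge silicon.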
PPS: Perhaps instead of Hexagon, Qualcomm should have called their AI system Cerberus.