Anil Mankar has "liked" a LinkedIn post from Lantronix.
Lantronix says it has had a “strong relationship” with Qualcomm Technologies for more than 15 years. In October 2024, Lantronix announced five new System-in-Package (SiP) solutions powered by Qualcomm Technologies’ chipsets, aimed at AI/ML and video applications at the edge.
Anyhoo, I know some will argue that Anil liking a post doesn’t mean much and he could’ve just as easily liked a photo of a new power drill. Fair enough, but I’m hoping there’s more than a trip to Bunnings behind this like.
The post in question relates to Lantronix, and if you watch the video I found (published 17 Sept 2025, featuring Lantronix CEO Saleel Awsare), I think you'll find it pretty interesting.
At around the 1:30 mark, Saleel describes a use case involving drones that recognise an event and only transmit data after that event occurs, rather than continuously streaming from the edge back to a base station. That sounds like event-based computing: the same paradigm at the heart of neuromorphic processing. Think Prophesee + BrainChip.
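For anyone who hasn't come across the idea, here's a rough sketch of the difference in Python. Every name and number in it is mine, purely to illustrate the paradigm; it's not anyone's actual implementation:

```python
# Toy sketch: "transmit on event" vs "stream everything".
# All names and numbers are hypothetical, for illustration only.

import random
import time

def capture_frame() -> bytes:
    """Stand-in for a camera read; returns a dummy VGA-sized payload."""
    return bytes(640 * 480)

def detect_event(frame: bytes) -> bool:
    """Stand-in for an on-board detector (e.g. a model flagging
    'something interesting happened'). Here it just fires ~5% of the time."""
    return random.random() < 0.05

def run(duration_s: float = 1.0) -> None:
    sent_streaming = 0      # bytes a naive always-on uplink would send
    sent_event_driven = 0   # bytes an event-triggered uplink would send
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        frame = capture_frame()
        sent_streaming += len(frame)         # stream every frame regardless
        if detect_event(frame):
            sent_event_driven += len(frame)  # uplink only when an event fires
    print(f"streaming uplink:    {sent_streaming:,} bytes")
    print(f"event-driven uplink: {sent_event_driven:,} bytes")

if __name__ == "__main__":
    run()
```

The point being: the uplink (and the power budget behind it) scales with how often something interesting happens, not with the frame rate.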
Saleel goes on to discuss work supporting the U.S. Government. He also mentions collaborations with 10 drone manufacturers, focusing on enabling advanced camera systems. He suggests the drone vertical alone could be a $10 billion opportunity by 2030 (if I heard him correctly).
This kind of low-power, high-efficiency, event-driven vision application for drones is exactly where BrainChip + Prophesee shine. Think along the same lines as the AQUIMERA + Prophesee + BrainChip project, the Neurobus and BRAVE1 initiatives, or the Data Science UA + BrainChip reforestation drone project. Not to mention the ISL / RTX / U.S. Air Force Research Lab alliance, which you'd have to think will eventually involve drone-based autonomy at the edge.
Outside of neuromorphic computing, I believe there are only a few architectures that can approximate this kind of efficiency, and I don't think any of them do it natively (toy numbers after the list):
- Edge NPUs + event cameras (Qualcomm Hexagon, Nvidia Jetson) can simulate event triggers but still rely on frame-based AI.
- FPGA trigger logic can pre-filter events before hitting the CPU/GPU, but it’s not learning-adaptive.
- TinyML on MCUs can handle simple wake-word-type events but lacks true spiking efficiency.
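To put some rough numbers on that last point, here's a back-of-envelope sketch. All figures are made up for illustration, not benchmarks of any real chip:

```python
# Back-of-envelope: a frame-based NPU runs the full network on every
# frame even when nothing changes, while an event/spike pipeline only
# does work roughly proportional to scene activity.
# Every constant below is an assumption for illustration.

FRAME_RATE = 30        # fps
PIXELS = 640 * 480     # sensor resolution
OPS_PER_PIXEL = 100    # assumed ops per pixel per inference
ACTIVITY = 0.02        # assumed fraction of pixels changing per frame

frame_based_ops = FRAME_RATE * PIXELS * OPS_PER_PIXEL
event_based_ops = FRAME_RATE * int(PIXELS * ACTIVITY) * OPS_PER_PIXEL

print(f"frame-based:  {frame_based_ops:.3e} ops/s")
print(f"event-driven: {event_based_ops:.3e} ops/s")
print(f"ratio: ~{frame_based_ops / event_based_ops:.0f}x")
```

With these made-up numbers, the event-driven path does roughly 50x less work, which is the whole pitch for spiking silicon on a drone's power budget.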
I believe neuromorphic processors are uniquely suited to the drone applications that Saleel Awsare described in the video.
If Lantronix are already integrating event-based vision into their drone platforms, then I'm hoping that a BrainChip collaboration might be a natural fit.
Time will tell.