Neurala's journey
From Mars rovers to factory floors, from NASA grants to commercial success, leading Neurala has been an amazing ride!
Vice President, Emergent AI at Analog Devices
September 23, 2025
In 2006, AI was not mainstream. One of my earliest memories of Neurala is pitching the idea of using neural networks to solve problems in robotics, computer vision, defense, biology, and more, and being met with chuckles before eventually being shown the door. When we launched Neurala, we were lonely, but we had a vision: build AI that learns like the human brain, and put it where it matters and has an impact on our lives, not necessarily in the cloud, but in the real world. Edge-native, brain-inspired, but more importantly, practical: like its biological counterpart, our AI was designed to solve problems in ways traditional machine learning could not.
It was not a popular idea back then, but we stuck to it. And 100 million Neurala-powered devices and counting (and lots of sweat and tears…) later, we were proven right.
And what a journey it has been!
Since we left Boston University in 2013 (Neurala was in stealth for a few years), we invented Lifelong DNN (L-DNN), an AI inspired by the way our brains learn and compute: an AI that adapts continuously with minimal data and minimal compute power. We helped NASA design autonomous systems to navigate the surface of Mars, and DARPA design neuromorphic brains. We brought that same tech down to Earth, powering drones, cameras, mobile devices, and industrial lines with intelligence that adapts in real time, learning from as little as a single image, much like humans learn continuously from experience, at low power, at the “edge” (before Edge was a thing!).
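To make the idea of learning from a single image concrete, here is a minimal sketch of incremental, prototype-based learning: one running mean per class, updated example by example, with no replay of old data. This is a generic textbook illustration of continual learning, not Neurala's actual L-DNN algorithm; the class and method names are mine.

```python
import numpy as np

class IncrementalPrototypeClassifier:
    """Toy continual learner: keeps one running-mean "prototype" per
    class and classifies by nearest prototype. A brand-new class can
    be added from a single example, and no old data is ever revisited.
    Illustrative only -- NOT Neurala's L-DNN."""

    def __init__(self):
        self.prototypes = {}   # label -> mean feature vector
        self.counts = {}       # label -> number of examples seen

    def learn(self, features, label):
        """Fold one example into the class prototype (O(d) work)."""
        x = np.asarray(features, dtype=float)
        if label not in self.prototypes:
            self.prototypes[label] = x.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            # incremental mean update: m += (x - m) / n
            self.prototypes[label] += (x - self.prototypes[label]) / self.counts[label]

    def predict(self, features):
        """Return the label whose prototype is nearest."""
        x = np.asarray(features, dtype=float)
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(x - self.prototypes[lbl]))

clf = IncrementalPrototypeClassifier()
clf.learn([1.0, 0.0], "cat")     # one example is enough to add a class
clf.learn([0.0, 1.0], "dog")
print(clf.predict([0.9, 0.1]))   # -> cat
```

The appeal for edge devices is that each update touches only one vector, so both memory and compute stay constant no matter how long the device keeps learning.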
In the past two years, Neurala achieved something truly remarkable: more than 100% year-over-year revenue growth, partnerships with industry leaders (Sony, Lattice, and more), and a product, Neurala VIA, that is modular, scalable, and insanely simple for manufacturers.
Commercial success proved that our vision of edge-native, efficient, privacy-respecting AI was not only revolutionary technology, but also profitable, scalable, and desperately needed.
More importantly, I am proud of the “Neuraliens”, the most talented minds I have ever had the privilege to work with. We built technology and products, and deployed our AI in tens of millions of devices, from cell phones to drones to cameras inspecting the everyday products we all buy in stores.
But we did not just build AI-powered products. We built a culture of openness and pursuit of scientific truth (ideas, not people or their hierarchies, were always the protagonists at Neurala), and translated it into useful AI, a company, and a future for AI beyond the massive hype that came to surround the field over the years.
One of my early investors, Warren Kats, once told me: “Remember Max, building a company is 1% inspiration, and 99% perspiration!”. How right he was! A few buckets of sweat later, and with Neurala breaking revenue records, it is time for my next leap. Today, I am leading the Emergent AI initiative at Analog Devices. The reason is simple: the world is ready for what’s next.
What’s next in AI?
My journey in AI has been long. I still remember my first 5-neuron simulation back at the University of Trieste. It was 1996, and I was an undergrad tinkering with multi-layer perceptrons. Five neurons, five hours, on a “state-of-the-art” Mac.
Almost 30 years later (no, AI is not an overnight sensation), something fundamental is changing: AI is finally ready to leave the data center. Intelligence is moving to its next frontier, from the digital into the physical, embedded in the sensors, chips, and devices that shape our lives, from cars to robots to medical devices.
But to make this jump, the challenges are an order of magnitude harder than building AI for data centers. Brains operate with ~86 billion neurons and over 100 trillion synapses on just ~20 watts. In other words, they accomplish on tens of watts what today’s AI hardware needs megawatts to do. State-of-the-art GPUs draw 400–1200 watts per unit, and clusters training modern AI models easily hit 10 megawatts or more. That makes biological brains roughly 10,000 to 1,000,000 times more energy efficient, a rough approximation, of course, since no one fully understands the brain yet. But we do know that some of that efficiency comes from integrating memory and compute, and from sparse, spike-driven signaling. I know, because it was the subject of my PhD thesis!
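The power ratios above are easy to sanity-check with back-of-envelope arithmetic. The figures below are the rough, public ballpark numbers quoted in the text, not measurements, and this compares raw power draw only, ignoring differences in throughput:

```python
# Back-of-envelope check of the power ratios quoted above.
brain_watts = 20            # human brain, ~86e9 neurons, ~1e14 synapses
gpu_watts = (400, 1200)     # a single modern training GPU
cluster_watts = 10e6        # a large training cluster (~10 MW)

print(gpu_watts[0] / brain_watts)    # one low-end GPU vs. brain: 20.0x
print(gpu_watts[1] / brain_watts)    # one high-end GPU vs. brain: 60.0x
print(cluster_watts / brain_watts)   # cluster vs. brain: 500000.0x
```

The cluster-to-brain ratio of ~500,000x lands inside the quoted 10,000–1,000,000x range; the wider bounds come from also accounting for how much useful computation each side performs per watt.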
For the next wave of intelligent machines to enter our world, we need to bring this biology-inspired efficiency to life. Practically, this means collapsing the boundary between sensing and thinking: designing low-power, low-latency, physically embedded AI systems that operate in real time, ingesting sensory data directly from the physical world.
This is my mission at ADI.
As the world’s leading analog and mixed-signal semiconductor company, with amazing capabilities in sensing, signal conditioning, and edge compute, Analog Devices is in pole position to make this shift happen and transition AI from massive data centers, straining power infrastructure, to the real world. The future of AI lies in enabling intelligent computation to live closer to where it matters: in your phone, in your car’s battery management system, in healthcare devices, and, why not, even in humanoid robots!
In this new paradigm, we blur the line between sensing, computing, and acting. Take biological vision or touch: when a photon hits the eye or a finger touches a surface, there is no clear division between sensing and processing. Sensing is computation, performed by an exposed part of the nervous system.
At ADI, we know this well: with billions of devices already sensing and acting on the physical world, we are in the front seat of the next revolution, building novel AI compute frameworks that merge sensing and intelligence at a fraction of the power and latency required today. Without this shift, the move from server-bound AI to edge-native AI will not happen.
A practical example clarifies why. Take humanoid robots: much of their power budget is burned by AI algorithms running on GPUs just to keep them balanced, aware, and responsive. Nature does this better: in biology, intelligence is fused with sensors and actuators, making it fast and power-efficient. That is incompatible with the dominant AI paradigm of offloading data to centralized, power-hungry processors.
This new AI is needed for the next trillion devices: a generation of artificial intelligence that does not simply classify pixels, but learns from them, in real time, on-device, and within the constraints of our physical world and its unforgiving laws of energy and latency.
Neurala was born from the belief that AI should be useful. At ADI, we’re taking that idea to the next level.
To the entire Neurala team, partners, and customers: thank you for believing in the mission before it was cool. We built something real that touched millions of lives. And of course, without our investors, Benjamin Lambert, Cesare Maifredi, Julien Mialaret, Tony Palcheck, Tim Draper, Katie Rae, and many more, Neurala could not have existed.
Now it’s time to build what comes next.
Max Versace