Damo4
Mmmh, nice article about event-based cameras and their potential use cases, but what to think of those last two paragraphs?
Human Vision Inspires a New Generation of Cameras - And More
October 11, 2023 | Pat Brans
Thanks to a few lessons in biology, researchers have developed new sensor technology that opens up a world of new opportunities, including high-speed cameras that operate at low data rates.
In the broadest sense, the term neuromorphic applies to any computing system that borrows engineering concepts from biology. One set of notions that is particularly interesting for the development of electronic sensors is the spiking nature of neurons. Rather than fire right away, neurons build potential each time they receive a certain stimulus, firing only when a threshold is passed. The neurons are also leaky, losing membrane potential, which produces a filtering effect: If nothing new happens, the level goes down over time. "These behaviors can be emulated by electronics," said Ilja Ocket, program manager for Neuromorphic Computing at imec. "And this is the basis for a new generation of sensors."
Ilja Ocket, program manager for Neuromorphic Computing at imec
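To make that build-up, threshold and leak behavior concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python; the threshold and leak constants are made up for the example and are not taken from imec's work.

```python
# Minimal sketch of the leaky integrate-and-fire behavior described above
# (integrate, leak, fire on threshold); constants are illustrative only.
def lif_neuron(inputs, threshold=1.0, leak=0.95):
    """Return the time steps at which the neuron spikes."""
    potential = 0.0
    spikes = []
    for t, stimulus in enumerate(inputs):
        potential = potential * leak + stimulus   # leak, then integrate the stimulus
        if potential >= threshold:                # fire only when the threshold is passed
            spikes.append(t)
            potential = 0.0                       # reset after the spike
    return spikes

# A burst of small stimuli accumulates and triggers a spike; isolated ones leak away.
print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.0, 0.2, 0.0, 0.0]))  # -> [3]
```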
The best illustration of how these ideas improve sensors is the event-based camera, also called the retinomorphic camera. Rather than accumulate photons in capacitive buckets and propagate them as images to a back-end system, these cameras treat each pixel autonomously. Each pixel can decide whether enough change has occurred in photon streams to convey that information downstream in the form of an event.
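As a rough illustration of that per-pixel decision, the sketch below simulates it from a sequence of frames; a real sensor does this asynchronously in the pixel circuitry, and the contrast threshold used here is an assumed value, not a Prophesee or imec parameter.

```python
import numpy as np

# Illustrative sketch of an event-camera pixel's decision: each pixel compares the
# current (log) brightness to the level at its last event and emits an event only
# when the change exceeds a contrast threshold. Frames are used here purely to
# drive the simulation.
CONTRAST_THRESHOLD = 0.2  # assumed value for the example

def events_from_frames(frames, timestamps_us):
    reference = np.log(frames[0].astype(np.float64) + 1e-3)
    events = []  # (x, y, timestamp_us, polarity)
    for frame, t in zip(frames[1:], timestamps_us[1:]):
        log_i = np.log(frame.astype(np.float64) + 1e-3)
        delta = log_i - reference
        ys, xs = np.nonzero(np.abs(delta) >= CONTRAST_THRESHOLD)
        for y, x in zip(ys, xs):
            events.append((x, y, t, 1 if delta[y, x] > 0 else -1))
            reference[y, x] = log_i[y, x]  # pixel remembers the level of its last event
    return events
```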
"Imec gets involved when sensors do not produce classical arrays or tensors or matrices, but rather events," Ilja Ocket said. "We figure out how to adapt the AI to absorb event-based data and perform the necessary processing. Our spiking neural networks do not work with regular data. Instead, they take input from a time-encoded stream."
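As an illustration of what processing a time-encoded stream can look like, here is an event-driven variant of the earlier neuron sketch: the leak is computed from the gap between event timestamps, so nothing runs while no events arrive. It is a generic sketch, not a description of imec's actual pipeline.

```python
import math

# Event-driven leaky integrate-and-fire update (illustrative assumption): each
# incoming event carries a timestamp, and the leak is applied over the elapsed
# time since the neuron was last touched.
def run_event_driven_neuron(events_us, weight=0.45, tau_us=10_000, threshold=1.0):
    potential, last_t = 0.0, None
    spike_times = []
    for t in events_us:  # sorted timestamps of events feeding this neuron
        if last_t is not None:
            potential *= math.exp(-(t - last_t) / tau_us)  # leak over the silent gap
        potential += weight
        last_t = t
        if potential >= threshold:
            spike_times.append(t)
            potential = 0.0
    return spike_times

# Three closely spaced events push the neuron over threshold; a late one does not.
print(run_event_driven_neuron([0, 2_000, 4_000, 60_000]))  # -> [4000]
```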
"One of the important benefits of these techniques is the reduced energy consumption, completely changing the game," Ocket said. "We do a lot of work on AI and application development in areas where this benefit is the greatest, including robotics, smart sensors, wearables, AR/VR and automotive."
One of the companies imec has been working with is Prophesee, a nine-year-old business based in Paris. Its 120 employees in France, China, Japan and the U.S. design vision sensors and develop software to overcome some of the challenges that plague traditional cameras.
Event-based vision sensors
"Our sensor is fundamentally different from a conventional image sensor," said Luca Verre, CEO of Prophesee. "It produces events driven by changes in the scene, as opposed to full frames at fixed points in time. A regular camera captures images one after the other at a fixed rate, maybe 20 frames per second."
Luca Verre, CEO of Prophesee
This method, which is as old as cinematography, works fine if you just want to display an image or make a movie. But it has three major shortcomings for more modern use cases, especially when AI is involved. The first is that, because entire frames are captured and propagated even when most of the scene has barely changed, a lot of redundant data is sent for processing.
The second problem is that movement between frames is missed. Since snapshots are taken at regular intervals several times a second, anything that happens between data capture events doesn't get picked up.
The third problem is that traditional cameras have a fixed exposure time, which means each pixel's acquisition can be compromised by the lighting conditions. If there are bright and dark areas in the same scene, some pixels may end up overexposed and others underexposed at the same time.
"Our approach, which is inspired by the human eye, is to have the acquisition driven by the scene, rather than having a sensor that acquires frames regardless of what's changing," Verre said. "Our pixels are independent and asynchronous, making for a very fast and efficient system. This suppresses data redundancy at the sensor level, and it captures movement, regardless of when it occurs, with microsecond precision."
"While this is not time continuous, it is a very high time granularity for any natural phenomenon," Verre said. "Most of the applications we target don't need such high time precision. We don't capture unnecessary data and we don't miss information, two features that make a neuromorphic camera a high-speed camera, but at a low data rate."
"Because the pixels are independent, we don't have the problem of fixed exposure time," Verre added. "Pixels that look at the dark part of the scene are independent from the ones looking at bright parts of the scene, so we have a very wide dynamic range."
Because less redundant data is transmitted to AI systems, less processing is needed and less power is consumed. It becomes much easier to implement edge AI, putting inference closer to the sensor.
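A back-of-envelope comparison gives a feel for the kind of saving involved. The 20 frames per second comes from Verre's example above; the resolution, event rate and bytes per event are assumed, purely illustrative numbers, not vendor figures.

```python
# Back-of-envelope data-rate comparison with illustrative, assumed numbers.
frame_rate = 20                      # frames per second, as in Verre's example
resolution = 1280 * 720              # pixels (assumed)
bytes_per_pixel = 1
frame_camera_bps = frame_rate * resolution * bytes_per_pixel   # ~18.4 MB/s

events_per_second = 500_000          # assumed: a moderately active scene
bytes_per_event = 8                  # x, y, timestamp, polarity packed together (assumed)
event_camera_bps = events_per_second * bytes_per_event         # ~4.0 MB/s

print(f"frame camera: {frame_camera_bps / 1e6:.1f} MB/s, "
      f"event camera: {event_camera_bps / 1e6:.1f} MB/s")
```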
The IMX 636 event-based camera module, developed with Sony, is a fourth-generation product. Last year, Prophesee released the EVK4 evaluation kit for the IMX 636; it is aimed at industrial vision and comes in a rugged housing, but it will work for all applications. (Source: Prophesee)
Audio sensors and beyond neuromorphic
"Automotive is an important market for companies like Prophesee, but it's a long play," Ocket said. "If you want to develop a product for autonomous cars, you'll need to think seven to 10 years ahead. And you'll need the patience and deep pockets to sustain your company until the market really takes off."
In the meantime, event-based cameras are meeting the needs of several other markets. These include industrial use cases that require ultra-high-speed counting, particle size monitoring and vibration monitoring for predictive maintenance. Other applications include eye tracking, visual odometry and gesture detection for AR and VR. And in China, there is a growing market for small cameras in toy animals. The cameras need to operate at low power, and the most important thing for them to detect is movement. Neuromorphic cameras meet this need, operating on very little power, and fitting nicely into toys.
Neuromorphic principles can also be applied to audio sensors. Like the retina, the cochlea does not sample its input at fixed intervals; it just conveys changes in sensory input. So far, there are not many examples of neuromorphic audio sensors, but that's likely to change soon since audio-based AI is now in high demand. Neuromorphic principles can also be applied to sensors with no biological counterpart, like radar or LiDAR.
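As a sketch of the same change-driven idea applied to audio, the snippet below turns the energy envelope of a single frequency band into "louder"/"quieter" events; all parameters are invented for illustration and do not describe any particular sensor.

```python
# Sketch of a cochlea-like audio channel under the "report only changes" principle:
# one frequency band's energy envelope is turned into events whenever it rises or
# falls by more than a threshold. Parameters are assumed for illustration.
def envelope_events(envelope, timestamps, delta=0.1):
    reference = envelope[0]
    events = []  # (timestamp, +1 for louder, -1 for quieter)
    for level, t in zip(envelope[1:], timestamps[1:]):
        if abs(level - reference) >= delta:
            events.append((t, 1 if level > reference else -1))
            reference = level  # remember the level at the last event
    return events

# A rising then falling envelope produces a few "louder" events and one "quieter" event.
print(envelope_events([0.1, 0.15, 0.3, 0.5, 0.5, 0.2], [0, 1, 2, 3, 4, 5]))
```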
But researchers are increasingly convinced that making a silicon version of the biological structures is not the best idea. The biggest impact may lie beyond neuromorphic, making the best use of both biology and electronics.
"If you strip it down to its computational behavior, you could improve on biology," Ocket said. "Instead of emulating spiking neurons with thresholds, you can just apply time-based computational behavior to very simple timing circuits, technology from the 1950s and 1960s. If you hook them together and find a way to train them, you can go much lower in power consumption than if you simply emulate spiking neurons in electronic form."
Seems to me that he is just pointing out there's an underdeveloped/under-researched idea that could potentially use less power.
It doesn't sound like he knows how it would work either, as he questions whether or not it can be trained.
I think he was just pointing out that we may not need to naively replicate the brain as best we can, and should instead look into a hybrid system.
Almost as if to say: stop assuming the brain is the best learning computer; there might be something better.
Either way, I don't think it matters: his work at imec is neuromorphic-focused, so I doubt he would act against his own interests or imply NM is a waste of effort.
Great post though, @Frangipani; it's a great summary of how the technology is being adopted!