Realistic Retinas Make Better Bionic Eyes
Following nature’s example more closely could lead to better visual sensors
EDD GENT
23 MAR 2022
New visual sensors inspired by the human eye could help the blind see again and provide powerful new ways for machines to sense the world around them. Recent research shows that more faithfully copying nature’s hardware could be the key to replicating its powerful capabilities.
Efforts to build bionic eyes have been underway for several decades, with much of the early research focused on creating visual prostheses that could replace damaged retinas in humans. But in recent years, there’s been growing recognition that the efficient and adaptable way in which the eye processes information could prove useful in applications where speed, flexibility, and power constraints are major concerns. Among these are robotics and the Internet of things.
Now, a pair of new research papers describe significant strides toward replicating some of the eye's capabilities by more closely imitating the function of the retina—the collection of photoreceptors and neurons at the back of the eye responsible for converting light into visual signals for the brain. “It’s a very exciting extension from where we were before,” says Hongrui Jiang, a professor of engineering at the University of Wisconsin–Madison. “These two papers [explain research aimed at] trying to mimic the natural visual system’s performance, but on the retina level right at the signal-filtering and signal-processing stage.”
One of the most compelling reasons to do this is the retina’s efficiency. Most image sensors rely on components called photodiodes, which convert light into electricity, says Khaled Salama, professor of electrical and computer engineering at King Abdullah University of Science and Technology (KAUST), but photodiodes constantly consume electricity, even when they’re on standby, which leads to high energy use. In contrast, the photoreceptors in the retina are passive devices that convert incoming light into electrical signals that are then sent to the brain. In an effort to recreate this kind of passive light-sensing capability, Salama’s KAUST team turned to an electrical component that doesn’t need a constant source of power—the capacitor.
“The problem is that capacitors are not sensitive to light,” says Salama. “So we decided to embed a light-sensitive material inside the capacitor.” The team sandwiched a layer of perovskite—a material prized for its electrical and optical properties—between two electrodes to create a capacitor whose ability to store energy, or capacitance, changed in proportion to the intensity of the light to which it is exposed. The researchers found that the resulting device mimicked the characteristics of the rod-cell photoreceptors found in the retina.
To see if the devices they created could be used to make a practical image sensor, the team fabricated a 100-by-100 array of them, then wired them up to simple circuits that converted the sensors’ change in capacitance into a string of electrical pulses, similar to the spikes of neural activity that rod cells use to transmit visual information to the brain. In a paper published in the February issue of Light: Science & Applications, they showed that a special kind of artificial neural network could learn how to process these spikes and recognize handwritten numbers with an accuracy of roughly 70 percent.
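The conversion described above, a light-dependent capacitance driving a circuit that emits spikes, can be sketched as a toy RC relaxation oscillator. Everything below (the linear capacitance model, the component values, the charge-and-reset scheme) is an illustrative assumption, not the KAUST team's actual circuit:

```python
import numpy as np

def cpr_capacitance(intensity, c_dark=1.0e-9, delta_c=0.5e-9):
    # Assumption: capacitance shifts linearly with normalized
    # light intensity in [0, 1]; component values are illustrative.
    return c_dark + delta_c * intensity

def spike_train(intensity, duration=0.1, r=1.0e6, dt=1e-5,
                v_supply=1.0, v_thresh=0.6):
    # Relaxation oscillator: charge the light-dependent capacitor
    # through a resistor, emit a spike and reset at the threshold.
    c = cpr_capacitance(intensity)
    v, spikes = 0.0, []
    for step in range(int(duration / dt)):
        v += (v_supply - v) / (r * c) * dt  # Euler step of RC charging
        if v >= v_thresh:
            spikes.append(step * dt)
            v = 0.0
    return spikes

dark, bright = spike_train(0.0), spike_train(1.0)
# In this toy circuit a larger capacitance charges more slowly, so the
# spike rate falls as intensity rises; a real readout circuit can map
# capacitance to firing rate in either direction.
```

Note that in this sketch brighter light yields fewer spikes per second; the point is only that a passive, light-tuned capacitance is enough to set a spike rate, with no powered photodiode in the loop.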
An array of the bio-inspired sensors produces strings of electrical pulses in response to light, which are then processed by a spiking neural network. DR. MANI TEJA VIJJAPU/KING ABDULLAH UNIVERSITY OF SCIENCE AND TECHNOLOGY
Thanks to its incredibly low energy requirements, Salama says future versions of the KAUST team’s bionic eye could be a promising solution for power-constrained applications like drones or remote camera systems. “The best application for something like this is security, because often nothing is happening,” he says. “You are wasting a lot of power to take images and take videos and process them to figure out that there is nothing happening.”
Another powerful capability of our eyes is the ability to rapidly adapt to changing light conditions. Image sensors can typically operate only within a limited range of illuminations, says Yang Chai, an associate professor of materials science at the Hong Kong Polytechnic University. Because of this, they require complex workarounds like optical apertures, adjustable exposure times, or complex postprocessing to deal with varying real-world light conditions. By contrast (pun intended), when you transition from a dark cinema hall to a brightly lit lobby, it takes only a short while for your eyes to adjust automatically. That’s thanks to a mechanism known as visual adaptation, in which the sensitivity of photoreceptors changes automatically depending on the level of illumination.
In an effort to mimic that adaptability, Chai and his colleagues designed a new kind of image sensor whose light sensitivity can be modulated by applying different voltages to it. In a paper in Nature Electronics, his team showed that an array of these sensors could operate over an even broader range of illuminations than the human eye. They also paired the array with a neural network and showed that the system’s ability to recognize handwritten numbers improved drastically as the sensors adapted, going from 9.5 percent to 96.1 percent accuracy as it adjusted to bright light and from 38.6 percent to 96.9 percent as it adjusted to darkness. These capabilities could be very useful for machines that have to operate in a wide range of lighting conditions. One application for which it will be quite helpful, says Chai, is in a self-driving car, which has to keep track of its position with respect to other objects on the road as it enters and exits a dark tunnel.
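The adaptation behavior described above, sensitivity that shifts with the prevailing light level, can be sketched as a simple divisive gain control. This is a generic illustration, not the voltage-modulated mechanism of Chai's sensors; the starting gain, time constant, and normalization are assumptions:

```python
import numpy as np

def adapt(frames, tau=10.0):
    # Assumption: each pixel's gain tracks a running average of recent
    # intensity (time constant tau, in frames), and the response is the
    # current frame divided by that background estimate.
    background = np.full_like(frames[0], 0.5, dtype=float)
    responses = []
    for frame in frames:
        background += (frame - background) / tau       # slow background estimate
        responses.append(frame / (background + 1e-6))  # divisive gain control
    return responses

# Step from a dark scene (intensity 0.05) to a bright one (0.9),
# like walking out of a cinema hall into the lobby:
frames = [np.full((2, 2), 0.05)] * 30 + [np.full((2, 2), 0.9)] * 30
responses = adapt(frames)
# The first bright frame is "blinding" (response well above 1), then
# the response settles back toward 1 as the gain adapts.
```

The same idea run in reverse (bright to dark) reproduces the slow recovery of sensitivity in a dark room.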
While there’s still a long way to go before bionic eyes approach the capabilities of their biological cousins, Jiang says the kinds of in-sensor adaptation and signal processing achieved in these papers show why researchers should be paying more attention to the finer details of how the retina achieves its impressive capabilities. “The retina is an amazing organ,” he says. “We’re only just scratching the surface.”
The capacitive perovskite light sensor is brilliant, but despite citing Masquelier & Thorpe (ref. 69), the authors used spike-rate coding and so failed to take full advantage of STDP with N-out-of-M coding. Mind you, they only had a small sensor array and a similarly sparse (single-layer) SNN.
https://www.nature.com/articles/s41377-021-00686-4#Sec9
Herein, we demonstrate a light-intensity-sensitive capacitive photoreceptor (CPR) that mimics the retina’s rod cells. The capacitance of CPRs depends on visible-light illumination, which can enable an artificial retina when integrated with peripheral electronics [9]. To fabricate CPRs with excellent light-tunable properties, we require materials that are photosensitive and whose dielectric properties can be tuned. To combine exceptional optoelectronic and ferroelectric properties, we prepared a hybrid composite of methylammonium lead bromide perovskite (MAPbBr3) and the terpolymer polyvinylidene fluoride trifluoroethylene-chlorofluoroethylene (PVDF-TrFE-CFE). We demonstrate the fabrication and characterization of a flexible CPR with a frequency-dependent capacitance in the 1–100 kHz range.
The hybrid perovskite fillers in the ferroelectric polymer modulate its dielectric properties in proportion to the intensity and wavelength of the incident light. The capacitive change with respect to the wavelength of the incident light mimics the spectral-sensitivity curve of human photopic vision, with the maximum response in the greenish-yellow regime. The photoresponse of these CPRs is reproducible, with negligible hysteresis. Furthermore, the fabricated device is resistant to humidity and oxygen because the hybrid perovskites are encapsulated in the hydrophobic ferroelectric terpolymer (FP). To the best of our knowledge, we report the longest stability measurement of hybrid perovskites (~129 weeks), owing to their PVDF-TrFE-CFE encapsulation.
The proposed device is modeled with an RC network and integrated with a novel low-power spike oscillator to generate a spike train with a firing rate proportional to the incident light’s intensity and wavelength (color). The functionality of the proposed CPR and sensing circuit is then demonstrated in simulation by recognizing handwritten digits from the MNIST dataset using a spiking neural network trained in an unsupervised manner.
...
This SNN is a single-layer network with 100 output neurons employing a winner-take-all (WTA) mechanism followed by a statistical output classifier. The network was trained with simplified spike-timing-dependent plasticity (STDP) using a leaky integrate-and-fire neuron model [69, 70].
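The network just described, a single layer of leaky integrate-and-fire neurons with hard winner-take-all and a simplified STDP rule, can be sketched in a few lines. The sizes, time constants, and learning rates below are illustrative assumptions, not the paper's parameters (which include 100 output neurons):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; the paper's network and STDP constants are not reproduced here.
n_in, n_out = 64, 10
w = rng.uniform(0.2, 0.8, (n_out, n_in))

def present(rates, steps=100, dt=1e-3, tau=20e-3,
            v_th=1.0, a_plus=0.01, a_minus=0.012):
    # Drive the layer with Poisson input spikes, let the first output
    # neuron to reach threshold win (hard WTA), and apply a pair-based
    # STDP update to the winner's weights.
    v = np.zeros(n_out)                # LIF membrane potentials
    last_pre = np.full(n_in, -np.inf)  # last presynaptic spike times
    for step in range(steps):
        t = step * dt
        pre = rng.random(n_in) < rates * dt        # Poisson input spikes
        last_pre[pre] = t
        v += dt * (-v / tau) + w @ pre             # leaky integration
        winner = np.argmax(v)
        if v[winner] >= v_th:
            recent = (t - last_pre) < 10e-3        # inputs within 10 ms
            w[winner] += np.where(recent, a_plus, -a_minus)  # STDP update
            np.clip(w[winner], 0.0, 1.0, out=w[winner])
            v[:] = 0.0                             # reset whole layer (WTA)
            return winner
    return None
```

Presenting rate-coded digit images repeatedly makes individual output neurons specialize on recurring input patterns; a statistical classifier then maps winning neurons to digit labels, as in the paper.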