BRN Discussion Ongoing

Taproot

Regular
  • Like
  • Love
Reactions: 8 users

Deleted member 118

Guest
Hi FF, yes Astra Z has big pockets and they appear to be targeting AI in disease detection and drug manufacturing as a minimum. Interesting that they've had a relationship since 2020 with, for example, Qure.ai, an Indian startup targeting tech for the detection of lung cancer, which is obviously right in the biotome/NaNose wheelhouse.

Hopefully others with longer attention spans than I can come up with something more solid.


Going off the subject, and I guess this is you, but I did just piss myself reading your reply

E1E870B5-5D9D-439D-B038-23EF93A7D8A9.png
 
  • Haha
  • Like
Reactions: 19 users

Quatrojos

Regular
LAGUNA HILLS, Calif., February 28, 2022--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of neuromorphic AI chips and IP, today announced the appointment of Antonio J. Viana as Chairman of the Board of Directors, effective immediately. Mr. Viana, who is currently a BrainChip non-executive director and chair of the Remuneration and Nomination Committee, replaces Emmanuel Hernandez who is retiring and stepping down from the board...

Mr. Viana is currently a non-executive director of Arteris Inc, a leading provider of network-on-chip (NoC) interconnect and has been on their board since 2016.
 
  • Like
  • Fire
  • Love
Reactions: 32 users

Realistic Retinas Make Better Bionic Eyes

Following nature’s example more closely could lead to better visual sensors

EDD GENT
23 MAR 2022
[Image: shape of an eye made of different colored dots. ISTOCKPHOTO]
New visual sensors inspired by the human eye could help the blind see again and provide powerful new ways for machines to sense the world around them. Recent research shows that more faithfully copying nature’s hardware could be the key to replicating its powerful capabilities.
Efforts to build bionic eyes have been underway for several decades, with much of the early research focused on creating visual prostheses that could replace damaged retinas in humans. But in recent years, there’s been growing recognition that the efficient and adaptable way in which the eye processes information could prove useful in applications where speed, flexibility, and power constraints are major concerns. Among these are robotics and the Internet of things.
Now, a pair of new research papers describe significant strides toward replicating some of the eye's capabilities by more closely imitating the function of the retina—the collection of photoreceptors and neurons at the back of the eye responsible for converting light into visual signals for the brain. “It’s a very exciting extension from where we were before,” says Hongrui Jiang, a professor of engineering at the University of Wisconsin–Madison. “These two papers [explain research aimed at] trying to mimic the natural visual system’s performance, but on the retina level right at the signal-filtering and signal-processing stage.”
One of the most compelling reasons to do this is the retina’s efficiency. Most image sensors rely on components called photodiodes, which convert light into electricity, says Khaled Salama, professor of electrical and computer engineering at King Abdullah University of Science and Technology (KAUST), but photodiodes constantly consume electricity, even when they’re on standby, which leads to high energy use. In contrast, the photoreceptors in the retina are passive devices that convert incoming light into electrical signals that are then sent to the brain. In an effort to recreate this kind of passive light-sensing capability, Salama’s KAUST team turned to an electrical component that doesn’t need a constant source of power—the capacitor.
“The problem is that capacitors are not sensitive to light,” says Salama. “So we decided to embed a light-sensitive material inside the capacitor.” The team sandwiched a layer of perovskite—a material prized for its electrical and optical properties—between two electrodes to create a capacitor whose ability to store energy, or capacitance, changed in proportion to the intensity of the light to which it is exposed. The researchers found that the resulting device mimicked the characteristics of the rod-cell photoreceptors found in the retina.
To see if the devices they created could be used to make a practical image sensor, the team fabricated a 100-by-100 array of them, then wired them up to simple circuits that converted the sensors’ change in capacitance into a string of electrical pulses, similar to the spikes of neural activity that rod cells use to transmit visual information to the brain. In a paper published in the February issue of Light: Science & Applications, they showed that a special kind of artificial neural network could learn how to process these spikes and recognize handwritten numbers with an accuracy of roughly 70 percent.

An array of the bio-inspired sensors produces strings of electrical pulses in response to light, which are then processed by a spiking neural network. DR. MANI TEJA VIJJAPU/KING ABDULLAH UNIVERSITY OF SCIENCE AND TECHNOLOGY
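For anyone who wants to see the shape of that pipeline, here is a minimal sketch (my own illustration, not the KAUST team's code) of turning a slowly changing capacitance into a sparse spike train by delta modulation; every constant and function name below is an assumption for illustration only:

```python
# Toy sketch of the capacitance-to-spikes readout described above.
# NOT the KAUST implementation; all models and thresholds are assumed.
import numpy as np

def capacitance_trace(light, c_dark=1.0, gain=0.5, tau=5.0, dt=1.0):
    """Toy pixel model: capacitance relaxes toward a light-dependent target."""
    c = np.empty(len(light))
    c_prev = c_dark
    for i, lum in enumerate(light):
        c_prev += (c_dark + gain * lum - c_prev) * (dt / tau)
        c[i] = c_prev
    return c

def delta_encode(trace, threshold=0.05):
    """Emit a +1/-1 spike whenever the trace has moved by `threshold`
    since the last spike; stay silent while nothing changes."""
    spikes = np.zeros(len(trace), dtype=int)
    ref = trace[0]
    for i, v in enumerate(trace):
        if v - ref >= threshold:
            spikes[i], ref = 1, v
        elif ref - v >= threshold:
            spikes[i], ref = -1, v
    return spikes

# Light steps ON at t=20 and OFF at t=70: spikes cluster around the changes
# and the "sensor" is quiet in between, which is the property that makes
# this kind of front end attractive for always-on, low-power vision.
light = np.zeros(100)
light[20:70] = 1.0
spikes = delta_encode(capacitance_trace(light))
print("spike times:", np.nonzero(spikes)[0].tolist())
```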

Thanks to its incredibly low energy requirements, Salama says future versions of the KAUST team’s bionic eye could be a promising solution for power-constrained applications like drones or remote camera systems. “The best application for something like this is security, because often nothing is happening,” he says. “You are wasting a lot of power to take images and take videos and process them to figure out that there is nothing happening.”
Another powerful capability of our eyes is the ability to rapidly adapt to changing light conditions. Image sensors can typically operate only within a limited range of illuminations, says Yang Chai, an associate professor of materials science at the Hong Kong Polytechnic University. Because of this, they require complex workarounds like optical apertures, adjustable exposure times, or complex postprocessing to deal with varying real-world light conditions. By contrast (pun intended), when you transition from a dark cinema hall to a brightly lit lobby, it takes only a short while for your eyes to adjust automatically. That’s thanks to a mechanism known as visual adaptation, in which the sensitivity of photoreceptors changes automatically depending on the level of illumination.
In an effort to mimic that adaptability, Chai and his colleagues designed a new kind of image sensor whose light sensitivity can be modulated by applying different voltages to it. In a paper in Nature Electronics, his team showed that an array of these sensors could operate over an even broader range of illuminations than the human eye. They also paired the array with a neural network and showed that the system’s ability to recognize handwritten numbers improved drastically as the sensors adapted, going from 9.5 percent to 96.1 percent accuracy as it adjusted to bright light and from 38.6 percent to 96.9 percent as it adjusted to darkness. These capabilities could be very useful for machines that have to operate in a wide range of lighting conditions. One application for which it will be quite helpful, says Chai, is in a self-driving car, which has to keep track of its position with respect to other objects on the road as it enters and exits a dark tunnel.
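A toy model makes the adaptation mechanism easy to see. This is my own sketch, not Chai's device physics: a Weber-style gain term slowly tracks the ambient intensity, so the normalized response saturates at a lighting change and then settles back into a usable mid-range; the time constant and functional form are assumptions:

```python
# Toy visual-adaptation model (my illustration, not the Nature Electronics
# device): response = I / (I + a), where `a` low-pass filters recent intensity.
import numpy as np

def adaptive_response(intensity, tau_adapt=50.0, dt=1.0, eps=1e-9):
    a = intensity[0]                       # adaptation level starts at ambient
    out = np.empty(len(intensity))
    for i, I in enumerate(intensity):
        out[i] = I / (I + a + eps)         # bounded in [0, 1)
        a += (I - a) * (dt / tau_adapt)    # slow tracking of ambient light
    return out

# Dark cinema (I = 1) to bright lobby (I = 1000): the response saturates at
# the step, then settles back toward 0.5 as the sensor re-adapts, which is
# the same effect behind the accuracy recovery reported in the paper.
I = np.concatenate([np.full(100, 1.0), np.full(400, 1000.0)])
r = adaptive_response(I)
print(f"at the step: {r[100]:.3f}, after adapting: {r[-1]:.3f}")
```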
While there’s still a long way to go before bionic eyes approach the capabilities of their biological cousins, Jiang says the kinds of in-sensor adaptation and signal processing achieved in these papers show why researchers should be paying more attention to the finer details of how the retina achieves its impressive capabilities. “The retina is an amazing organ,” he says. “We’re only just scratching the surface.”
 
  • Like
  • Fire
Reactions: 14 users

Diogenese

Top 20
  • Like
  • Haha
  • Love
Reactions: 7 users

alwaysgreen

Top 20
"The next generation computer technology is expected to solve problems at the exascale with 1018 calculations each second. "

This is what happens when your exascale computer has a prolapsed superscript.
I'd throw you a like but I have no idea what you are talking about 😂
 
  • Haha
  • Like
Reactions: 11 users

Deleted member 118

Guest
"The next generation computer technology is expected to solve problems at the exascale with 1018 calculations each second. "

This is what happens when your exascale computer has a prolapsed superscript.


I think I’ll give prolapse a miss on the gif front
 
  • Haha
  • Like
Reactions: 12 users

Filobeddo

Guest
  • Haha
  • Like
Reactions: 4 users

Diogenese

Top 20
PS: Probably just as important is the issue of overheating... less power, less heat.
That would be a consideration in using Akida in conjunction with the Huawei processor in the eX3 research project over in the Huawei thread created by @Fullmoonfever.

And this ties in with the potential to use Akida in the cloud, not a market we are targeting, but can we help it if they are chasing us?
 
  • Like
  • Love
  • Fire
Reactions: 16 users

MrNick

Regular
I'd throw you a like but I have no idea what you are talking about 😂
Mmm, is that something I need to see my doctor about?
 
  • Haha
  • Like
Reactions: 4 users
"The next generation computer technology is expected to solve problems at the exascale with 1018 calculations each second. "

This is what happens when your exascale computer has a prolapsed superscript.
Probably don’t need to dig any deeper than the following extract but I will:

“The next generation computer technology is expected to solve problems at the exascale with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices”


"The next generation computer technology is expected to solve problems at the exascale with 1018 calculations each second. "

This is what happens when your exascale computer has a prolapsed superscript.


Abstract

Modern computation based on the von Neumann architecture is today a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation computer technology is expected to solve problems at the exascale with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices.

20 to 30 megawatts of power - hold this thought -
1650192366513.png https://www.quora.com › How-man...
How many houses does 1MW-hr of electricity power ...

28 Mar 2015 · 10 answers
The short answer is about 725 homes. But this assumes that consumption is steady, without peaks in the day time or during air…”

The number of homes powered by just one megawatt varies on a Google search, down to about 650 homes.
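As a back-of-envelope sanity check (my own arithmetic, with an assumed average household draw of about 1.2 kW), the hundreds-of-homes figures are the plausible ones, which puts a 20 to 30 MW exascale machine in small-town territory:

```python
# Back-of-envelope only; the per-home figure is an assumption. An average
# household draws roughly 1 to 1.5 kW averaged over a day, so 1 MW covers
# on the order of 700 to 1000 homes, consistent with the 650/725 figures
# quoted above.
avg_home_kw = 1.2  # assumed average continuous draw per home
for mw in (1, 20, 30):
    print(f"{mw} MW ≈ {mw * 1000 / avg_home_kw:,.0f} homes")
```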

Suffice to say, 20 to 30 megawatts of power would fry enough eggs to feed the world for centuries, all so we do not have to place a key in a lock and a host of other menial tasks that we are too advanced to do anymore.

Clearly this is beyond unsustainable and AKIDA at the EDGE is compulsory.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 18 users
Something went wrong with posting and I cannot get edit to work on it, but I think it can be understood. FF
 
  • Like
Reactions: 2 users

chapman89

Founding Member
Rocket has really been putting you to work lately FF 😂
 
  • Haha
  • Like
Reactions: 11 users

Diogenese

Top 20
LAGUNA HILLS, Calif., February 28, 2022--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of neuromorphic AI chips and IP, today announced the appointment of Antonio J. Viana as Chairman of the Board of Directors, effective immediately. Mr. Viana, who is currently a BrainChip non-executive director and chair of the Remuneration and Nomination Committee, replaces Emmanuel Hernandez who is retiring and stepping down from the board...

Mr. Viana is currently a non-executive director of Arteris Inc, a leading provider of network-on-chip (NoC) interconnect and has been on their board since 2016.
Hi Quatrojos,

I think there are some synergies between Akida and Arteris.

They have a system for designing interconnexions between nodes which is similar to the interconnexion of nodes in Akida.

Arteris: US11082327B2 System and method for computational transport network-on-chip (NoC)


1650193627835.png


A system and method are disclosed for performing operations on data passing through the network to reduce latency. The overall system allows data transport to become an active component in the computation, thereby improving the overall system latency, bandwidth, and/or power.

I wonder if Arteris was used in the design of the improved commercial version of Akida 1000.
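To make "transport as an active component in the computation" concrete, here is a toy sketch (my own illustration, not the patented Arteris design): instead of N packets converging on one node that then does all the arithmetic, the network folds a reduction into each hop, so the sink receives a single pre-combined value.

```python
# Toy reduction-in-the-network sketch (NOT the Arteris implementation):
# values are combined pairwise at each hop of the route, so arithmetic
# happens in transit rather than only at the destination node.
from typing import List

def route_and_reduce(values: List[float], fan_in: int = 2) -> float:
    level = list(values)
    hop = 0
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), fan_in):
            nxt.append(sum(level[i:i + fan_in]))  # reduction inside the "NoC"
        hop += 1
        print(f"hop {hop}: {nxt}")
        level = nxt
    return level[0]

# Eight node outputs arrive combined in log2(8) = 3 hops, instead of eight
# separate packets creating a hot spot at the sink: the latency, bandwidth
# and power win the patent abstract describes.
print("result:", route_and_reduce([1, 2, 3, 4, 5, 6, 7, 8]))
```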
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 11 users
Second that @robsmark, I wasn't initially sold on Sean either, but early signs are very promising: articulate, intelligent, polished, passionate and driven. But he still needs to get a decent haircut 😂

George Thorogood: "Get a haircut and get a real job"
 
  • Haha
  • Like
Reactions: 4 users
I hope, indeed I pray, I never see the day when a parent has the temerity to say to police ‘I know I forgot to bring my infant child in from the car a week ago, but the AI did not remind me.’

My opinion only DYOR
FF

AKIDA BALLISTA
It might be helpful for the gamblers who leave their children in the car when visiting the casino $$$$ Lol
 
  • Like
  • Sad
Reactions: 4 users

Diogenese

Top 20
Hi Quatrojos,

I think there are some synergies between Akida and Arteris.

They have a system for designing interconnexions between nodes which is similar to the interconnexion of nodes in Akida.

Arteris: US11082327B2 System and method for computational transport network-on-chip (NoC)


View attachment 4388

A system and method are disclosed for performing operations on data passing through the network to reduce latency. The overall system allows data transport to become an active component in the computation, thereby improving the overall system latency, bandwidth, and/or power.

I wonder if Arteris was used in the design of the improved commercial version of Akida 1000.

Took me a while to find this Akida node array for comparison with Arteris.

https://brainchipinc.com/akida-neural-processor-ip/
Neural-Processors-Mesh@2x.png
 
  • Like
  • Fire
  • Love
Reactions: 15 users

[Quote: “Realistic Retinas Make Better Bionic Eyes”, full article posted above]

FF

On the vision side.... a Dec 21 document by Simon Thorpe that I've just been trying to wade through and understand haha.

Can't attach as too big apparently but link below for anyone interested.

Neural Bases of Rapid Visual Information Processing
Simon Thorpe, Brain and Cognition Research Centre (CerCo) / BrainChip Inc, Toulouse, France

Speaking about Terabrain Vision? Next? BRN? From the last 2 pages.

1650198664394.png

1650198874770.png




Presentation PDF HERE
 
  • Like
  • Fire
  • Love
Reactions: 19 users

Quatrojos

Regular

Radio frequency (RF) was recently tacked onto the five senses, no?
 
  • Like
  • Fire
Reactions: 10 users

TechGirl

Founding Member

[Quote: “Realistic Retinas Make Better Bionic Eyes”, full article posted above]

Thanks FF

I really enjoyed reading that

Sooooo many advancements in so many areas all seem to be happening right now
 
  • Like
  • Fire
Reactions: 15 users