Hi FF, yes AstraZeneca has deep pockets and they appear to be targeting AI in disease detection and drug manufacturing as a minimum. Interesting, for example, that they've had a relationship since 2020 with Qure.ai, an Indian startup whose detection tech targets lung cancer, which is obviously right in the Biotome/NaNose wheelhouse.
Hopefully others with longer attention spans than I can come up with something more solid
LAGUNA HILLS, Calif., February 28, 2022--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of neuromorphic AI chips and IP, today announced the appointment of Antonio J. Viana as Chairman of the Board of Directors, effective immediately. Mr. Viana, who is currently a BrainChip non-executive director and chair of the Remuneration and Nomination Committee, replaces Emmanuel Hernandez who is retiring and stepping down from the board...
I'd throw you a like but I have no idea what you are talking about

"The next generation computer technology is expected to solve problems at the exascale with 1018 calculations each second."

This is what happens when your exascale computer has a prolapsed superscript.
"The next generation computer technology is expected to solve problems at the exascale with 1018 calculations each second. "
This is what happens when your exascale computer has a prolapsed superscript.
Haha

Going off the subject, and I guess this is you, but I did just piss myself reading your reply

View attachment 4386
That would be a consideration in using Akida in conjunction with the Huawei processor in the eX3 research project over in the Huawei thread created by @Fullmoonfever.

PS: Probably just as important is the issue of overheating... less power, less heat.
Mmm, is that something I need to see my doctor about?

I'd throw you a like but I have no idea what you are talking about
Probably don't need to dig any deeper than the following extract but I will:

"The next generation computer technology is expected to solve problems at the exascale with 1018 calculations each second."

This is what happens when your exascale computer has a prolapsed superscript.
"The next generation computer technology is expected to solve problems at the exascale with 1018 calculations each second. "
This is what happens when your exascale computer has a prolapsed superscript.
Something went wrong with posting and cannot get edit to work on it but I think it can be understood. FF

Probably don't need to dig any deeper than the following extract but I will:
“The next generation computer technology is expected to solve problems at the exascale with 1018 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices”
Abstract
Modern computation based on the von Neumann architecture is today a mature cutting-edge science. In the Von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation computer technology is expected to solve problems at the exascale with 1018 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices”
20 to 30 megawatts of power - hold this thought -
View attachment 4387

https://www.quora.com › How-man...
How many houses does 1MW-hr of electricity power ...
28 Mar 2015 · 10 answers
The short answer is about 725,000 homes. But this assumes that consumption is steady, without peaks in the day time or during air”
This number of houses for just one megawatt varies on a Google search down to 650 homes being powered.
Suffice to say, 20 to 30 megawatts of power would fry enough eggs to feed the world for centuries, all so we do not have to place a key in a lock and perform a host of other menial tasks that we are too advanced to do anymore.
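For anyone who wants to sanity-check the megawatt arithmetic, here is a rough back-of-the-envelope sketch. The 1.2 kW average household draw is purely my own illustrative assumption (the search results above quote very different figures), so treat the output as indicative only:

```python
# Rough sanity check of the megawatts-to-households arithmetic above.
# The 1.2 kW average household draw is an illustrative assumption, not a
# figure from the abstract or the Quora snippet quoted earlier.

AVERAGE_HOUSEHOLD_DRAW_KW = 1.2  # assumed continuous average draw per home

def homes_powered(megawatts, household_kw=AVERAGE_HOUSEHOLD_DRAW_KW):
    """Return how many average homes a given continuous power supply could cover."""
    return int(megawatts * 1000 / household_kw)

for mw in (20, 30):
    print(f"{mw} MW covers roughly {homes_powered(mw):,} average homes")
# Under the assumed 1.2 kW figure, one exascale machine's 20-30 MW is on the
# order of a small city's worth of households running around the clock.
```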
Clearly this is beyond unsustainable and AKIDA at the EDGE is compulsory.
My opinion only DYOR
FF
AKIDA BALLISTA
Hi Quattojos,
Mr. Viana is currently a non-executive director of Arteris Inc, a leading provider of network-on-chip (NoC) interconnect, and has been on their board since 2016.
Get a haircut and get a real job

Second that @robsmark, I wasn't initially sold on Sean either, but early signs are very promising: articulate, intelligent, polished, passionate and driven. But he still needs to get a decent haircut
George Thorogood
It might be helpful for the gamblers that leave the children in the car when visiting the casino $$$$ Lol

I hope, indeed I pray, I never see the day when a parent has the temerity to say to police ‘I know I forgot to bring in my infant child a week ago from the car but the AI did not remind me.’
My opinion only DYOR
FF
AKIDA BALLISTA
Hi Quattojos,
I think there are some synergies between Akida and Arteris.
They have a system for designing interconnections between nodes which is similar to the interconnection of nodes in Akida.
Arteris: US11082327B2 System and method for computational transport network-on-chip (NoC)
View attachment 4388
A system and method are disclosed for performing operations on data passing through the network to reduce latency. The overall system allows data transport to become an active component in the computation, thereby improving the overall system latency, bandwidth, and/or power.
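As a rough illustration of what "performing operations on data passing through the network" could look like, here is a toy sketch of my own (not Arteris code, and not drawn from the patent claims themselves): packets are handed from link to link, and one hop applies a transform in transit so the destination receives pre-processed data.

```python
# Toy sketch (my own illustration, not Arteris code) of the idea in the patent
# abstract: let the transport fabric do work on packets in flight, so the
# endpoint receives partially processed data and overall latency drops.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Packet:
    payload: List[int]

class NocLink:
    """One hop in the network; optionally applies a transform as data passes."""
    def __init__(self, transform: Optional[Callable[[List[int]], List[int]]] = None):
        self.transform = transform

    def forward(self, packet: Packet) -> Packet:
        if self.transform:
            packet = Packet(self.transform(packet.payload))
        return packet

# Route a packet through three hops; the middle hop reduces the payload to a
# running sum in transit, so the destination only has to read the final value.
route = [NocLink(), NocLink(lambda p: [sum(p)]), NocLink()]
pkt = Packet([3, 1, 4, 1, 5])
for link in route:
    pkt = link.forward(pkt)
print(pkt.payload)  # [14] - the computation happened "on the way", not at the endpoint
```

A real NoC obviously deals with flits, arbitration and coherency, none of which this sketch attempts; it is only meant to show the "compute while transporting" concept.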
I wonder if Arteris was used in the design of the improved commercial version of Akida 1000.
Realistic Retinas Make Better Bionic Eyes
Following nature’s example more closely could lead to better visual sensors
EDD GENT
23 MAR 2022
New visual sensors inspired by the human eye could help the blind see again and provide powerful new ways for machines to sense the world around them. Recent research shows that more faithfully copying nature’s hardware could be the key to replicating its powerful capabilities.
Efforts to build bionic eyes have been underway for several decades, with much of the early research focused on creating visual prostheses that could replace damaged retinas in humans. But in recent years, there’s been growing recognition that the efficient and adaptable way in which the eye processes information could prove useful in applications where speed, flexibility, and power constraints are major concerns. Among these are robotics and the Internet of things.
Now, a pair of new research papers describe significant strides toward replicating some of the eye's capabilities by more closely imitating the function of the retina—the collection of photoreceptors and neurons at the back of the eye responsible for converting light into visual signals for the brain. “It’s a very exciting extension from where we were before,” says Hongrui Jiang, a professor of engineering at the University of Wisconsin–Madison. “These two papers [explain research aimed at] trying to mimic the natural visual system’s performance, but on the retina level right at the signal-filtering and signal-processing stage.”
One of the most compelling reasons to do this is the retina’s efficiency. Most image sensors rely on components called photodiodes, which convert light into electricity, says Khaled Salama, professor of electrical and computer engineering at King Abdullah University of Science and Technology (KAUST), but photodiodes constantly consume electricity, even when they’re on standby, which leads to high energy use. In contrast, the photoreceptors in the retina are passive devices that convert incoming light into electrical signals that are then sent to the brain. In an effort to recreate this kind of passive light-sensing capability, Salama’s KAUST team turned to an electrical component that doesn’t need a constant source of power—the capacitor.
“The problem is that capacitors are not sensitive to light,” says Salama. “So we decided to embed a light-sensitive material inside the capacitor.” The team sandwiched a layer of perovskite—a material prized for its electrical and optical properties—between two electrodes to create a capacitor whose ability to store energy, or capacitance, changed in proportion to the intensity of the light to which it is exposed. The researchers found that the resulting device mimicked the characteristics of the rod-cell photoreceptors found in the retina.
To see if the devices they created could be used to make a practical image sensor, the team fabricated a 100-by-100 array of them, then wired them up to simple circuits that converted the sensors’ change in capacitance into a string of electrical pulses, similar to the spikes of neural activity that rod cells use to transmit visual information to the brain. In a paper published in the February issue of Light: Science & Applications, they showed that a special kind of artificial neural network could learn how to process these spikes and recognize handwritten numbers with an accuracy of roughly 70 percent.
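For what it's worth, here is a tiny sketch of my own of the kind of rate coding such a readout stage performs (brighter light, more pulses); this is only an illustration of the principle, not the KAUST group's actual encoding scheme:

```python
# Illustrative sketch only (not the KAUST implementation): rate-code a light
# intensity into a binary spike train, the way the article describes pulses
# being generated for a spiking neural network to process.

import random

def encode_intensity_to_spikes(intensity, timesteps=100):
    """Emit a spike train whose firing rate tracks a normalised intensity (0..1)."""
    return [1 if random.random() < intensity else 0 for _ in range(timesteps)]

dim_pixel = encode_intensity_to_spikes(0.1)
bright_pixel = encode_intensity_to_spikes(0.9)
print(sum(dim_pixel), "spikes for the dim pixel vs", sum(bright_pixel), "for the bright one")
# A spiking neural network downstream classifies patterns of such trains,
# e.g. the handwritten digits mentioned in the article.
```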
An array of the bio-inspired sensors produces strings of electrical pulses in response to light, which are then processed by a spiking neural network. [Image: Dr. Mani Teja Vijjapu/King Abdullah University of Science and Technology]
Thanks to its incredibly low energy requirements, Salama says future versions of the KAUST team’s bionic eye could be a promising solution for power-constrained applications like drones or remote camera systems. “The best application for something like this is security, because often nothing is happening,” he says. “You are wasting a lot of power to take images and take videos and process them to figure out that there is nothing happening.”
Another powerful capability of our eyes is the ability to rapidly adapt to changing light conditions. Image sensors can typically operate only within a limited range of illuminations, says Yang Chai, an associate professor of materials science at the Hong Kong Polytechnic University. Because of this, they require complex workarounds like optical apertures, adjustable exposure times, or complex postprocessing to deal with varying real-world light conditions. By contrast (pun intended), when you transition from a dark cinema hall to a brightly lit lobby, it takes only a short while for your eyes to adjust automatically. That’s thanks to a mechanism known as visual adaptation, in which the sensitivity of photoreceptors changes automatically depending on the level of illumination.
In an effort to mimic that adaptability, Chai and his colleagues designed a new kind of image sensor whose light sensitivity can be modulated by applying different voltages to it. In a paper in Nature Electronics, his team showed that an array of these sensors could operate over an even broader range of illuminations than the human eye. They also paired the array with a neural network and showed that the system’s ability to recognize handwritten numbers improved drastically as the sensors adapted, going from 9.5 percent to 96.1 percent accuracy as it adjusted to bright light and 38.6 percent to 96.9 percent as it adjusted to darkness. These capabilities could be very useful for machines that have to operate in a wide range of lighting conditions. One application for which it will be quite helpful, says Yang, is in a self-driving car, which has to keep track of its position with respect to other objects on the road as it enters and exits a dark tunnel.
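To make the adaptation idea a little more concrete, here is a loose sketch of my own (not the Nature Electronics device model): a gain term is nudged each step so the sensor's average output drifts back toward a set point when the scene suddenly brightens.

```python
# Toy sketch of visual adaptation (my own illustration, not the paper's model):
# scale pixel sensitivity toward a target output level so the usable range
# shifts with the scene, like eyes adjusting from a dark cinema to a bright lobby.

def adapt_gain(current_gain, scene_brightness, target=0.5, rate=0.05):
    """Nudge the sensor gain so the average output drifts back toward `target`."""
    error = target - scene_brightness * current_gain
    return max(0.0, current_gain + rate * error)

gain = 1.0                 # gain tuned for a dark scene
brightness = 5.0           # the lobby is far brighter than the gain expects
for step in range(10):
    gain = adapt_gain(gain, brightness)
    print(f"step {step}: gain={gain:.3f}, output={brightness * gain:.3f}")
# The output climbs back toward the target level as the gain adapts, loosely
# mirroring the accuracy recovery the article reports as the sensors adjust.
```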
While there’s still a long way before bionic eyes approach the capabilities of their biological cousins, Jiang says the kinds of in-sensor adaptation and signal processing achieved in these papers show why researchers should be paying more attention to the finer detail of how the retina achieves its impressive capabilities. “The retina is an amazing organ,” he says. “We’re only just scratching the surface.”