What surprises me is that Mercedes states they are using AKIDA in the EQXX and they are believed. Why anybody here thought that the EQS contained AKIDA baffles me.
It has just been announced in the EQXX, a concept car. Features from concept cars take time to trickle down. The EQS is ready for mass production and would have been designed and engineered well before the EQXX was even a thing.
We need to be realistic here.
Live link for EQS SUV.
FF, stick with your computer as you dish out incredible stuff like this!! On your mobile phone you may be distracted.

If you want to stop worrying about competitors and whether Brainchip is moving fast enough, I suggest reading the following article:
Toward Optoelectronic Chips That Mimic the Human Brain
Mon, 18 Apr 2022 19:00:01 +0000
The human brain, which is made up of some 86 billion neurons connected in a neural network, can perform remarkable feats of computing. Yet it consumes just a dozen or so watts. How does it do it?
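To make that power claim concrete, here is a rough back-of-the-envelope check in Python (the 20 W figure and the 1 Hz average firing rate are common textbook assumptions, not numbers from the article):

```python
# Rough arithmetic behind the "dozen or so watts" claim.
# Assumptions (not from the article): ~20 W total brain power,
# 86e9 neurons, average firing rate of ~1 Hz per neuron.
NEURONS = 86e9
BRAIN_POWER_W = 20.0
AVG_FIRING_RATE_HZ = 1.0

power_per_neuron = BRAIN_POWER_W / NEURONS                 # watts per neuron
energy_per_spike = power_per_neuron / AVG_FIRING_RATE_HZ   # joules per spike

print(f"Power per neuron: {power_per_neuron:.2e} W")   # ~2.3e-10 W
print(f"Energy per spike: {energy_per_spike:.2e} J")   # ~2.3e-10 J
```

Sub-nanowatt budgets per neuron are the scale any brain-like hardware has to aim at.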
IEEE Spectrum recently spoke with Jeffrey Shainline, a physicist at the National Institute of Standards and Technology in Boulder, Colo., whose work may shine some light on this question. Shainline is pursuing an approach to computing that can power advanced forms of artificial intelligence—so-called spiking neural networks, which more closely mimic the way the brain works compared with the kind of artificial neural networks that are widely deployed now. Today, the dominant paradigm uses software running on digital computers to create artificial neural networks that have multiple layers of neurons. These “deep” artificial neural networks have proved immensely successful, but they require enormous computing resources and energy to run. And those energy requirements are growing quickly: in particular, the calculations involved in training deep neural networks are becoming unsustainable.
Researchers have long been tantalized by the prospect of creating artificial neural networks that more closely reflect what goes on in networks of biological neurons, where, as one neuron accepts signals from multiple other neurons, it may reach a threshold level of activation that causes it to “fire,” meaning that it produces an output signal spike that is sent to other neurons, perhaps inducing some of them to fire as well.
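For anyone who wants that firing mechanism in concrete terms, here is a minimal leaky integrate-and-fire sketch in Python (the threshold, leak time constant, and synaptic weight are illustrative values, not from the article):

```python
# Minimal leaky integrate-and-fire neuron: incoming spikes raise the
# membrane potential; when it crosses threshold the neuron "fires" an
# output spike and resets, as described above.
import math

def lif_neuron(input_spikes, threshold=1.0, leak_tau=10.0, weight=0.3):
    """Return output spike times given input spike flags per time step."""
    potential = 0.0
    output_spikes = []
    for t, spike_in in enumerate(input_spikes):
        potential *= math.exp(-1.0 / leak_tau)   # passive leak each step
        if spike_in:
            potential += weight                  # synaptic input
        if potential >= threshold:               # threshold reached: fire
            output_spikes.append(t)
            potential = 0.0                      # reset after firing
    return output_spikes

# Four closely spaced input spikes push the potential over threshold,
# producing a single output spike at step 3.
print(lif_neuron([1, 1, 1, 1, 0, 0, 1, 0]))  # -> [3]
```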
“Compared with semiconductors, you can fit many more neurons and synapses on a wafer because you can stack in the third dimension. You can have maybe 10 layers, and that’s a big advantage.”
—Jeffrey Shainline, NIST
A few companies have produced chips for implementing electronic spiking neural networks. Shainline’s research focuses on using superconducting optoelectronic elements in such networks. His work has recently advanced from investigating theoretical possibilities to performing hardware experiments. He tells Spectrum about these latest developments in his lab.
I’ve heard for years about neuromorphic processing chips from IBM and elsewhere, but I don’t get a sense that they have gained traction in the practical world. Am I wrong?
Jeffrey Shainline: Good question: Spiking neural networks—what are they good for?
IBM’s TrueNorth chip from 2014 made a big splash because it was new and different and exciting. More recently, Intel has been doing great things with its Loihi chip. Intel now has its second generation of that. But whether these chips will solve real problems remains a big question.
We know that biological brains can do things that are unmatched by digital computers. Yet these spiking neuromorphic chips don’t immediately knock our socks off. Why not? I don’t think that’s an easy question to answer.
One thing that I’ll point out is that one of these chips doesn’t have 10 billion neurons (roughly the number of neurons in a person’s brain). Even a fruit-fly brain has about 150,000 neurons. Intel’s most recent Loihi chip doesn’t even have that.
Knowing that they are struggling with what they’re going to do with this chip, the folks at Intel have done something clever: They’re giving academics and startups cheap access to their chip—for free in a lot of cases. They’re crowdsourcing creativity in hopes that somebody will find a killer app.
What would you guess the first killer app will be?
Shainline: Maybe a smart speaker. A smart speaker needs to be always on, waiting for you to say some keyword or phrase. That normally requires a lot of power. But studies have shown that a very simple spiking neural algorithm running on one simple chip can do this while using almost no power.
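A toy illustration of why event-driven (spiking) encoding suits always-on listening, using simple delta encoding; the threshold and sample values are made up for illustration:

```python
# With delta (event) encoding, a mostly silent microphone produces
# almost no events, so the downstream network does almost no work.

def delta_encode(samples, threshold=0.1):
    """Emit an event only when the signal moves by more than `threshold`."""
    events, last = [], 0.0
    for t, x in enumerate(samples):
        if abs(x - last) > threshold:
            events.append((t, +1 if x > last else -1))
            last = x
    return events

silence = [0.01, 0.02, 0.01, 0.00, 0.02] * 20    # background noise
speech  = [0.0, 0.5, -0.4, 0.6, -0.5, 0.3, 0.0]  # a burst of audio

print(len(delta_encode(silence)), "events during silence")  # 0
print(len(delta_encode(speech)), "events during speech")    # 6
```

Work (and therefore power) scales with events, not with time spent listening, which is the essence of the low-power claim.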
Tell me about the optoelectronic devices that you and your NIST colleagues are working on and how they might improve spiking neural networks.
Shainline: First, you need to understand that light is going to be the best way that you can communicate between neurons in a spiking neural system. That’s because nothing can go faster than light. So using light for communication will allow you to have the biggest spiking neural network possible.
But it’s not enough to just send signals fast. You also need to do it in an energy-efficient manner. So once you’ve chosen to send signals in the form of light, the best energy efficiency you can achieve is to send just one photon from a neuron to each of its synaptic connections. You can’t make an amount of light any smaller than that.
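To put that single-photon floor in numbers: the energy of one photon is E = hc/λ. A quick calculation, assuming a 1550 nm telecom wavelength (the article doesn’t specify one):

```python
# Energy of a single photon, E = h * c / wavelength.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

wavelength = 1550e-9                 # assumed telecom wavelength, m
photon_energy = H * C / wavelength   # joules

print(f"One photon at 1550 nm: {photon_energy:.2e} J")  # ~1.3e-19 J
# For scale, switching a CMOS logic gate with its wiring is often quoted
# around 1e-15 J (an assumption for comparison, not from the article),
# roughly four orders of magnitude above this single-photon floor.
```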
And the superconducting detectors we are investigating are the best there is when it comes to detecting single photons of light—best in terms of how much energy they dissipate and how fast they can operate.
You could also build a spiking neural network that uses room-temperature semiconductors to send and receive optical signals, though. And right now, it isn’t obvious which strategy is best. But because I’m biased, let me share some reasons to pursue the superconducting approach.
Admittedly, using superconducting elements imposes a lot of overhead—you have to build everything in a cryogenic environment so that your devices remain cold enough to superconduct. But once you’ve done that, you can easily add another crucial element: something called a Josephson junction.
Josephson junctions are the key building block for superconducting computing hardware, whether they’re for superconducting qubits in a quantum computer, superconducting digital logic gates, or superconducting neurons.
Once you’ve decided to use light for communication and superconducting single-photon detectors to sense that light, you will have to build your computer in a cryogenic environment. So without further overhead, you now have Josephson junctions at your disposal.
And this brings a benefit that’s not obvious: It turns out that it is easier to integrate Josephson junctions in three dimensions than it is to integrate [MOSFETs—metal oxide semiconductor field-effect transistors] in three dimensions. That’s because with semiconductors, you fabricate MOSFETs on the lower plane of a silicon wafer. Then you put all your wiring layers up on top. And it becomes essentially impossible to put another layer of MOSFETs on top of that with standard processing techniques.
In contrast, it’s not hard to fabricate Josephson junctions on multiple planes. Two different research groups have demonstrated that. The same is true for the single-photon detectors that we’ve been talking about.
This is a key benefit when you consider what will be needed to allow these networks to scale into something resembling a brain in complexity. Compared with semiconductors, you can fit many more neurons and synapses on a wafer because you can stack in the third dimension. You can have maybe 10 layers, and that’s a big advantage.
The theoretical implications of this approach to computing are impressive. But what kind of hardware have you and your colleagues actually built?
Shainline: One of our most exciting recent results is the demonstration of the integration of superconducting single-photon detectors with Josephson junctions. What that allows us to do is receive single photons of light and use that to switch a Josephson junction and produce an electrical signal and then integrate the signals from many photon pulses.
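Here is a loose software analogy of that receive, switch, and integrate chain (the pulse size and leak rate are made-up numbers; real Josephson-junction dynamics are far richer than this sketch suggests):

```python
# Toy model of the detect-and-integrate chain described above: each
# detected photon triggers a fixed current pulse (standing in for a
# Josephson junction switching), and pulses accumulate in a leaky
# integrator (standing in for the synaptic storage loop).

def integrate_photon_pulses(photon_times, steps=50, pulse=1.0, leak=0.98):
    arrivals = set(photon_times)
    signal, trace = 0.0, []
    for t in range(steps):
        signal *= leak                # stored signal slowly decays
        if t in arrivals:
            signal += pulse           # one pulse per detected photon
        trace.append(signal)
    return trace

trace = integrate_photon_pulses([5, 8, 9, 30])
print(f"peak after photon burst: {max(trace):.2f}")  # close pulses pile up
print(f"value at final step:     {trace[-1]:.2f}")   # then leak away
```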
We’ve recently demonstrated that technology here in our lab. We have also fabricated on a chip light sources that work at low temperatures. And we’ve spent a lot of time on the waveguides needed to carry light signals around on a chip, too.
I mentioned the 3D-integration—the stacking—that’s possible with this kind of computing technology. But if you’re going to have each neuron communicate to a few thousand other neurons, you would also need some way for optical signals to transition without loss from a waveguide in one layer to a waveguide in another layer. We’ve demonstrated that with as many as three stacked planes of these waveguides and believe we could extend that to 10 or so layers.
When you say “integration,” do you just mean that you’ve wired these components together, or do you have everything working on one chip?
Shainline: We have indeed combined superconducting single-photon detectors with Josephson junctions on one chip. That chip gets mounted on a little printed circuit board that we put inside a cryostat to keep it cold enough to remain superconducting. And we use fiber optics for communication from room temperature to low temperature.
Why are you so keen to pursue this approach, and why aren’t others doing the same?
Shainline: There are some pretty strong theoretical arguments as to why this approach to neuromorphic computing could be quite a game changer. But it requires interdisciplinary thinking and collaboration, and right now, we’re really the only group doing specifically this.
I would love it if more people got into the mix. My goal as a researcher is not to be the one that does all this stuff first. I'd be very pleased if researchers from different backgrounds contributed to the development of this technology.
My opinion only DYOR
FF
AKIDA BALLISTA
FF, I see no mention, yet again, of our exciting Akida chip. Maybe I will send off a friendly email to Jeff Shainline asking "please explain".
About NVISO
NVISO is an Artificial Intelligence company founded in 2009 and headquartered at the Innovation Park of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. Its mission is to help teach machines to understand people and their behavior to make autonomous machines safe, secure, and personalized for humans. As a leader in human behavioral AI, it provides robust embedded software solutions that can sense, comprehend, and act upon human behavior in real-world environments deployed at the deep edge. It achieves this through real-time perception and observation of people and objects in contextual situations combined with the reasoning and semantics of human behavior based on trusted scientific research. NVISO’s technology is made accessible through ready-to-use AI solutions addressing Smart Mobility and Smart Health and Living applications (in-cabin monitoring systems, health assessments, and companion robot sensing) with a key focus on the deep and extreme edge with ultra-low power processing capabilities. With a singular focus on how to apply the most advanced and robust technology to industry and societal problems that matter, NVISO’s solutions help advance human potential through more robust and rich human machine interactions. ir.nviso.ai
HI Dhm
This reminds me of the story of Archer Materials (AXE). They have figured out a way of doing quantum computing at room temperature, and like BRN, Mr Shainline may not be aware of this?
SMART WEALTH: how appropriate, and a beneficial use for all visionary Brainchip investors. I mostly hate marketing spin, but I must say this one is for a T-shirt: "involved in Smart Health, Smart Mobility, Smart Wealth".
Human Behaviour Artificial Intelligence | NVISO
NVISO is a global leader in human behaviour artificial intelligence (AI) software serving manufacturers of user-centric products and services worldwide. www.nviso.ai