BRN Discussion Ongoing

Sirod69

bavarian girl ;-)
1650401325990.png
 
  • Like
Reactions: 9 users

Sirod69

bavarian girl ;-)
I found this post and another one. I don't think it's fake, because this man posts many things about BRN; he is really interested in BRN!!! Now I'm off to bed :sleep::sleep:
 
  • Like
Reactions: 7 users
Why anybody here thought that the EQS contained AKIDA baffles me.

It has just been announced in the EQXX, a concept car. Features from concept cars take time to trickle down. The EQS is ready for mass production and would have been designed and engineered well before the EQXX was even a thing.

We need to be realistic here.
What surprises me is that Mercedes states they are using AKIDA in the EQXX and they are believed.

Mercedes states that the features developed for EQXX will be available in 2024 and they are thought not to be telling the truth.

The Revolution will commence in 2024.

The Revolution, when you read the Mercedes fine print, will maximise the AKIDA technology across every aspect of the power train where power savings and intelligence are required.

It will be far more than just "Hey Mercedes" in 2024, and it will be an integrated system with Nvidia.

My opinion only DYOR - it is all here if you just look across these threads. The 1,000 Eyes have it.
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 42 users

MDhere

Regular
Live link for EQS SUV.


Yes yes yes, although I think if Akida were in it they would have announced it like they did with the EQXX.
 
Last edited:
  • Like
Reactions: 14 users
If you want to stop worrying about competitors and whether Brainchip is moving fast enough I suggest reading the following article:

Toward Optoelectronic Chips That Mimic the Human Brain
Mon, 18 Apr 2022 19:00:01 +0000
a-glowing-chip-sits-in-the-middle-of-a-colorfully-rendered-illustration-of-a-brain-made-of-connecting-lines.jpg


The human brain, which is made up of some 86 billion neurons connected in a neural network, can perform remarkable feats of computing. Yet it consumes just a dozen or so watts. How does it do it?

IEEE Spectrum recently spoke with Jeffrey Shainline, a physicist at the National Institute of Standards and Technology in Boulder, Colo., whose work may shine some light on this question. Shainline is pursuing an approach to computing that can power advanced forms of artificial intelligence—so-called spiking neural networks, which more closely mimic the way the brain works compared with the kind of artificial neural networks that are widely deployed now. Today, the dominant paradigm uses software running on digital computers to create artificial neural networks that have multiple layers of neurons. These “deep” artificial neural networks have proved immensely successful, but they require enormous computing resources and energy to run. And those energy requirements are growing quickly: in particular, the calculations involved in training deep neural networks are becoming unsustainable.

Researchers have long been tantalized by the prospect of creating artificial neural networks that more closely reflect what goes on in networks of biological neurons, where, as one neuron accepts signals from multiple other neurons, it may reach a threshold level of activation that causes it to “fire,” meaning that it produces an output signal spike that is sent to other neurons, perhaps inducing some of them to fire as well.
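
For anyone who wants to see that mechanism in code, here is a minimal Python sketch of a leaky integrate-and-fire neuron, the standard textbook model of the behaviour described above. The time constant, weight, and threshold are illustrative values I have chosen, not figures from the article or from any particular chip.

Code:
# Minimal leaky integrate-and-fire neuron. All constants are
# illustrative, not taken from the article or any real chip.
def lif_neuron(input_spikes, tau=20.0, v_thresh=1.0, weight=0.3, dt=1.0):
    v = 0.0              # membrane potential
    out = []
    for s in input_spikes:
        v += dt * (-v / tau) + weight * s  # leak toward rest, jump on input
        if v >= v_thresh:
            out.append(1)                  # threshold reached: fire a spike...
            v = 0.0                        # ...and reset
        else:
            out.append(0)
    return out

# A rapid burst of input spikes drives the neuron over threshold;
# sparse input leaks away without producing any output spike.
print(lif_neuron([1, 1, 1, 1, 1, 0, 0, 1, 0, 0]))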

“Compared with semiconductors, you can fit many more neurons and synapses on a wafer because you can stack in the third dimension. You can have maybe 10 layers, and that’s a big advantage.”
—Jeffrey Shainline, NIST

A few companies have produced chips for implementing electronic spiking neural networks. Shainline’s research focuses on using superconducting optoelectronic elements in such networks. His work has recently advanced from investigating theoretical possibilities to performing hardware experiments. He tells Spectrum about these latest developments in his lab.

I’ve heard for years about neuromorphic processing chips from IBM and elsewhere, but I don’t get a sense that they have gained traction in the practical world. Am I wrong?

Jeffrey Shainline: Good question: Spiking neural networks—what are they good for?

IBM’s TrueNorth chip from 2014 made a big splash because it was new and different and exciting. More recently, Intel has been doing great things with its Loihi chip. Intel now has its second generation of that. But whether these chips will solve real problems remains a big question.

We know that biological brains can do things that are unmatched by digital computers. Yet these spiking neuromorphic chips don’t immediately knock our socks off. Why not? I don’t think that’s an easy question to answer.

One thing that I’ll point out is that one of these chips doesn’t have 10 billion neurons (the human brain has roughly 86 billion). Even a fruit-fly brain has about 150,000 neurons. Intel’s most recent Loihi chip doesn’t even have that.

Knowing that they are struggling with what they’re going to do with this chip, the folks at Intel have done something clever: They’re giving academics and startups cheap access to their chip—for free in a lot of cases. They’re crowdsourcing creativity in hopes that somebody will find a killer app.

What would you guess the first killer app will be?

Shainline: Maybe a smart speaker. A smart speaker needs to be always on, waiting for you to say some keyword or phrase. That normally requires a lot of power. But studies have shown that a very simple spiking neural algorithm running in one simple chip can do this while using almost no power.
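
One way to see why an event-driven chip wins for an always-on task like this: it only does work when there is input activity, whereas a conventional pipeline processes every sample regardless. A toy illustration in Python (the costs are invented numbers; only the ratio matters):

Code:
# Toy contrast between always-on dense processing and event-driven
# (spiking) processing. Costs are invented; only the ratio matters.
samples = [0, 0, 0, 0, 7, 9, 0, 0, 0, 0]   # mostly silence

dense_ops = len(samples)                     # conventional: touch every sample
event_ops = sum(1 for s in samples if s)     # event-driven: touch only events

print("dense ops:", dense_ops)   # 10
print("event ops:", event_ops)   # 2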

Tell me about the optoelectronic devices that you and your NIST colleagues are working on and how they might improve spiking neural networks.

Shainline: First, you need to understand that light is going to be the best way that you can communicate between neurons in a spiking neural system. That’s because nothing can go faster than light. So using light for communication will allow you to have the biggest spiking neural network possible.

But it’s not enough to just send signals fast. You also need to do it in an energy-efficient manner. So once you’ve chosen to send signals in the form of light, the best energy efficiency you can achieve is if you send just one photon from a neuron to each of its synaptic connections. You can’t make an amount of light any less.
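
To put a number on that floor, the energy of a single photon is fixed by its wavelength. A back-of-envelope calculation in Python, assuming a 1550 nm telecom-band photon (my assumption; the interview doesn't name a wavelength):

Code:
# Energy floor for one-photon-per-synaptic-event signalling.
# The 1550 nm wavelength is an assumed telecom-band value.
h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s
wavelength = 1550e-9   # metres

photon_energy = h * c / wavelength
print(f"energy per photon: {photon_energy:.2e} J")  # ~1.3e-19 J

# For scale: ~20 W spread across the brain's ~86e9 neurons is
# roughly 2.3e-10 J per neuron per second.
print(f"per-neuron budget: {20 / 86e9:.2e} J/s")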

And the superconducting detectors we are investigating are the best there is when it comes to detecting single photons of light—best in terms of how much energy they dissipate and how fast they can operate.

You could also build a spiking neural network that uses room-temperature semiconductors to send and receive optical signals, though. And right now, it isn’t obvious which strategy is best. But because I’m biased, let me share some reasons to pursue the superconducting approach.

Admittedly, using superconducting elements imposes a lot of overhead—you have to build everything in a cryogenic environment so that your devices remain cold enough to superconduct. But once you’ve done that, you can easily add another crucial element: something called a Josephson junction.

Josephson junctions are the key building block for superconducting computing hardware, whether they’re for superconducting qubits in a quantum computer, superconducting digital logic gates, or superconducting neurons.

Once you’ve decided to use light for communication and superconducting single-photon detectors to sense that light, you will have to build your computer in a cryogenic environment. So without further overhead, you now have Josephson junctions at your disposal.

And this brings a benefit that’s not obvious: It turns out that it is easier to integrate Josephson junctions in three dimensions than it is to integrate [MOSFETs—metal oxide semiconductor field-effect transistors] in three dimensions. That’s because with semiconductors, you fabricate MOSFETs on the lower plane of a silicon wafer. Then you put all your wiring layers up on top. And it becomes essentially impossible to put another layer of MOSFETs on top of that with standard processing techniques.

In contrast, it’s not hard to fabricate Josephson junctions on multiple planes. Two different research groups have demonstrated that. The same is true for the single-photon detectors that we’ve been talking about.

This is a key benefit when you consider what will be needed to allow these networks to scale into something resembling a brain in complexity. Compared with semiconductors, you can fit many more neurons and synapses on a wafer because you can stack in the third dimension. You can have maybe 10 layers, and that’s a big advantage.

The theoretical implications of this approach to computing are impressive. But what kind of hardware have you and your colleagues actually built?

Shainline: One of our most exciting recent results is the demonstration of the integration of superconducting single-photon detectors with Josephson junctions. What that allows us to do is receive single photons of light and use that to switch a Josephson junction and produce an electrical signal and then integrate the signals from many photon pulses.
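
As a toy model of that detect-and-integrate behaviour (not a physical simulation; the pulse size and leak rate are placeholders I've invented):

Code:
# Toy model of the circuit described above: each single-photon
# detection switches the junction, which adds one fixed increment of
# signal to a storage loop; the loop integrates across many pulses.
# The increment and leak rate are invented placeholders.
def integrate_photon_pulses(detections, quantum=1.0, leak=0.02):
    stored = 0.0
    trace = []
    for hit in detections:      # 1 = photon detected this time step
        stored *= (1.0 - leak)  # slow decay of the stored signal
        if hit:
            stored += quantum   # junction switches: add a fixed pulse
        trace.append(round(stored, 3))
    return trace

print(integrate_photon_pulses([1, 0, 1, 1, 0, 0, 1]))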

We’ve recently demonstrated that technology here in our lab. We have also fabricated on a chip light sources that work at low temperatures. And we’ve spent a lot of time on the waveguides needed to carry light signals around on a chip, too.

I mentioned the 3D-integration—the stacking—that’s possible with this kind of computing technology. But if you’re going to have each neuron communicate to a few thousand other neurons, you would also need some way for optical signals to transition without loss from a waveguide in one layer to a waveguide in another layer. We’ve demonstrated that with as many as three stacked planes of these waveguides and believe we could extend that to 10 or so layers.

When you say “integration,” do you just mean that you’ve wired these components together, or do you have everything working on one chip?

Shainline: We have indeed combined superconducting single-photon detectors with Josephson junctions on one chip. That chip gets mounted on a little printed circuit board that we put inside a cryostat to keep it cold enough to remain superconducting. And we use fiber optics for communication from room temperature to low temperature.

Why are you so keen to pursue this approach, and why aren’t others doing the same?

Shainline: There are some pretty strong theoretical arguments as to why this approach to neuromorphic computing could be quite a game changer. But it requires interdisciplinary thinking and collaboration, and right now, we’re really the only group doing specifically this.

I would love it if more people got into the mix. My goal as a researcher is not to be the one that does all this stuff first. I'd be very pleased if researchers from different backgrounds contributed to the development of this technology.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
Reactions: 30 users

MDhere

Regular
If you want to stop worrying about competitors and whether Brainchip is moving fast enough I suggest reading the following article: Toward Optoelectronic Chips That Mimic the Human Brain […]
FF stick with yr computer as you dish out incredible stuff like this!! On yr mobile phone u may be distracted 😀
AKIDA BALLISTA
 
  • Like
  • Haha
  • Love
Reactions: 7 users

Bobbygant

Regular
Mercedes’ Benz - “We want to lead in car software because we want Mercedes’ to constantly learn”
22D0E972-B52D-41AA-9BDF-2C71539172BD.png
 
  • Like
  • Fire
Reactions: 15 users
 
  • Like
  • Fire
  • Love
Reactions: 53 users

chapman89

Founding Member
CD01B5C9-DD9D-4AF7-B80F-3AD153CD0BC0.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 46 users

Dhm

Regular
If you want to stop worrying about competitors and whether Brainchip is moving fast enough I suggest reading the following article: Toward Optoelectronic Chips That Mimic the Human Brain […]
FF, I see no mention, yet again, of our exciting Akida chip. Maybe I will send off a friendly email to Jeff Shainline asking "please explain"
 
  • Like
  • Haha
  • Fire
Reactions: 10 users


About NVISO

NVISO is an Artificial Intelligence company founded in 2009 and headquartered at the Innovation Park of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. Its mission is to help teach machines to understand people and their behavior to make autonomous machines safe, secure, and personalized for humans. As leader in human behavioral AI, it provides robust embedded software solutions that can sense, comprehend, and act upon human behavior in real-world environments deployed at the deep edge. It achieves this through real-time perception and observation of people and objects in contextual situations combined with the reasoning and semantics of human behavior based on trusted scientific research. NVISO’s technology is made accessible through ready-to-use AI solutions addressing Smart Mobility and Smart Health and Living applications (in-cabin monitoring systems, health assessments, and companion robot sensing) with a key focus on the deep and extreme edge with ultra-low power processing capabilities. With a singular focus on how to apply the most advanced and robust technology to industry and societal problems that matter, NVISO’s solutions help advance human potential through more robust and rich human machine interactions. ir.nviso.ai
 
  • Like
  • Fire
Reactions: 37 users
About NVISO […]

4400D933-22F8-469D-83D0-AC1D72DAEA18.jpeg

FC7C77C2-5679-49C8-8783-E8E456A997B7.jpeg

F5EEEA06-A496-42F8-857D-6DB4D02F6841.jpeg
 
  • Like
  • Fire
  • Wow
Reactions: 47 users

Kachoo

Regular
I can see the excitement, but realistically most knew that Akida would likely not be in a Merc until 2024.

The likely revenue stream kicking off would be from Renesas and MegaChips and the earlier partners. Development of new leading-edge tech takes time.

Though seeing what's out there is pretty cool.
 
  • Like
  • Love
Reactions: 17 users
FF, I see no mention, yet again, of our exciting Akida chip. Maybe I will send off a friendly email to Jeff Shainline asking "please explain"
Hi Dhm
Maybe not this time. I have sent this article to Tony Dawe, who is responsible for having Macquarie initiate coverage of Brainchip. It is a project he has had running to bring them on board. Congratulations to Tony.

The purpose of sending the article was to suggest it might be something to send to his Macquarie contacts to undermine the suggestion in their report that IBM’s TrueNorth and Intel’s Loihi are competitors in this field.

The beauty of this article is that the solution being explored gives you hypothermia in your lounge room, not to mention frostbite if you touch the TV remote, and only Elon Musk and Jeff Bezos can afford it.

I have also sent it to my other contact on the basis that they might add him to their list of academics to bring into the Brainchip fold.

In other words, don’t bite him on the heel just yet. LOL He is so well qualified that he is a very useful resource.

Regards
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 28 users

Terroni2105

Founding Member
About NVISO […]

involved in Smart Health, Smart Mobility, Smart Wealth

 
  • Like
  • Love
  • Fire
Reactions: 25 users
Trawling through their recent news feeds, it seems NVISO is working with some interesting companies.



Another strong partnership with another premium company in my opinion. Well done Brainchip.
 
  • Like
  • Fire
  • Love
Reactions: 40 users

Boab

I wish I could paint like Vincent
If you want to stop worrying about competitors and whether Brainchip is moving fast enough I suggest reading the following article: Toward Optoelectronic Chips That Mimic the Human Brain […]
This reminds me of the story of Archer Materials (AXE). They have figured out a way of doing quantum computing at room temperature, and, as with BRN, Mr Shainline may not be aware of this.
AXE is a very exciting Co, but I felt they were a long way off from producing a commercially available product, and that's why I sold all my AXE shares to buy more BRN. As Sean Hehir said, "watch the financials".
I'm excited.
 
  • Like
  • Fire
  • Love
Reactions: 17 users
AGM notice listed on the ASX. Details how to log in remotely and vote etc.

98 pages of info to devour. :)
 
  • Like
  • Wow
  • Love
Reactions: 16 users
involved in Smart Health, Smart Mobility, Smart Wealth

SMART WEALTH. How appropriate is that for all visionary Brainchip investors? I mostly hate marketing spin, but I must say this is one for a T-shirt. 😂🤣

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Haha
  • Like
  • Fire
Reactions: 15 users