NVISO / BRN

BrainChip and NVISO partner on human behavioral analytics in automotive and edge AI devices



By admin
April 24, 2022


Laguna Hills, United States and Lausanne, Switzerland – 19 April 2022 – BrainChip Holdings Ltd, a commercial producer of neuromorphic artificial intelligence (AI) IP and chips, and nViso SA (NVISO), a human behavioural analytics AI company, announced a collaboration targeting battery-powered applications in robotics and mobility/automotive. The partnership addresses the need for high levels of AI performance combined with ultra-low-power technologies. The initial effort will include implementing NVISO’s AI solutions for Social Robots and In-cabin Monitoring Systems on BrainChip’s Akida processors.

Developers of automotive and consumer technologies are striving for devices that better respond to human behaviour, which requires tools and applications that interpret human behaviour captured from cameras and sensors on those devices. However, these environments can be constrained by limited compute performance, power consumption, and lapses in cloud connectivity. Akida addresses these weaknesses with high performance at ultra-low power (micro- to milliwatts), and by performing AI/ML processing of vision/image, motion, and sound data directly on the device instead of in a remote cloud. Since information is not sent off-device, user privacy and security are also protected.
NVISO’s technology is uniquely able to analyse signals of human behaviour such as facial expressions, emotions, identity, head poses, gaze, gestures, activities, and the objects with which users interact. In robotics and in-vehicle applications, human behaviour analytics detect the user’s emotional state to provide personalised, adaptive, interactive, and safe devices and systems. The result of the collaboration between NVISO and BrainChip is expected to enable more advanced, more capable, and more accurate AI in consumer products.
“Our work with BrainChip will support AI’s demanding power/cost/performance needs for OEMs, even at mass production and scale, so they can benefit from faster and more efficient development cycles,” says Tim Llewellynn, CEO of NVISO. “Ultra-low-power edge-based consumer processing is expected to deliver a more intelligent and individualised user experience, and we believe running our AI solutions for Social Robots and In-cabin Monitoring Systems on Akida will provide a competitive edge for joint customers demanding always-on features on low power budgets.”
“NVISO’s human behavioural analytics AI systems offer fascinating possibilities in homes, cars, buildings, hospitals, and more, and we are excited to support these capabilities with BrainChip’s processing performance and energy efficiency,” says Sean Hehir, BrainChip CEO. “This is not only a collaboration between two companies; it is advancing the state of the art in AI with platforms that let edge AI devices interpret human behaviour, improving product performance and user experience.”
BrainChip’s neuromorphic processor, Akida, mimics the human brain to analyse only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Keeping AI/ML local to the chip, independent of the cloud, also dramatically reduces latency while improving privacy and data security.
 
Reactions: 20 users
Pages and pages of links to this latest press release regarding Nviso from everywhere around the world - Jerome Nadel is certainly having an impact on raising the international awareness of our Australian born unicorn.

The best kept secret will soon be the secret everyone is talking about.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 24 users

Boab

I wish I could paint like Vincent
BrainChip and NVISO partner on human behavioral analytics in automotive and edge AI devices
Wow, what a fabulous article. A must read.
 
Reactions: 8 users
Google translation of Mainland Chinese news item:

Focusing on self-driving, BrainChip works with NVISO

Pay close attention

Zhou Yi, editor in charge

2022-04-24 09:17

[Autohome Information] According to foreign media reports, BrainChip, a commercial manufacturer of neural AI IP and chips, announced that it has reached a cooperation with nViso SA (NVISO), a human behaviour analysis AI company, to focus on batteries in the field of robotics and mobile travel/automatic driving. Power application to meet the needs of high-level artificial intelligence performance using ultra-low power consumption technology. In the initial stage of cooperation, the two sides will apply NVISO's AI solution to BrainChip's Akida processor for social robots and in-vehicle monitoring systems.

Developers of automotive and consumer technologies are working hard to develop devices that can better respond to human behaviour, so tools and applications are needed to explain human behaviour captured from cameras and sensors on devices. However, these environments may be limited by limited computing performance, power consumption and cloud connection failures. With high performance and ultra-low power consumption (micro to milliwatts), as well as AI/ML processing of visual/image, motion and sound data directly on the device rather than in the remote cloud, Akida can deal with the above problems. Because information is not sent outside the device, the privacy and security of users are also protected.

NVISO technology can analyse the signals of human behaviour, such as facial expressions, emotions, identity, head posture, gaze, gestures, activities and objects that interact with users. In robotics and on-board applications, human behaviour analysis can detect the emotional state of users to provide personalised, adaptive, interactive and safe devices and systems. The results of the cooperation between NVISO and BrainChip are expected to achieve more advanced, powerful and accurate artificial intelligence in consumer products.

Tim Llewellynn, CEO of NVISO, said: "This cooperation will meet the demanding power/cost/performance needs of AI by OEMs. Even in the case of mass production and scale, OEMs can benefit from faster and more efficient development cycles. Edge-based ultra-low-power consumer processing is expected to provide a smarter and more personalised user experience, and we believe that AI solutions running our social robots and in-vehicle monitoring systems on Akida will provide a competitive advantage for common customers to meet their requirements such as low-power budgets and continuous functions."

Sean Hehir, CEO of BrainChip, said: "NVISO's human behaviour analysis AI system can provide more possibilities for homes, cars, buildings, hospitals, etc. We are keen to realise the above functions through BrainChip's processing performance and energy efficiency. This cooperation will promote the development of artificial intelligence through the edge AI device platform, so as to explain human behaviour and improve product performance and user experience." (Compilation/Autohome Zhou Yi)
By the way, I did not insert the words “Pay close attention”; these are actually in the article. Perhaps a directive from the CCP😂, or maybe it is just a translation thing and could mean ‘Important Announcement’ in their language.

Regards
FF

AKIDA BALLISTA
 
Reactions: 9 users

Kachoo

Regular
Pages and pages of links to this latest press release regarding Nviso from everywhere around the world - Jerome Nadel is certainly having an impact on raising the international awareness of our Australian born unicorn.

The best kept secret will soon be the secret everyone is talking about.

My opinion only DYOR
FF

AKIDA BALLISTA
Yes FF, the media releases are snowballing big-time. It's only a matter of time till the company starts making money, and that will snowball too, imo. I have never seen so many articles written up on BRN in such a short time. The word is getting out.

I can't keep up with the releases you all post, lol, with work taking up lots of time.
 
Reactions: 14 users
Just a brilliant find @IloveLamp. I thought the full article should be posted:
A Shift in Computer Vision is Coming

By Sally Ward-Foxton, 04.28.2022
Is computer vision about to reinvent itself, again?
Ryad Benosman, professor of Ophthalmology at the University of Pittsburgh and an adjunct professor at the CMU Robotics Institute, believes that it is. As one of the founding fathers of event–based vision technologies, Benosman expects neuromorphic vision — computer vision based on event–based cameras — to be the next direction computer vision will take.

“Computer vision has been reinvented many, many times,” he said. “I’ve seen it reinvented twice at least, from scratch, from zero.”
Ryad Benosman (Source: University of Pittsburgh)
Benosman cites a shift in the 1990s from image processing with a bit of photogrammetry to a geometry–based approach, and then today with the rapid change towards machine learning. Despite these changes, modern computer vision technologies are still predominantly based on image sensors — cameras that produce an image similar to what the human eye sees.
According to Benosman, until the image sensing paradigm is no longer useful, it holds back innovation in alternative technologies. The effect has been prolonged by the development of high–performance processors such as GPUs which delay the need to look for alternative solutions.

“Why are we using images for computer vision? That’s the million–dollar question to start with,” he said. “We have no reasons to use images, it’s just because there’s the momentum from history. Before even having cameras, images had momentum.”

IMAGE CAMERAS

Image cameras have been around since the pinhole camera emerged in the fifth century B.C. By the 1500s, artists built room–sized devices used to trace the image of a person or a landscape outside the room onto canvas. Over the years, the paintings were replaced with film to record the images. Innovations such as digital photography eventually made it easy for image cameras to become the basis for modern computer vision techniques.
Benosman argues, however, that image camera–based techniques for computer vision are hugely inefficient. His analogy is the defense system of a medieval castle: guards positioned around the ramparts look in every direction for approaching enemies. A drummer plays a steady beat, and on each drumbeat, every guard shouts out what they see. Among all the shouting, how easy is it to hear the one guard who spots an enemy at the edge of a distant forest?
The 21st century hardware equivalent of the drumbeat is the electronic clock signal and the guards are the pixels — a huge batch of data is created and must be examined on every clock cycle, which means there is a lot of redundant information and a lot of unnecessary computation required.
Prophesee’s evaluation kit for its DVS sensor, developed in collaboration with Sony. Benosman is a co–founder of Prophesee. (Source: Prophesee)
“People are burning so much energy, it’s occupying the entire computation power of the castle to defend itself,” Benosman said. If an interesting event is spotted, represented by the enemy in this analogy, “you’d have to go around and collect useless information, with people screaming all over the place, so the bandwidth is huge… and now imagine you have a complicated castle. All those people have to be heard.”
Enter neuromorphic vision. The basic idea is inspired by the way biological systems work, detecting changes in the scene dynamics rather than analyzing the entire scene continuously. In our castle analogy, this would mean having guards keep quiet until they see something of interest, then shout their location to sound the alarm. In the electronic version, this means having individual pixels decide if they see something relevant.
“Pixels can decide on their own what information they should send, instead of acquiring systematic information they can look for meaningful information — features,” he said. “That’s what makes the difference.”
This event–based approach can save a huge amount of power, and reduce latency, compared to systematic acquisition at a fixed frequency.
“You want something more adaptive, and that’s what that relative change [in event–based vision] gives you, an adaptive acquisition frequency,” he said. “When you look at the amplitude change, if something moves really fast, we get lots of samples. If something doesn’t change, you’ll get almost zero, so you’re adapting your frequency of acquisition based on the dynamics of the scene. That’s what it brings to the table. That’s why it’s a good design.”
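To make the contrast concrete, here is a minimal sketch of the two readout styles. It is a hypothetical illustration (the log-intensity thresholding scheme, array sizes, and threshold value are assumptions for demonstration, not Prophesee’s or BrainChip’s actual pipeline): the frame camera reports every pixel on every tick, while each event pixel remembers the last value it reported and fires only when the change since then crosses a threshold.

```python
import numpy as np

def frame_readout(frames):
    """Frame camera: every pixel is reported on every clock tick,
    whether or not anything in the scene changed."""
    return sum(f.size for f in frames)  # total samples transmitted

def event_readout(frames, threshold=0.15):
    """Event pixels: each pixel remembers the last log-intensity it
    reported and emits an event only when the change since then
    exceeds the threshold. Fast motion yields many events, a static
    scene yields almost none (adaptive acquisition frequency)."""
    ref = np.log1p(frames[0].astype(float))  # last reported values
    events = 0
    for f in frames[1:]:
        logf = np.log1p(f.astype(float))
        fired = np.abs(logf - ref) > threshold  # pixels that "shout"
        events += int(fired.sum())
        ref[fired] = logf[fired]  # only fired pixels update their reference
    return events

# A mostly static 240x180 scene with one small moving bright patch.
frames = [np.full((180, 240), 100, dtype=np.uint8) for _ in range(100)]
for t, f in enumerate(frames):
    f[50:60, t:t + 10] = 200  # patch shifts one column per frame

print("frame samples:", frame_readout(frames))  # 4,320,000
print("events:       ", event_readout(frames))  # roughly 2,000
```

The ratio is the point: the frame camera’s data volume scales with pixel count and clock rate, while the event stream scales with how much of the scene actually changes.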
Benosman entered the field of neuromorphic vision in 2000, convinced that advanced computer vision could never work because images are not the right way to do it.
“The big shift was to say that we can do vision without grey levels and without images, which was heresy at the end of 2000 — total heresy,” he said.
The techniques Benosman proposed — the basis for today’s event–based sensing — were so different that papers presented to the foremost IEEE computer vision journal at the time were rejected without review. Indeed, it took until the development of the dynamic vision sensor (DVS) in 2008 for the technology to start gaining momentum.
Some Prophesee customer applications showing the difference between image camera and DVS sensor outputs. (Source: Prophesee)

NEUROSCIENCE INSPIRATION

Neuromorphic technologies are those inspired by biological systems, including the ultimate computer, the brain and its compute elements, the neurons. The problem is that no–one fully understands exactly how neurons work. While we know that neurons act on incoming electrical signals called spikes, until relatively recently, researchers characterized neurons as rather sloppy, thinking only the number of spikes mattered. This hypothesis persisted for decades. More recent work has proven that the timing of these spikes is absolutely critical, and that the architecture of the brain is creating delays in these spikes to encode information.
Today’s spiking neural networks, which emulate the spike signals seen in the brain, are simplified versions of the real thing — often binary representations of spikes. “I receive a 1, I wake up, I compute, I sleep,” Benosman explained. The reality is much more complex. When a spike arrives, the neuron starts integrating the value of the spike over time; there is also leakage from the neuron meaning the result is dynamic. There are also around 50 different types of neurons with 50 different integration profiles. Today’s electronic versions are missing the dynamic path of integration, the connectivity between neurons, and the different weights and delays.
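The dynamics described here are usually captured, in their simplest form, by a leaky integrate-and-fire model. The sketch below is a generic textbook toy, not a description of Akida’s or any vendor’s silicon, and the time constant, weight, and threshold values are illustrative assumptions; it shows why spike timing, not just spike count, determines the output.

```python
import math

def lif_run(spike_times, sim_time=100.0, dt=0.1,
            tau=10.0, threshold=1.0, w=0.6):
    """Toy leaky integrate-and-fire neuron: each input spike adds
    weight w to the membrane potential v, which leaks toward zero
    with time constant tau; crossing the threshold emits an output
    spike and resets v."""
    decay = math.exp(-dt / tau)  # per-step leak factor
    inputs = sorted(spike_times)
    v, i, out = 0.0, 0, []
    for step in range(int(sim_time / dt)):
        t = step * dt
        v *= decay                               # leak between inputs
        while i < len(inputs) and inputs[i] <= t:
            v += w                               # integrate input spike
            i += 1
        if v >= threshold:                       # fire and reset
            out.append(round(t, 1))
            v = 0.0
    return out

# Same two input spikes, different relative timing:
print(lif_run([10.0, 12.0]))  # close together -> output spike fires
print(lif_run([10.0, 40.0]))  # spread apart -> leak wins, no output
```

Real neurons then add, as Benosman notes, dozens of integration profiles, dendritic delays, and dynamic connectivity on top of this simple picture.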
“The problem is to make an effective product, you cannot [imitate] all the complexity because we don’t understand it,” he said. “If we had good brain theory, we would solve it — the problem is we just don’t know [enough].”
Today, Benosman runs a unique laboratory dedicated to understanding the mathematics behind cortical computation, with the aim of creating new mathematical models and replicating them as silicon devices. This includes directly monitoring spikes from pieces of real retina.
For the time being, Benosman is against trying to faithfully copy the biological neuron, describing that approach as old–fashioned.
“The idea of replicating neurons in silicon came about because people looked into the transistor and saw a regime that looked like a real neuron, so there was some thinking behind it at the beginning,” he said. “We don’t have cells; we have silicon. You need to adapt to your computing substrate, not the other way around… if I know what I’m computing and I have silicon, I can optimize that equation and run it at the lowest cost, lowest power, lowest latency.”

PROCESSING POWER

The realization that it’s unnecessary to replicate neurons exactly, combined with the development of the DVS camera, are the drivers behind today’s neuromorphic vision systems. While today’s systems are already on the market, there is still a way to go before we have fully human–like vision available for commercial use.
Initial DVS cameras had “big, chunky pixels,” since components around the photo diode itself reduced the fill factor substantially. While investment in the development of these cameras accelerated the technology, Benosman made it clear that the event cameras of today are simply an improvement of the original research devices developed as far back as 2000. State–of–the–art DVS cameras from Sony, Samsung, and Omnivision have tiny pixels, incorporate advanced technology such as 3D stacking, and reduce noise. Benosman’s worry is whether the types of sensors used today can successfully be scaled up.
“The problem is, once you increase the number of pixels, you get a deluge of data, because you’re still going super fast,” he said. “You can probably still process it in real time, but you’re getting too much relative change from too many pixels. That’s killing everybody right now, because they see the potential, but they don’t have the right processor to put behind it.”
General–purpose neuromorphic processors are lagging behind their DVS camera counterparts. Efforts from some of the industry’s biggest players (IBM Truenorth, Intel Loihi) are still a work in progress. Benosman said that the right processor with the right sensor would be an unbeatable combination.
“[Today’s DVS] sensors are extremely fast, super low bandwidth, and have a high dynamic range so you can see indoors and outdoors,” Benosman said. “It’s the future. Will it take off? Absolutely!”
“Whoever can put the processor out there and offer the full stack will win, because it’ll be unbeatable,” he added.
— Professor Ryad Benosman will give the keynote address at the Embedded Vision Summit in Santa Clara, Calif., on May 17.
 
Reactions: 14 users
I should mention here for newer shareholders that the founder of Prophesee and Peter van der Made had a little bit of a social media admiration feast going on, probably around 2019, discovered by others; then it went quiet.

Just background, nothing more, but it is clear the author knows about AKIDA.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 8 users

IloveLamp

Top 20
Just a brilliant find @IloveLamp. I thought the full article should be posted:
A Shift in Computer Vision is Coming
Thanks ff 😇
 
Reactions: 4 users
I should mention here for newer shareholders that the founder of Prophesee and Peter van der Made had a little bit of a social media admiration feast going on, probably around 2019, discovered by others; then it went quiet.

Just background, nothing more, but it is clear the author knows about AKIDA.

My opinion only DYOR
FF

AKIDA BALLISTA

@Fact Finder are you able to elaborate mate?
 
Reactions: 1 users
I am not on LinkedIn or Twitter but, without using those exact words, they would like each other's achievements and the founder of Prophesee would add a few words of congratulations. It was that sort of thing. I suspect that one of the other long-term investors who inhabited those other places has probably kept something, as there are a few bowerbirds amongst our numbers.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 6 users

stuart888

Regular
I am not on LinkedIn or Twitter but, without using those exact words, they would like each other's achievements and the founder of Prophesee would add a few words of congratulations. It was that sort of thing. I suspect that one of the other long-term investors who inhabited those other places has probably kept something, as there are a few bowerbirds amongst our numbers.

My opinion only DYOR
FF

AKIDA BALLISTA
Thanks for sharing, Fact Finder. Interesting that BrainChip has a direct link to this Prophesee sensor engineer's article in EE Times. On this page: https://brainchipinc.com/on-path-to-artificial-brains/
it links to Prophesee's "Bringing Neuromorphic Vision to the Edge" by Jean-Luc Jaffard: https://www.eetimes.com/bringing-neuromorphic-vision-to-the-edge/

Prophesee has some really interesting Use Cases, where the Akida Spiking Neural Processing solution would fit nicely!

https://www.prophesee.ai/event-based-vision-applications/
 
Reactions: 13 users

Deleted member 118

Guest
Maybe we can throw Magik Eye into the equation to assist NVISO and BrainChip in helping Panasonic build a social robot, as Louis DiNardo once said the new partnership opens a new and exciting gateway for the company in Japan.
 
Reactions: 9 users

Perhaps

Regular
I am not on LinkedIn or Twitter but, without using those exact words, they would like each other's achievements and the founder of Prophesee would add a few words of congratulations. It was that sort of thing. I suspect that one of the other long-term investors who inhabited those other places has probably kept something, as there are a few bowerbirds amongst our numbers.

My opinion only DYOR
FF

AKIDA BALLISTA
Prophesee partners with SynSense, so possibly not the right place for BrainChip.
 
Pages and pages of links to this latest press release regarding Nviso from everywhere around the world - Jerome Nadel is certainly having an impact on raising the international awareness of our Australian born unicorn.

The best kept secret will soon be the secret everyone is talking about.

My opinion only DYOR
FF

AKIDA BALLISTA

It’s starting to feel kinda scary that we all have the winning lottery tickets, don’t you think? I hope I have enough of them; maybe when we start splitting, things will definitely get out of control.
The future looks very bright indeed 😎
 
Reactions: 6 users

Deleted member 118

Guest
Can’t wait to buy one for myself for Xmas; it was recently put up on the NVISO website. Hopefully it can move around your home.
Reactions: 9 users

Deleted member 118

Guest
Reactions: 8 users

M_C

Founding Member
Reactions: 11 users

SERA2g

Founding Member
Not sure whether to post this in a Panasonic thread or NVISO. Also not sure whether this has already been shared, but here goes!

Nviso delivers human behaviour AI for Panasonic’s companion robots

LinkedIn Post - "The two companies [Panasonic and Nviso] have signed a multi-year license agreement to deploy the technology."

In the LinkedIn post comments - Tim Llewellyn: "Congrats should go to the amazing NVISO engineering team and our partners to make ultra-low power high performance AI at the Extreme Edge a reality in a mass market consumer product all while having to endure the disruptions of COVID"
The LinkedIn post has been liked by Adam Osseiran, Tony Dawe and Anil Mankar.

Startup ticker article - "the AI apps can be optimised for typically resource-constrained low power and low-cost processing platforms demanded by battery-operated consumer products. The apps can be easily configured to suit any camera system for optimal distance and camera angle performance. The apps are robust to real-world imaging conditions thanks to NVISO’s large-scale proprietary human behaviour databases, and the solutions do not require information to be sent off-device for processing elsewhere, thus protecting user privacy and safety"

NVISO Press Release - "These AI Apps are specifically designed for low-power and low-memory companion robot hardware platforms enabling “always-on” operation without requiring an internet connection."

Nviso / Brainchip partnership - "The initial effort will include implementing NVISO’s AI solutions for Social Robots and In-cabin Monitoring Systems on BrainChip’s Akida™ processors"

So, extreme edge, always-on, social robot, ultra low power and on-device processing... This is akida!

A quick Google tells me that Panasonic have only made a few hundred units, so unfortunately I don't think this one will rake in the millions in royalties we're all wanting. It's really cool to see a consumer product containing Akida though.


Reactions: 35 users