BRN Discussion Ongoing

HUSS

Regular
Hi JB

It's great that Mercedes fills you with confidence!

For what it's worth, I quoted Markus Schafer's comments about there being 'a long way to go' because they provide the context for Heinrich Gotzig's response: "Thanks a lot for this very interesting article. I can confirm that it is a long way to go but very promising."

These comments don't diminish Valeo's progress in commercialising a product containing Akida. They simply confirm that current neuromorphic technologies still have a long way to go before they amount to a 'brain on a chip' comparable to the human brain, which contains around 100 billion neurons operating on roughly 20 watts of power.

Cheers!
Yes, correct, 100%. A full brain on a chip is still a long way off and will of course take a few years, but BrainChip has already started paving that way and has already been successful. This has been confirmed by many industry leaders, and we are also a minimum of three years ahead of them! But that doesn't mean Valeo and MB are not already using our Akida technology, which contains a fraction of a brain, across many industries: automotive, IoT, medical, robotics, industrial.

You don't need to wait for a full brain on a chip to start using the technology! We already have many examples and use cases proven and shown by our partners. I can't imagine what the use cases will be when we do have a full brain on a chip! Plus, BRN is already in that technology race, and is one of its leaders too, because it takes a lot of work, effort and collaboration with our ecosystem partners.

Cheers
 
  • Like
  • Fire
  • Love
Reactions: 19 users
@chapman89 and @Diogenese

I’m off work with an injured back at the moment so I have a bit of time up my sleeve.

I'm watching the rest of the Qualcomm video. It gets interesting at the 50:12 mark, when he starts talking about "always on" sensing, reading just 1s and 0s, and a neural network. He talks about support for third-party neural network experiences.

:)


View attachment 27496




Just in case you missed it!




I’m hoping that’s my early retirement right there!

The use case example he gave walked and quacked like a duck.

@Diogenese

:)

Edit: of course, it is just speculation. Until there is word from the company to confirm it, it's just a wild theory from an anonymous BrainChip investor who has a bias. Given there was an opportunity to promote the new technology during this presentation and BrainChip wasn't mentioned, either they're using their own technology or they want to keep the magic secret to themselves!

So long as the money shows up in the quarterly, I'm not fazed by how it gets there!
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 30 users
D

Deleted member 118

Guest
  • Haha
  • Like
Reactions: 4 users
Just in case you missed it!

View attachment 27504


I’m hoping that’s my early retirement right there!

The use case example he gave walked and quacked like a duck.

@Diogenese

:)

Edit: of course, it is just speculation. Until there is word from the company to confirm it, it's just a wild theory from an anonymous BrainChip investor who has a bias. Given there was an opportunity to promote the new technology during this presentation and BrainChip wasn't mentioned, either they're using their own technology or they want to keep the magic secret to themselves!

So long as the money shows up in the quarterly, I'm not fazed by how it gets there!

Of course, a simple Google search revealed Qualcomm has its own neural software:



So I could be guilty of jumping the gun on this one and providing poorly researched information!

Sorry. :(


Edit: I might blame my pain meds!
 
Last edited:
  • Sad
  • Fire
  • Haha
Reactions: 9 users
Of course, a simple Google search revealed Qualcomm has its own neural software:

View attachment 27506

So I could be guilty of jumping the gun on this one and providing poorly researched information!

Sorry. :(

But I just quit my job… what the….

🤣😂

Also tomorrow is Friday. Get some lube ready for the usual reaming 😱😎🤣
 
  • Haha
  • Like
Reactions: 15 users

toasty

Regular
Of course, a simple Google search revealed Qualcomm has its own neural software:

View attachment 27506

So I could be guilty of jumping the gun on this one and providing poorly researched information!

Sorry. :(


Edit: I might blame my pain meds!
I think what they have been using previously is software-based CNNs. I'm in the camp that says this looks like AKIDA. If it's not, then there may be patent infringements here!!! Always on, reacts only to change events, super low power consumption, an AI processor shown in the block diagram, and the camera module is from our partner Prophesee.........this definitely walks like a duck.........

My mental ramblings only.....DYOR and make up your own mind.
 
  • Like
  • Fire
  • Wow
Reactions: 22 users
Of course, a simple Google search revealed Qualcomm has its own neural software:

View attachment 27506

So I could be guilty of jumping the gun on this one and providing poorly researched information!

Sorry. :(


Edit: I might blame my pain meds!
Jeez. I put a massive order in just after close, and now my internet is down, my mobile phone and data aren't working due to a statewide brownout, and the landline is munted. 🤯
If you're wondering how I sent this post.... Keep wondering 😂
 
  • Haha
  • Like
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Last edited:
  • Like
  • Fire
  • Love
Reactions: 34 users
BrainChip is partnered with VVDN Technologies, and Qualcomm's qSmartAI80_CUQ610 AI vision kit was developed in partnership with VVDN Technologies.

Just saying...





It’s a tangled web. At least we know they are all aware of us through our partners. VVDN is a great company to be involved with!


 
  • Like
  • Fire
  • Love
Reactions: 23 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hang on everyone! Put the cork back in the Bollingers.

I found this article from April 2022, which discusses Prophesee's collaboration with iCatch on a new event-based imaging sensor. It repeats the same info as described in Qualcomm's presentation, as well as the slide I posted earlier. It states: "From a hardware perspective, the new sensor appears to be very impressive, offering specs such as a >10k fps time-resolution equivalent, >120 dB dynamic range, and a 3nW/event power consumption. From a compute perspective, Metavision promises anywhere from 10 to 1000x less data than frame-rate-based solutions, with a throughput of >1000 obj/s, and a motion period irregularity detection of 1%."


Event-based Vision Sensor—“Metavision”—Promises Impressive Specs

April 20, 2022 by Jake Hertz

PROPHESEE, along with iCatch, have teamed up to provide the industry with the "world's first event-based vision sensor" in a 13 x 15 mm mini PBGA package.


As computer vision applications gain more and more momentum, companies are continually investing in new ways to improve the technology. At the same time, as pure imaging improves, power efficiency and data management become significant challenges on the hardware level.

An example computer vision block diagram. Image used courtesy of Cojbasic et al


One proposed solution to this challenge is ditching conventional imaging techniques in favor of event-based vision. Aiming to capitalize on this type of technology, this week, PROPHESEE, in collaboration with iCatch, released a new event-based vision sensor that boasts some impressive specs.
This article will discuss the concept of event-based vision, the benefits it offers, and dissect PROPHESEE’s newest offering.

Challenges in Conventional Vision​

One of the significant challenges in imaging systems is that, as they improve by conventional measures, they tend to put more stress on the hardware. Notably, as resolution and field of view increase, the amount of raw data produced by the camera also increases.
While this may be a positive thing in terms of imaging quality, it creates a plethora of challenges for supporting hardware.

An example block diagram of a general image sensor. Image used courtesy of Microsoft and LiKamWa et al


This increase in data traffic can have the harmful effect of placing an increased burden on computing resources, which now need to be able to process more data at faster speeds to maintain real-time operation. On top of this, conventional imaging systems work by applying the same frame rate to all objects in the scene. The result is that moving objects may end up being undersampled, and the important data in a scene can end up being lost.
When applied to machine learning, this increase in data traffic equals higher latency and more power consumption needed to complete a task. At the same time, much of the data being processed may not even be the essential information within a scene—further adding to the wasted energy and latency of the system.
These problems become even more concerning when coupled with the increasing demand for low power, low latency systems.
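
To put rough numbers on the data-traffic point above, here is a back-of-envelope comparison in Python. All the figures are illustrative assumptions (a 720p, 60 fps, 8-bit mono frame camera versus an event stream from a mostly static scene), not measurements from Prophesee or from this article:

```python
# Back-of-envelope only; every rate below is an assumed, illustrative number.
frame_w, frame_h, fps, bits_per_pixel = 1280, 720, 60, 8
frame_bandwidth = frame_w * frame_h * fps * bits_per_pixel      # raw pixel data, bits/s

events_per_second = 300_000        # assumed average for a mostly static scene
bits_per_event = 5 * 8             # assumed ~5-byte packed event (x, y, polarity, timestamp)
event_bandwidth = events_per_second * bits_per_event

print(f"frame camera : {frame_bandwidth / 1e6:7.1f} Mbit/s")
print(f"event camera : {event_bandwidth / 1e6:7.1f} Mbit/s")
print(f"reduction    : {frame_bandwidth / event_bandwidth:7.1f}x")
```

With a busier scene the event rate climbs and the advantage shrinks, which is why headline figures such as "10 to 1000x less data" are always scene-dependent.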

Solutions With Event-Based Vision?​

In an attempt to alleviate these issues, one promising solution is event-based vision.

Event-based vision (right) aims to remove redundant information from conventional vision (left) systems. Image used courtesy of PROPHESEE


The concept of event-based vision rejects traditional frame-based imaging approaches, where every pixel reports back everything it sees at all times.
Instead, event-based sensing relies on each pixel to report what it sees only if it senses a significant change in its field of view. By only producing data when an event occurs, event-based sensing significantly reduces the raw amount of data created by imaging systems while also ensuring that the produced data is full of useful information.
Overall, the direct result of this type of sensing technology is that machine learning algorithms have to process less data, meaning lower power consumption and lower latency overall.
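
To get a feel for the principle, here is a minimal, purely illustrative Python sketch of the idea (the contrast threshold, log-intensity response and data layout are simplifying assumptions; this is not Prophesee's Metavision pipeline or a real sensor model): each pixel emits a signed "event" only when its log intensity has changed by more than a threshold since the last event it fired.

```python
import numpy as np

def frames_to_events(frames, threshold=0.2, eps=1e-3):
    """Toy event generator: emit (t, x, y, polarity) whenever a pixel's
    log-intensity changes by more than `threshold` since its last event.
    `frames` is a (T, H, W) array of grayscale values in [0, 1]."""
    log_frames = np.log(frames.astype(np.float64) + eps)
    reference = log_frames[0].copy()      # last level that triggered an event, per pixel
    events = []                           # list of (t, x, y, polarity)
    for t in range(1, len(log_frames)):
        diff = log_frames[t] - reference
        fired = np.abs(diff) >= threshold
        ys, xs = np.nonzero(fired)
        for y, x in zip(ys, xs):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
        reference[fired] = log_frames[t][fired]   # reset only where events fired
    return events

# A static scene with one moving bright square: only pixels the square enters
# or leaves generate events; the unchanged background stays completely silent.
T, H, W = 10, 64, 64
frames = np.full((T, H, W), 0.1)
for t in range(T):
    frames[t, 20:30, 5 + 4 * t: 15 + 4 * t] = 0.9
events = frames_to_events(frames)
print(f"{len(events)} events from {T * H * W} pixel samples")
```

Running it shows a static background producing no data at all, which is exactly the data-reduction argument being made here.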

The Metavision Sensor​

This week, PROPHESEE, in collaboration with iCatch, announced the release of its brand new event-based imaging sensor.
Dubbed the "Metavision sensor," the new IC leverages specialized pixels which respond only to changes in their field of view, activating themselves independently when triggered by events. While not an entirely novel technology, PROPHESEE claims that Metavision is the world's first event-based vision sensor available in an industry-standard package, coming in a 13 x 15 mm mini PBGA package.

The new Metavision sensor. Image used courtesy of PROPHESEE


From a hardware perspective, the new sensor appears to be very impressive, offering specs such as a >10k fps time-resolution equivalent, >120 dB dynamic range, and a 3nW/event power consumption.
From a compute perspective, Metavision promises anywhere from 10 to 1000x less data than frame-rate-based solutions, with a throughput of >1000 obj/s, and a motion period irregularity detection of 1%.

Push for More Event-based Vision​

With Metavision, PROPHESEE and iCatch appear to have brought an exciting and promising new technology to an industry-standard format, making it more accessible for engineers everywhere.
Thanks to this, the companies are hopeful that event-based vision could start to permeate into the industry and bring its benefits along with it.
 
  • Like
  • Sad
  • Thinking
Reactions: 21 users

Terroni2105

Founding Member
Of course, a simple Google search revealed Qualcomm has its own neural software:

View attachment 27506

So I could be guilty of jumping the gun on this one and providing poorly researched information!

Sorry. :(


Edit: I might blame my pain meds!
But he talks about hardware, and Akida is hardware, isn't it?
“…this is happening in hardware in real time in 60fps and can be shot with video or photo so pretty cool stuff and you can only do this in hardware”

Edit: I just re-listened to the video, and the part I have quoted above is not related to Prophesee; it is when he is talking about another part.
 
Last edited:
  • Like
Reactions: 11 users
Thank you!







Is this what you are looking for? Not sure if someone has already sorted you out.
 

Diogenese

Top 20
Hang on everyone! Put the cork back in the Bollingers.

I found this article from April 2022, which discusses Prophesee's collaboration with iCatch on a new event-based imaging sensor. It repeats the same info as described in Qualcomm's presentation, as well as the slide I posted earlier. It states: "From a hardware perspective, the new sensor appears to be very impressive, offering specs such as a >10k fps time-resolution equivalent, >120 dB dynamic range, and a 3nW/event power consumption. From a compute perspective, Metavision promises anywhere from 10 to 1000x less data than frame-rate-based solutions, with a throughput of >1000 obj/s, and a motion period irregularity detection of 1%."



Put your happy pants back on - April 2022 was before June 2022 if I remember correctly:-

https://www.prophesee.ai/2022/06/20/brainchip-partners-with-prophesee/

BrainChip Partners with Prophesee Optimizing Computer Vision AI Performance and Efficiency​


Laguna Hills, Calif. – June 14, 2022 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of neuromorphic AI IP, and Prophesee, the inventor of the world’s most advanced neuromorphic vision systems, today announced a technology partnership that delivers next-generation platforms for OEMs looking to integrate event-based vision systems with high levels of AI performance coupled with ultra-low power technologies.
 
  • Like
  • Love
  • Fire
Reactions: 63 users
But he talks about hardware, and Akida is hardware, isn't it?
“…this is happening in hardware in real time in 60fps and can be shot with video or photo so pretty cool stuff and you can only do this in hardware”

Yes, I agree. I did see that it was software whereas they were promoting on-chip hardware, but I was a bit exuberant at the start and hadn't looked into it as closely as I should have before I started celebrating, so I took a more realistic approach in my post so as not to raise expectations. Ultimately we still haven't got an affirming announcement, even via the media if not the ASX.

I see Bravo has now probably found the answer with iCatch. Although, again, I'm not sure where iCatch has gotten their hardware from; from a quick read it looks like they are SoC designers, which to my understanding means they still needed to get their IP from somewhere.


I was hoping someone with actual technical knowledge could fill the void of my lack of knowledge!

The highs and lows, knowns and unknowns!

Edit: and DIO has entered the room. Someone with actual technical knowledge and expertise; so I’ll listen to him.
 
  • Like
  • Love
  • Fire
Reactions: 22 users

Dhm

Regular
Sorry....off piste, but I just witnessed a new Australian tennis star win just now: Alexei Popyrin. Amazing game, amazingly humble.
 
  • Like
  • Fire
  • Love
Reactions: 16 users

Getupthere

Regular
 
  • Like
Reactions: 3 users
Yes, I agree. I did see that it was software whereas they were promoting on-chip hardware, but I was a bit exuberant at the start and hadn't looked into it as closely as I should have before I started celebrating, so I took a more realistic approach in my post so as not to raise expectations. Ultimately we still haven't got an affirming announcement, even via the media if not the ASX.

I see Bravo has now probably found the answer with iCatch. Although, again, I'm not sure where iCatch has gotten their hardware from; from a quick read it looks like they are SoC designers, which to my understanding means they still needed to get their IP from somewhere.


I was hoping someone with actual technical knowledge could fill the void of my lack of knowledge!

The highs and lows, knowns and unknowns!

Edit: and DIO has entered the room. Someone with actual technical knowledge and expertise; so I’ll listen to him.
I had another listen to the Brainchip Prophesee podcast today.

Apologies for the rough transcript copying:
25:32 Rob: "...continues to evolve, that's for sure. So let's talk about BrainChip and Prophesee. I remember when we had you here in Laguna Hills at our facility, and we started to show you some of the demos that we were doing with some of your systems, and the time frame it took us, and the excitement on Christoph's face: 'Whoa, this is great.' We're building upon that and we're going to continue driving that forward: the closeness between the Prophesee technology and what you guys have done, and leveraging BrainChip's neural network accelerator, Akida, with what we're doing. So let's talk about that, let's talk about how they complement and enhance the performance of your products, and just tell us a little bit about that."

26:20 Luca: "Yeah, so I must say that we are very excited about this collaboration with BrainChip, because we are natural partners; our technologies are very complementary. From the very beginning, when we started Prophesee with Christoph, we knew that we were actually building half of the story, in fact, because the retina per se is an extension of the brain that is doing this fantastic job of pre-selecting information, only sending what is relevant for the decision; but then the brain is doing the rest, actually processing these events and then taking the decision. And today, what we are doing at Prophesee is, from the beginning we have been interfacing our sensor with conventional compute platforms based on conventional architectures that are today optimized for frame-based types of data, and therefore we have been facing, and are still facing, some integration challenges that are sometimes also impacting, to some extent, the performance of the sensor itself; you need to make some trade-offs. So now, combining our human-inspired sensor with BrainChip's human-inspired processing platform, which is by design, like our sensor, conceived to process the sparse, asynchronous and fast data that are naturally generated by our sensor, the half of the story we were missing from the beginning with Christoph is now complete. Now we can tell a full story to our customers. It is extremely powerful, because all the intrinsic benefits of this human-inspired technology, from the acquisition to the processing to the decision, are now possible. So now we can really push the level of performance, in terms of speed and in terms of efficiency, to levels that are unprecedented in the industry."

28:40 Rob: "Yeah, absolutely correct, and this is a really key point for all of our listeners."

This is a key exchange between Luca and Rob, even without disclosing any confidential information. My interpretation of this conversation is that Prophesee would be CRAZY to use anything other than Akida as much as possible IF:
- They don't want lesser performance
- They don't want the trade-offs that come with using a different technology to Akida
- They don't want continued integration challenges
- They don't want to stay at the already-precedented levels of performance in the industry
I recommend giving the podcast a listen (or another listen) to all who haven't already! My favourite podcast BrainChip has done to date.
 
  • Like
  • Love
  • Fire
Reactions: 51 users

Adam

Regular
Hey GUYS

I'M BACK. Long story, and I have so much catching up to do. But all is well now and I'm back :) If someone can summarise in a nutshell what I missed :cool:
We're all here. Everyone is doing some great dot-joining to 5 million companies. There are some downrampers, but, as an LTH, I'm going to wait till the NDAs are signed off and we see a formal statement from BrainChip; then we buy the champagne. Here endeth the sermon.
 
  • Like
  • Fire
Reactions: 11 users

Getupthere

Regular


Consumer Products Could Soon Feature Neuromorphic Sensing​

Article by Sally Ward-Foxton

(Image: Prophesee and Sony sensor)

In this interview with Prophesee CEO Luca Verre, we discuss Prophesee's approach to commercializing its retina-inspired sensors and where the technology will go from here.
What does “neuromorphic” mean today?
“You will get 10 different answers from 10 different people,” laughed Luca Verre, CEO of Prophesee. “As companies take the step from ‘this is what we believe’ to ‘how can we make this a reality,’ what neuromorphic means will change.”
Most companies doing neuromorphic sensing and computing have a similar vision in mind, he said, but implementations and strategies will be different based on varying product, market, and investment constraints.
“The reason why… all these companies are working [on neuromorphic technologies] is because there is a fundamental belief that the biological model has superior characteristics compared to the conventional,” he said. “People make different assumptions on product, on system integration, on business opportunities, and they make different implementations… But fundamentally, the belief is the same.”
Prophesee CEO Luca Verre (Source: Prophesee)
Verre’s vision is that neuromorphic technologies can bring technology closer to human beings, which ultimately makes for a more immersive experience and allows technologies such as autonomous driving and augmented reality to be adopted faster.
“When people understand the technology behind it is closer to the way we work, and fundamentally natural, this is an incredible source of reassurance,” he said.
Which markets first?
Prophesee is already several years into its mission to commercialize the event–based camera using its proprietary dynamic vision sensor technology. The company has collaborated with camera leader Sony to make a compact, high–resolution event–based camera module, the IMX 636. In this module, the photodiode layer is stacked directly on top of the CMOS layer using Sony’s 3D die stacking process.
According to Verre, the sector closest to commercial adoption of this technology is industrial machine vision.
“Industrial is a leading segment today because historically we pushed our third–generation camera into this segment, which was a bigger sensor and more tuned for this type of application,” he said. “Industrial has historically been a very active machine vision segment, in fact, it is probably one of the segments that adopted the CCD and CMOS technologies at the very beginning… definitely a key market.”
The Prophesee and Sony IMX 636 is a fourth-generation product. Prophesee said future generations will reduce pixel pitch and ease integration with conventional computing platforms (Source: Prophesee)
The second key market for the IMX 636 is consumer technologies, driven by the shrink in size enabled by Sony’s die–stacking process. Consumer applications include IoT cameras, surveillance cameras, action cameras, drones, consumer robots, and even smartphones. In many cases, the event–based camera is used alongside a full–frame camera, detecting motion so that image processing can be applied to capture better quality images, even when the subject is moving.
“The reason is very simple: event–based cameras are great to understand motion,” he said. “This is what they are meant for. Frame–based cameras are more suited to understanding static information. The combination of the dynamic information from an event–based camera and static information from a frame–based camera is complementary if you want to capture a picture or video in a scene where there’s something moving.”
Event data can be combined with full–frame images to correct any blur on the frame, especially for action cameras and surveillance cameras.
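
As a very rough illustration of how that combination might look in code, here is a toy sketch of one possible fusion step (entirely a constructed example, not Prophesee's or any vendor's actual deblurring pipeline; the event format, window length and threshold are assumptions): recent events are binned per pixel and used as a motion mask, so that the expensive correction is applied only where the event camera says something actually moved.

```python
import numpy as np

def motion_mask_from_events(events, shape, window, now, min_events=2):
    """Count recent events per pixel and flag pixels as 'moving' if enough
    events landed there inside the exposure window.
    events: iterable of (t, x, y, polarity); shape: (H, W) of the frame camera.
    Returns a boolean mask a downstream deblur step could restrict itself to."""
    counts = np.zeros(shape, dtype=np.int32)
    for t, x, y, _ in events:
        if now - window <= t <= now:
            counts[y, x] += 1
    return counts >= min_events

# Hypothetical usage: only the region the events flag as moving would get the
# expensive correction; the static rest of the frame is left untouched.
H, W = 64, 64
frame = np.random.rand(H, W)                                 # stand-in for the captured frame
events = [(t, 10 + i, 25, 1) for t in (8, 9) for i in range(20)]  # a burst along one row
mask = motion_mask_from_events(events, (H, W), window=3, now=10)
print("pixels flagged as moving:", int(mask.sum()), "of", frame.size)
```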
“We clearly see some traction in this in this area, which of course is very promising because the volume typically associated with this application can be quite substantial compared to industrial vision,” he said.
Prophesee is also working with a customer on automotive driver monitoring solutions, where Verre said event–based cameras bring advantages in terms of low light performance, sensitivity, and fast detection. Applications here include eye blinking detection, tracking or face tracking, and micro–expression detection.
Approach to commercialization
Prophesee's EVK4 evaluation kit (Source: Prophesee)
Prophesee has been working hard on driving commercialization of event–based cameras. The company recently released a new evaluation kit (EVK4) for the IMX 636. This kit is designed for industrial vision with a rugged housing but will work for all applications (Verre said several hundred of these kits have been sold). The company’s Metavision SDK for event–based vision has also recently been open–sourced in order to reduce friction in the adoption of event–based technology. The Metavision community has around 5,000 registered members today.
“The EDK is a great tool to further push and evangelize the technology, and it comes in a very typical form factor,” Verre said. “The SDK hides the perception of complexity that every engineer or researcher may have when testing or exploring a new technology… Think about engineers that have been working for a couple of decades on processing images that now see events… they don’t want to be stretched too much out of their comfort zone.”
New to the Metavision SDK is a simulator to convert full frames into events to help designers transition between the way they work today and the event domain. Noting a reluctance of some designers to move away from full frames, Verre said the simulator is intended to show them there’s nothing magic about events.
“[Events are] just a way of capturing information from the scene that contains much more temporal precision compared to images, and is actually much more relevant, because typically you get only what is changing,” he said.
How Prophesee's event-based cameras work (Source: Prophesee)
The simulator can also reconstruct image full frames from event data, which he says people find reassuring.
“The majority of customers don’t pose this challenge any longer because they understand that they need to see from a different perspective, similar to when they use technology like time of flight or ultrasound,” he said. “The challenge is when their perception is that this is another image sensor… for this category of customer, we made this tool that can show them the way to transition stepwise to this new sensing modality… it is a mindset shift that may take some time, but it will come.”
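
To make the "reconstruct frames from events" direction concrete, here is a minimal sketch under stated simplifying assumptions (a fixed per-event contrast step and a known reference frame; the Metavision SDK's actual simulator is certainly more sophisticated than this): each event nudges its pixel up or down by one contrast step in log-intensity space, and the image is then mapped back to linear intensity.

```python
import numpy as np

def events_to_frame(events, base_frame, contrast=0.2, eps=1e-3):
    """Toy reconstruction: starting from a reference frame, integrate each
    event's polarity as a +/- `contrast` step in log-intensity space, then
    exponentiate back and clip to the valid intensity range."""
    log_img = np.log(base_frame.astype(np.float64) + eps)
    for _, x, y, polarity in sorted(events):     # apply events in time order
        log_img[y, x] += polarity * contrast
    return np.clip(np.exp(log_img) - eps, 0.0, 1.0)

# Hypothetical round trip: a pixel that fired three positive events ends up
# roughly e^(3 * 0.2), i.e. about 1.8x brighter than in the reference frame.
base = np.full((32, 32), 0.2)
events = [(t, 5, 5, +1) for t in range(3)]
recon = events_to_frame(events, base)
print(round(float(recon[5, 5] / base[5, 5]), 2))
```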
Applications realized in the Prophesee developer community include restoring some sight for the blind, detecting and classifying contaminants in medical samples, particle tracking in research, robotic touch sensors, and tracking space debris.
Hardware roadmap
In terms of roadmap, Prophesee plans to continue development of both hardware and software, alongside new evaluation kits, development kits, and reference designs. This may include system reference designs which combine Prophesee sensors with specially developed processors. For example, Prophesee partner iCatch has developed an AI vision processor SoC that interfaces natively with the IMX 636 and features an on-chip event decoder. Japanese AI core provider DMP is also working with Prophesee on an FPGA-based system, and there are more partnerships in the works, said Verre.
“We see that there is growing interest from ecosystem partners at the SoC level, but also the software level, that are interested in building new solutions based on Prophesee technology,” he said. “This type of asset is important for the community, because it is another step towards the full solution — they can get the sensor, camera, computing platform, and software to develop an entire solution.”
The evolution of Prophesee's event-based vision sensors (Source: Prophesee)
Where does event–based sensor hardware go from here? Verre cited two key directions the technology will move in. The first is further reduction of pixel size (pixel pitch) and overall reduction of the sensor to make it suitable for compact consumer applications such as wearables. The second is facilitating the integration of event–based sensing with conventional SoC platforms.
Working with computing companies will be critically important to ensure next–generation sensors natively embed the capability to interface with the computing platform, which simplifies the task at the system level. The result will be smarter sensors, with added intelligence at the sensor level.
“We think events make sense, so let’s do more pre-processing inside the sensor itself, because it’s where you can make the least compromise,” Verre said. “The closer you get to the acquisition of the information, the better off you are in terms of efficiency and low latency. You also avoid the need to encode and transmit the data. So this is something that we are pursuing.”
As foundries continue to make progress in the 3D stacking process, stacking in two or even three layers using the most advanced CMOS processes can help bring more intelligence down to the pixel level.
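
For a flavour of what "pre-processing inside the sensor" can mean in practice, here is a toy example of one commonly used step, background-activity noise filtering (an illustrative sketch only, not a description of Prophesee's on-chip logic; the neighbourhood size and time window are assumptions): an event is passed on only if a neighbouring pixel also fired recently, so isolated noise events never leave the sensor.

```python
import numpy as np

def neighbourhood_filter(events, shape, dt=2, radius=1):
    """Drop 'lonely' events that have no neighbouring event within `radius`
    pixels in the last `dt` time units, a simple way to suppress
    background-activity noise before data is transmitted off-chip."""
    last_seen = np.full(shape, -10**9)        # timestamp of the latest event per pixel
    kept = []
    for t, x, y, p in sorted(events):
        y0, y1 = max(0, y - radius), min(shape[0], y + radius + 1)
        x0, x1 = max(0, x - radius), min(shape[1], x + radius + 1)
        if (t - last_seen[y0:y1, x0:x1] <= dt).any():
            kept.append((t, x, y, p))         # a neighbour fired recently: keep it
        last_seen[y, x] = t                   # record this event either way
    return kept

# The second event of a correlated burst survives; the isolated one does not.
events = [(0, 10, 10, 1), (1, 11, 10, 1), (5, 50, 3, 1)]
print(len(neighbourhood_filter(events, (64, 64))))   # -> 1
```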
How much intelligence in the pixel is the right amount?
Verre said it’s a compromise between increasing the cost of silicon and having sufficient intelligence to make sure the interface with conventional computing platforms is good enough.
“Sensors don’t typically use advanced process nodes, 28nm or 22nm at most,” he said. “Mainstream SoCs use 12nm, 7nm, 5nm, and below, so they’re on technology nodes that can compress the digital component extremely well. The size versus cost equation means at a certain point it’s more efficient, more economical [to put the intelligence] in the SoC.”
A selection of applications for Prophesee event-based cameras (Source: Prophesee)
There is also a certain synergy to combining event–based sensors with neuromorphic computing architectures.
“The ultimate goal of neuromorphic technology is to have both the sensing and processing neuromorphic or event-based, but we are not yet there in terms of maturity of this type of solution,” he said. “We are very active in this area to prepare for the future — we are working with Intel, SynSense, and other partners in this area — but in the short term, the mainstream market is occupied by conventional SoC platforms.”
Prophesee’s approach here is pragmatic. Verre said the company’s aim is to try to minimize any compromises to deliver benefits that are superior to conventional solutions.
“Ultimately we believe that events should naturally stream asynchronously to a compute architecture that is also asynchronous in order to benefit fully in terms of latency and power,” he said. “But we need to be pragmatic and stage this evolution, and really capitalize on the existing platforms out there and work with key partners in this space that are willing to invest in software–hardware developments and to optimize certain solution for certain markets.”
This article was originally published on EE Times.
Sally Ward-Foxton covers AI technology and related issues for EETimes.com and all aspects of the European industry for EE Times Europe magazine. Sally has spent more than 15 years writing about the electronics industry from London, UK. She has written for Electronic Design, ECN, Electronic Specifier: Design, Components in Electronics, and many more. She holds a Masters’ degree in Electrical and Electronic Engineering from the University of Cambridge.
 
  • Like
  • Fire
  • Love
Reactions: 18 users

Earlyrelease

Regular
Well, my takeaway from today must be the definition of the employment term 'hybrid'.
I look forward to applying this to my retirement plans. Thus, at a $10 SP, I will swap between sightseeing and indulging in fine food and drinks across a variety of workplaces, aka locations which may or may not be restricted to WA, hopefully consisting of palm trees, beaches and stunning scenery.
🌴🛩️🛳️🏕️⛩️🛤️🕌
 
  • Like
  • Love
  • Fire
Reactions: 19 users