Thank you!
Is this what you are looking for? Not sure if someone already sorted you out.
Hang on everyone! Put the cork back in the Bollinger.
I found this article from April 2022, which discusses Prophesee's collaboration with iCatch on a new event-based imaging sensor. It repeats the same info as Qualcomm's presentation as well as the slide I posted earlier. It states: "From a hardware perspective, the new sensor appears to be very impressive, offering specs such as a >10k fps time-resolution equivalent, >120 dB dynamic range, and a 3nW/event power consumption. From a compute perspective, Metavision promises anywhere from 10 to 1000x less data than frame-rate-based solutions, with a throughput of >1000 obj/s, and a motion period irregularity detection of 1%."
Event-based Vision Sensor—“Metavision”—Promises Impressive Specs
April 20, 2022 by Jake Hertz
PROPHESEE, along with iCatch, has teamed up to provide the industry with the "world's first event-based vision sensor" in a 13 x 15 mm mini PBGA package.
As computer vision applications gain more and more momentum, companies are continually investing in new ways to improve the technology. At the same time, as pure imaging improves, power efficiency and data management become significant challenges on the hardware level.
An example computer vision block diagram. Image used courtesy of Cojbasic et al
One proposed solution to this challenge is ditching conventional imaging techniques in favor of event-based vision. Aiming to capitalize on this type of technology, this week, PROPHESEE, in collaboration with iCatch, released a new event-based vision sensor that boasts some impressive specs.
This article will discuss the concept of event-based vision, the benefits it offers, and dissect PROPHESEE’s newest offering.
Challenges in Conventional Vision
One of the significant challenges in imaging systems is that, as they improve by conventional measures, they tend to put more stress on the hardware. Notably, as resolution and field of view increase, so does the amount of raw data the camera produces.
While this may be a positive thing in terms of imaging quality, it creates a plethora of challenges for supporting hardware.
An example block diagram of a general image sensor. Image used courtesy of Microsoft and LiKamWa et al
This increase in data traffic can have the harmful effect of placing an increased burden on computing resources, which now need to be able to process more data at faster speeds to maintain real-time operation. On top of this, conventional imaging systems work by applying the same frame rate to all objects in the scene. The result is that moving objects may end up being undersampled, and the important data in a scene can end up being lost.
When applied to machine learning, this increase in data traffic equals higher latency and more power consumption needed to complete a task. At the same time, much of the data being processed may not even be the essential information within a scene—further adding to the wasted energy and latency of the system.
These problems become even more concerning when coupled with the increasing demand for low power, low latency systems.
Solutions With Event-Based Vision?
In an attempt to alleviate these issues, one promising solution is event-based vision.
Event-based vision (right) aims to remove redundant information from conventional vision (left) systems. Image used courtesy of PROPHESEE
The concept of event-based vision rejects traditional frame-based imaging approaches, where every pixel reports back everything it sees at all times.
Instead, event-based sensing relies on each pixel to report what it sees only if it senses a significant change in its field of view. By only producing data when an event occurs, event-based sensing significantly reduces the raw amount of data created by imaging systems while also ensuring that the produced data is full of useful information.
Overall, the direct result of this type of sensing technology is that machine learning algorithms have to process less data, meaning lower power consumption and lower latency overall.
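To make the idea concrete, here is a toy sketch of the concept (my own illustration, not Prophesee's implementation): only pixels whose intensity changes by more than a threshold between two moments produce any output, so a mostly static scene yields almost no data.

```python
import numpy as np

def frame_to_events(prev, curr, threshold=0.2):
    """Emit (y, x, polarity) events only for pixels whose intensity
    changed by more than `threshold` -- a toy model of an event sensor."""
    diff = curr.astype(float) - prev.astype(float)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return list(zip(ys.tolist(), xs.tolist(), polarity.tolist()))

# A mostly static 100x100 scene with one small object appearing:
prev = np.zeros((100, 100))
curr = prev.copy()
curr[40:44, 50:54] = 1.0  # 16 pixels change

events = frame_to_events(prev, curr)
print(len(events))  # 16 events, versus 10,000 pixels for a full frame
```

A frame-based camera would read out all 10,000 pixels regardless; the event model above reports only the 16 that changed, which is the data reduction the article is describing.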
The Metavision Sensor
This week, PROPHESEE, in collaboration with iCatch, announced the release of its brand new event-based imaging sensor.
Dubbed the "Metavision sensor," the new IC leverages specialized pixels that respond only to changes in their field of view, activating themselves independently when triggered by events. While not an entirely novel technology, PROPHESEE claims that Metavision is the world's first event-based vision sensor available in an industry-standard package, coming in a 13 x 15 mm mini PBGA package.
The new Metavision sensor. Image used courtesy of PROPHESEE
From a hardware perspective, the new sensor appears to be very impressive, offering specs such as a >10k fps time-resolution equivalent, >120 dB dynamic range, and a 3nW/event power consumption.
From a compute perspective, Metavision promises anywhere from 10 to 1000x less data than frame-rate-based solutions, with a throughput of >1000 obj/s, and a motion period irregularity detection of 1%.
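As a rough back-of-envelope illustration of what "10 to 1000x less data" could mean in practice (the camera numbers below are assumptions for illustration, not figures from the datasheet):

```python
# Rough data-rate comparison (illustrative numbers, not from the datasheet):
frame_rate = 30             # fps assumed for a conventional camera
width, height = 1280, 720   # assumed 720p resolution
bytes_per_pixel = 1         # assumed 8-bit mono

# Conventional camera: every pixel, every frame.
frame_bps = frame_rate * width * height * bytes_per_pixel  # bytes/s

# If the event sensor produced, say, 100x less data (mid-range of the
# quoted 10-1000x figure):
event_bps = frame_bps / 100

print(frame_bps)  # 27648000 bytes/s (~27.6 MB/s)
print(event_bps)  # 276480.0 bytes/s (~276 kB/s)
```

Even at the conservative end of the quoted range, that is the difference between streaming tens of megabytes per second into a downstream processor and streaming a fraction of that.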
Push for More Event-based Vision
With Metavision, PROPHESEE and iCatch appear to have brought an exciting and promising new technology to an industry-standard format, making it more accessible for engineers everywhere.
Thanks to this, the companies are hopeful that event-based vision could start to permeate into the industry and bring its benefits along with it.
But he talks about hardware, and Akida is hardware, isn't it?
“…this is happening in hardware in real time in 60fps and can be shot with video or photo so pretty cool stuff and you can only do this in hardware”
I had another listen to the Brainchip Prophesee podcast today.

Yes, I agree, and I did see that was software whereas they were promoting on-chip hardware. But I was a bit exuberant at the start and hadn't looked into it as closely as I should have before I started celebrating, so I was taking a bit more of a realistic approach in my post so as not to raise expectations. Ultimately we still haven't got an affirming announcement, even via media if not the ASX.
I see Bravo has probably now found the answer with iCatch. Although again I'm not sure where iCatch has gotten their hardware from, as from a quick read they look to be SoC designers, and my understanding is they still needed to get their IP from somewhere.
iCatch and Prophesee to develop AI vision processor
iCatch and Prophesee collaborated on development of AI vision processor natively compatible with Prophesee Event-based Metavision® sensing technologies.
www.prophesee.ai
I was hoping someone with actual technical knowledge could fill the void of my lack of knowledge!
The highs and lows, knowns and unknowns!
Edit: and DIO has entered the room. Someone with actual technical knowledge and expertise; so I’ll listen to him.
We're all here. Everyone is doing great dot-joining to 5 million companies. Some downrampers, but, as a LTH, I'm gonna wait till the NDAs are signed off and we see a formal statement from Brainchip; then we buy the champagne. Here endeth the sermon.

Hey guys, I'M BACK. Long story, and I have so much catching up to do. But all is well now and I'm back. If someone can summarise in a nutshell what I missed...
Hi Dio, luca told me back in October 2022 that the relationship between Prophesee & Brainchip started back in 2021, despite the official announcement in June 2022.

Put your happy pants back on - April 2022 was before June 2022 if I remember correctly:-
https://www.prophesee.ai/2022/06/20/brainchip-partners-with-prophesee/
BrainChip Partners with Prophesee Optimizing Computer Vision AI Performance and Efficiency
Laguna Hills, Calif. – June 14, 2022 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of neuromorphic AI IP, and Prophesee, the inventor of the world’s most advanced neuromorphic vision systems, today announced a technology partnership that delivers next-generation platforms for OEMs looking to integrate event-based vision systems with high levels of AI performance coupled with ultra-low power technologies.
Absolutely Dhm.... What a game ... Great lad, looking forward to his next match.

Sorry, off piste, but I just witnessed a new Australian tennis star win just then: Alexei Popyrin. Amazing game, amazingly humble.
Thanks Jesse.

Hi Dio, luca told me back in October 2022 that the relationship between Prophesee & Brainchip started back in 2021, despite the official announcement in June 2022.
View attachment 27512
Dear Dodgy-knees,
I just took my PJ's off and put my happy pants back on and I also took the cork back out of the champers just in case! Here are two extracts from an article dated Thursday 6 October 2022 which are leading me to think, hope and pray that iCatch may have incorporated Akida. If @Stable Genius is right when he said iCatch are "SOC designers so my understanding is they still needed to get their IP from somewhere", then where else would they be getting their IP that can perform all of the bits and bobs outlined in the two extracts below? See the link at the bottom of the post for the full article.
I mean, does Sony Semiconductor Solutions have its own AI Deep Learning Accelerator? Can it protect the driver's privacy, monitor their attentiveness and their health status through multi-sensor fusion and AI edge computing and even take control over the car when needed?
View attachment 27513
View attachment 27514
I don't want to get too excited. Heck, who am I kidding? I want to get REALLY, REALLY EXCITED!!!!!
iCatch Technology provides driver monitoring system (DMS) solutions
As electric vehicles and ADAS (Advanced Driver Assistance Systems) continue to penetrate the market, iCatch Technology has also been investing lots of resources in imaging-based ADAS technology and business development. How do you ensure that the driver is still paying attention to the road...
www.digitimes.com
Hi Bravo,
By all means keep the champers flowing and keep your happy pants on, but I'm afraid @Stable Genius is leading you up the garden path.
iCatch have a very complex chip, the V57 AI processor (see above), which includes their proprietary NPU.
Unfortunately for them, they have used a design from the last millennium for their NPU.
Bottoms up!
View attachment 27525
No. I thought I was wrong once, but I was mistaken.

Thanks @Diogenese.
I give up. Going back to shutting my mouth, reading and learning for a while. I’m out of my depth!