rgupta
Can we draw some connection between Qualcomm and Akida since the Mercedes announcement? And as soon as they saw Prophesee is a partner of BRN, did they realise it is an opportunity for them?

Hi MrRomper,
We all want @Bravo to be right.
One issue is that this product could be tied up in the October 2021 Prophesee/SynSense agreement to produce a combined sensor/AI chip.
Our partnership with Prophesee dates from mid-2022. Luca Verre has been quoted as saying that the relationship with BrainChip was in its infancy.
Let's hope that the multi-year Prophesee/Qualcomm deal has room for Akida.
In the following, Prophesee mentions both Qualcomm and Sony together.
Camera chip startup Prophesee and Qualcomm sign multi-year deal | Reuters
February 28, 2023, 6:47 AM GMT+11
By Jane Lee
OAKLAND, Calif., Feb 27 (Reuters) - Paris-based startup Prophesee, a maker of camera chips inspired by the way the human eye works, said on Monday it has signed a multi-year deal with Qualcomm Inc (QCOM.O) to be used with the smartphone technology giant's product.
While today's camera chips continuously process the full frame of images, Prophesee's chip will only process changes in the scene, such as light or movement, which makes it faster and requires less computing power, said Luca Verre, co-founder and chief executive at Prophesee.
The technology works with pixels on the sensor that only send information to the processor when there is change, while pixels that perceive no change stay muted. There are a million pixels on Prophesee's latest chips.
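To make that concrete, here is a minimal Python sketch of the idea (my own illustration, not Prophesee's actual pipeline; a real event sensor is asynchronous at the pixel level, whereas this approximates events by differencing two frames):

```python
import numpy as np

def events_from_frames(prev_frame, curr_frame, threshold=0.1):
    """Emit (x, y, polarity) events only for pixels whose brightness
    changed by more than `threshold`; unchanged pixels stay silent."""
    diff = curr_frame.astype(float) - prev_frame.astype(float)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    # Polarity: +1 for brighter, -1 for darker, mimicking ON/OFF events.
    polarities = np.sign(diff[ys, xs]).astype(int)
    return list(zip(xs, ys, polarities))

# A static 1-megapixel scene produces zero events...
prev = np.zeros((1000, 1000))
curr = prev.copy()
print(len(events_from_frames(prev, curr)))  # 0

# ...while a small moving object yields only a handful of events.
curr[500:510, 500:510] = 1.0
print(len(events_from_frames(prev, curr)))  # 100
```

The point is that data volume scales with scene activity, not sensor resolution, which is where the speed and power savings come from.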
Manufacturing of the chip will be outsourced to Sony Group Corp (6758.T). "So we are really combining both key players in the space," said Verre, referring to both Qualcomm and Sony, without disclosing financial terms of the deal.
Verre said the Prophesee chip will be used in addition to conventional camera chips in a blueprint for smartphones that will be released this week at Mobile World Congress in Barcelona. Mass production of the chips is planned for next year when they would be integrated into phones, he said.
The additional Prophesee chip will help correct some of the blurry imagery in existing smartphone camera systems, said Verre.
DYNAP-CNN — the World’s First 1M Neuron, Event-Driven Neuromorphic AI Processor for Vision Processing | SynSense
Today we are announcing our new fully-asynchronous event-driven neuromorphic AI processor for ultra-low power, always-on, real-time applications. DYNAP-CNN opens brand-new possibilities for dynamic vision processing, bringing event-based vision applications to power-constrained devices for the...
www.synsense.ai
Computation in DYNAP-CNN is triggered directly by changes in the visual scene, without using a high-speed clock. Moving objects give rise to sequences of events, which are processed immediately by the processor. Since there is no notion of frames, DYNAP-CNN’s continuous computation enables ultra-low-latency of below 5ms. This represents at least a 10x improvement from the current deep learning solutions available in the market for real-time vision processing.
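As a toy illustration of why removing the frame clock cuts latency (my own sketch, not SynSense code): in a frame-based system an event must wait for the next frame boundary before it is even seen, while an event-driven processor reacts the moment the event occurs.

```python
FRAME_PERIOD = 1 / 30  # a 30 fps camera delivers data every ~33 ms

def frame_based_latency(event_time):
    """An event is only visible at the next frame boundary."""
    next_frame = ((event_time // FRAME_PERIOD) + 1) * FRAME_PERIOD
    return next_frame - event_time

def event_driven_latency(_event_time, processing_delay=0.002):
    """Each event is processed as soon as it occurs; latency is
    just the (sub-5 ms) processing time, with no frame clock."""
    return processing_delay

t = 0.010  # an object moves 10 ms into a frame interval
print(f"frame-based:  {frame_based_latency(t)*1000:.1f} ms")   # ~23.3 ms wait
print(f"event-driven: {event_driven_latency(t)*1000:.1f} ms")  # ~2.0 ms
```

The 2 ms processing delay is an assumed placeholder; the press release only claims "below 5ms", but the frame-boundary wait it avoids is real either way.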
SynSense and Prophesee develop one-chip event-based smart sensing solution
/2021/10/15/synsense-prophesee-neuromorphic-processing-and-sensing/
PROPHESEE | Metavision for Machines
REVEAL THE INVISIBLE with the world's most advanced neuromorphic vision system, inspired by human vision and built on the foundation of neuromorphic engineering. PROPHESEE gives Metavision to machines, revealing what was previously invisible to them.
www.prophesee.ai
Oct 15, 2021 – SynSense and Prophesee, two leading neuromorphic technology companies, today announced a partnership that will see the two companies leverage their respective expertise in sensing and processing to develop ultra-low-power solutions for implementing intelligence on the edge for event-based vision applications.
… The sensors facilitate machine vision by recording changes in the scene rather than recording the entire scene at regular intervals. Specific advantages over frame-based approaches include better low light response (<1lux) and dynamic range (>120dB), reduced data generation (10x-1000x less than conventional approaches) leading to lower transfer/processing requirements, and higher temporal resolution (microsecond time resolution, i.e. >10k images per second time resolution equivalent).
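As a rough sanity check on the 10x-1000x data-reduction claim, here is a back-of-the-envelope calculation; the activity fraction and bytes-per-event are my own illustrative assumptions, not figures from the release:

```python
# Rough data-rate comparison (illustrative assumptions, not measured figures).
pixels = 1_000_000          # 1 MP sensor, as in Prophesee's latest chips
fps = 30                    # conventional frame rate
bytes_per_pixel = 1         # 8-bit grayscale

frame_rate_bytes = pixels * fps * bytes_per_pixel            # 30 MB/s

active_fraction = 0.01      # assume ~1% of pixels see change at any moment
event_rate = pixels * active_fraction * fps                  # events/s (rough)
bytes_per_event = 8         # x, y, timestamp, polarity (format-dependent)
event_rate_bytes = event_rate * bytes_per_event              # 2.4 MB/s

print(f"frame-based: {frame_rate_bytes/1e6:.1f} MB/s")
print(f"event-based: {event_rate_bytes/1e6:.1f} MB/s")
print(f"reduction:   {frame_rate_bytes/event_rate_bytes:.0f}x")
```

Even with a fairly busy scene (1% of pixels active), this lands around a 12x reduction, at the low end of the quoted 10x-1000x range; a mostly static scene pushes it far higher.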
Just a silly thought, but it may still have merit?