Hmmmm ...
Luca Verre on LinkedIn: #prophesee #amd #eventbasedvision #fpga #ai #computervision
Exciting News! 🚀 Prophesee is thrilled to announce our collaboration with AMD, a leading provider of computer processors, FPGA, and related technologies for business and consumer markets. Together, we are delivering the industry-first Prophesee Event-Based Vision solution on the AMD Kria FPGA... (www.linkedin.com)
AMD seems pretty wedded to NNs with MACs:
US11921784B2 Flexible, scalable graph-processing accelerator 20211229
[0070] FIG. 4B illustrates an embodiment of a systolic array for performing matrix multiplication operations, in the context of a neural network computation, on a set of weights (stored in the A-Matrix buffer 411) and activations (stored in the B-Matrix buffer 412), with the results stored in the C-Matrix buffer 413. Each of the multiply/accumulate units (MAC4) in the systolic array 414 receives four inputs from the A-Matrix 411 and four inputs from the B-Matrix 412; thus, the array 414 of MAC4 units receives a total of eight inputs from the A-Matrix 411 and eight inputs from the B-Matrix 412, or eight pairs of operands. In one embodiment, zero detection logic is incorporated into the MAC4 units to apply the approach illustrated in FIG. 4A. For multiplying sufficiently sparse data, no more than four of the eight pairs of operands will include two nonzero values in most cases. Thus, the four MAC4 units are usually sufficient to perform the computations for the eight pairs of operands in a single cycle. The MAC4 units compute the products for the nonzero pairs of operands, and the multiply results for pairs that have at least one zero operand are set to zero.
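In plain terms, the zero-skipping trick in [0070] is: detect operand pairs where either value is zero, route only the fully nonzero pairs to the four multipliers, and write zeros for the rest. A rough software sketch of that behaviour (the function name and cycle accounting are mine, not from the patent):

```python
# Toy model of the zero-skipping MAC4 idea in paragraph [0070]: eight operand
# pairs are presented per step, but only pairs with two nonzero values occupy
# one of the four multipliers; everything else is written as zero for free.
# The function name and cycle accounting are illustrative, not AMD's RTL.

def mac4_step(a_ops, b_ops):
    """Return (products, cycles) for eight (a, b) operand pairs.

    Pairs containing a zero are resolved to 0 without using a multiplier;
    nonzero pairs are packed onto the four MAC4 multipliers, spilling into an
    extra cycle only if more than four of the eight pairs are fully nonzero.
    """
    assert len(a_ops) == len(b_ops) == 8
    products = [0] * 8
    nonzero_idx = [i for i in range(8) if a_ops[i] != 0 and b_ops[i] != 0]

    for i in nonzero_idx:
        products[i] = a_ops[i] * b_ops[i]

    # 4 multipliers per cycle; sufficiently sparse data fits in a single cycle.
    cycles = max(1, -(-len(nonzero_idx) // 4))  # ceiling division
    return products, cycles

# Sparse example: only three of the eight pairs are fully nonzero -> 1 cycle.
a = [1, 0, 3, 0, 5, 0, 7, 0]
b = [2, 9, 0, 9, 6, 9, 8, 9]
print(mac4_step(a, b))  # ([2, 0, 0, 0, 30, 0, 56, 0], 1)
```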
The Prophesee GenX320 is the reduced-resolution version, adapted for use with lower-end processors.
Prophesee Reinvents DVS Camera For AIoT Applications - EE Times
Dynamic vision sensor (DVS) company Prophesee has resized its event-based camera into a form factor that suits always-on, AIoT (artificial intelligence of things) devices. Compared with the company’s 1-million-pixel fourth-generation sensor, the fifth generation, GenX320, has reduced its resolution to 320 × 320 pixels, reduced die size to 13 mm², and now includes standard data formats for easy downstream processing and a hierarchy of ultra-low power modes that can be used to wake up other parts of the system, Prophesee CEO Luca Verre told EE Times.
...
Prophesee’s previous-generation sensor, designed in partnership with Sony, went to mass production at the end of 2021. This HD sensor is a BSI (back-side illuminated), 3D-stacked chip small enough to target consumer electronics for the first time.
Prophesee has been working with Qualcomm to demonstrate this sensor with Snapdragon mobile processors. Verre said at least two OEMs are building smartphones for 2025 release with the combination of Snapdragon and Prophesee included. The use case here is to fuse full-frame and event data to improve smartphone camera image quality. Motion blur can be reduced using the Prophesee sensor to detect movement in the photo and then using Prophesee’s computational photography algorithms to correct for it.
GenX320, with its lower resolution, lower power consumption and smaller size, will target AR/VR headsets, security and monitoring/detection systems, touchless displays, eye tracking, and other always-on IoT devices. However, while still a BSI 3D-stacked design, this sensor isn’t developed with Sony, and will be manufactured in a different foundry.
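On the motion-blur use case mentioned in the EE Times excerpt above: one very reduced way to picture the frame + event fusion is to count events per pixel during the frame's exposure and use that as a motion map telling the deblurring step where to work. A toy sketch under that assumption (not Prophesee's actual computational photography algorithm; the event tuple layout and the threshold are illustrative):

```python
# Toy illustration of fusing an event stream with a frame exposure: events
# report per-pixel brightness changes with fine timestamps, so counting the
# events that fall inside the exposure window gives a cheap per-pixel motion
# map that can gate where deblurring effort is spent.
import numpy as np

def motion_mask(events, exposure_start_us, exposure_end_us, height, width, threshold=3):
    """Build a boolean mask of pixels that changed during the frame exposure.

    `events` is an iterable of (x, y, timestamp_us, polarity) tuples, the usual
    shape of event-camera output; the names and threshold are assumptions here.
    """
    counts = np.zeros((height, width), dtype=np.int32)
    for x, y, t, _pol in events:
        if exposure_start_us <= t <= exposure_end_us:
            counts[y, x] += 1
    return counts >= threshold  # True where enough change happened to suspect blur

# Usage: deblur only the masked regions, leave static areas of the frame untouched.
events = [(10, 20, 1_000, 1), (10, 20, 1_500, -1), (10, 20, 1_900, 1), (50, 60, 1_200, 1)]
mask = motion_mask(events, 0, 2_000, height=240, width=320)
print(mask[20, 10], mask[60, 50])  # True False
```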
Maybe Prophesee could relegate its use of Akida, if any, to high-precision applications. I don't know whether that would include the high-resolution 3D-stacked pixel array developed with Sony, because Sony has its own in-house analog NNs (US2022020757A1 SEMICONDUCTOR STORAGE DEVICE AND NEURAL NETWORK DEVICE 20181218) and Qualcomm has Snapdragon Hexagon - but then there's TeNNs/ViT.