Fullmoonfever
However, in a Secret Santa, some more interesting news / developments.
Merry Christmas, happy holidays all.
Be safe and enjoy the time with loved ones, friends, fam and doing what makes you happy however you celebrate this time.
An Oct French article on the X320 that I needed to work around via VPN, private tab etc. to get to read with an English translation.
Didn't recall seeing before.
Hopefully the link may work.
Highlighted a comment by our mate Luca that says, imo, essentially that the neuromorphic processor side via BRN or Intel or Synsense is not happening at the mo as we're not mainstream enough yet.
Unlike the Sony, Qualcomm, Renesas, AMD of the world. So unless we snuck in there via Renesas, can probs park this one for the mo unfortunately imo until we get a bit more traction in the mkt.
Hopefully wrong and the comments were lost in translation
Prophesee part à la conquête de l'IoT et de la réalité immersive avec son nouveau capteur événementiel GenX320 – www.usinenouvelle.com
Prophesee sets out to conquer the IoT and immersive reality with its new GenX320 event sensor
The Parisian deeptech Prophesee unveils, this Monday, October 16, 2023, its new bio-inspired sensor called GenX320, smaller than previous generations. It addresses the markets of connected objects and immersive reality headsets.
Frederic Monflier
16 October 2023, 14:00
© Prophesee
The new GenX320 sensor measures only 3x4 mm. It has a resolution of 320x320 pixels.
And that makes five for the French deeptech Prophesee: its GenX320 sensor, whose availability was announced on Monday, October 16, 2023, inaugurates the fifth generation of its bio-inspired, also called event-based, vision technology. "With a resolution of 320x320 pixels, it is the smallest event sensor developed in the world so far," says Luca Verre, CEO of Prophesee. It is intended for immersive reality headsets and connected objects.
Co-developed with Sony, the previous-generation sensor, referenced as IMX636 in the Japanese company's catalog and offering a resolution of 1280x720 pixels, is mainly aimed at industrial applications (fast counting, etc.). It should also find its way into consumer smartphones, as evidenced by the announcement of a partnership between Qualcomm and Prophesee in February 2023.
With its new model, the deeptech is therefore taking on a new challenge by addressing the Internet of Things and immersive reality. This time, it is handling the commercial, software and technical-support aspects on its own, while continuing to rely on the qualities of its neuromorphic technology.
Three years of R&D
A Prophesee sensor is indeed inspired by the human retina. Unlike an ordinary image sensor, it does not acquire images one after the other, but captures only the changes (called events) in the image, using independent and asynchronous photodetectors. The direct consequence: the capture speed is much higher, on the order of a microsecond, and the amount of data produced decreases significantly, which translates into energy savings.
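As an aside (my own illustration, not anything from Prophesee's SDK), an event stream of this kind might be represented as a list of (x, y, polarity, timestamp) records with microsecond timestamps; a minimal sketch:

```python
import numpy as np

# Hypothetical illustration of an event stream (not Prophesee's actual data format):
# each event is (x, y, polarity, timestamp_us). Only pixels whose brightness
# changes emit an event, so a static scene produces almost no data.
event_dtype = np.dtype([
    ("x", np.uint16),   # column, 0..319 for a 320x320 sensor
    ("y", np.uint16),   # row, 0..319
    ("p", np.int8),     # polarity: +1 brightness increase, -1 decrease
    ("t", np.uint64),   # timestamp in microseconds
])

# A toy burst of three events at microsecond resolution.
events = np.array(
    [(12, 40, 1, 1_000), (12, 41, -1, 1_007), (200, 99, 1, 1_850)],
    dtype=event_dtype,
)

print(events["t"])  # -> [1000 1007 1850]
```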
Connected objects, often running on batteries, and augmented/virtual reality headsets nevertheless impose additional constraints in terms of integration, cost and energy consumption. The deeptech has spared no effort. "We have been working for three years on the GenX320 to meet these needs," explains Luca Verre. R&D efforts that focused on three axes, he continues: "miniaturization, energy consumption and AI."
The GenX320 sensor measures only a fifth of an inch diagonally, i.e. a footprint of 3 mm x 4 mm. It is manufactured by stacking two silicon wafers, the first containing the photodetectors, the second the analog and digital processing circuits.
36 Microwatts in standby
This "stacking" technique had already made it possible to miniaturize the IMX636 compared with the three previous generations, which were never marketed. The difference is that Sony is not involved this time: the GenX320 is produced in a European foundry whose name Prophesee prefers not to disclose. Like its predecessors, its manufacture uses standard microelectronics technologies (CMOS) and its cost is in line with that of traditional image sensors. The first samples of the GenX320 have been available since the end of 2022.
Energy consumption, meanwhile, is only "36 microwatts in standby mode (always on), and reaches only 3 milliwatts at most when the sensor wakes up," specifies Luca Verre.
Finally comes the AI aspect. "Since a stream of events is very sparse and very fast, this type of data does not interface naturally with neural-network accelerator chips, which process sequential images," points out Luca Verre. "In this GenX320 sensor, we have added digital blocks that prepare this data for those accelerators at the output." This pre-processing makes it possible, for example, to accumulate events in the form of a histogram.
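To give a rough idea of what such pre-processing can look like (the article does not detail the on-chip digital blocks, so this is purely an assumed illustration), events can be binned into a per-pixel count map over a short time window that a frame-based accelerator can consume:

```python
import numpy as np

def events_to_histogram(events, width=320, height=320, t_start=0, t_window_us=1_000):
    """Accumulate events falling inside a time window into per-pixel counts.

    Returns a (2, height, width) array: one count map per polarity, which is
    one common way to hand sparse event data to a frame-based accelerator.
    Illustrative sketch only, not the GenX320's actual on-chip formatting.
    """
    hist = np.zeros((2, height, width), dtype=np.uint16)
    in_window = (events["t"] >= t_start) & (events["t"] < t_start + t_window_us)
    for ev in events[in_window]:
        channel = 0 if ev["p"] > 0 else 1   # separate ON and OFF events
        hist[channel, ev["y"], ev["x"]] += 1
    return hist

# Usage with the toy `events` array from the earlier sketch:
# hist = events_to_histogram(events, t_start=1_000)
# hist.sum() -> 3
```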
Synchronous or asynchronous operation
Speeding up the processing of event data directly could be done with neuromorphic chips such as those developed by BrainChip (Akida), Intel (Loihi) and many others, some of which are beginning their commercial careers.
The spiking neural networks (SNNs) that they execute are particularly well suited to asynchronous computation.
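As a toy illustration of why spiking networks suit asynchronous data (this sketch is my own and has nothing to do with Akida's or Loihi's actual neuron models), a leaky integrate-and-fire neuron only does work when an event arrives, decaying its state by the elapsed time instead of ticking on a frame clock:

```python
import math

class ToyLIFNeuron:
    """Toy leaky integrate-and-fire neuron, updated only when input events arrive.

    Illustrative only: real SNN hardware implements far more sophisticated
    neuron models and routing.
    """

    def __init__(self, tau_us=10_000.0, threshold=2.5):
        self.tau_us = tau_us        # membrane decay time constant (microseconds)
        self.threshold = threshold  # firing threshold
        self.potential = 0.0
        self.last_t = None          # timestamp of the last update

    def on_event(self, t_us, weight=1.0):
        """Integrate one input spike at time t_us; return True if the neuron fires."""
        if self.last_t is not None:
            # Decay the membrane potential for the time elapsed since the last event;
            # nothing is computed while no events arrive.
            self.potential *= math.exp(-(t_us - self.last_t) / self.tau_us)
        self.last_t = t_us
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False

# Usage: three closely spaced events push the neuron over threshold.
neuron = ToyLIFNeuron()
print([neuron.on_event(t) for t in (1_000, 1_007, 1_850)])  # -> [False, False, True]
```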
"That would be ideal," acknowledges Luca Verre. "But, although we collaborate with BrainChip, SynSense and Intel, their chips are not yet mainstream." Unlike the systems from Qualcomm, Renesas, AMD, etc., which are optimized to apply today's most common AI algorithms (including convolutions) to images from standard sensors.
"However, thanks to the GenX320, our customers have the choice between synchronous and asynchronous operating modes," Luca Verre qualifies. Synchronous mode reduces temporal accuracy to the millisecond range, "which remains powerful for IoT applications," says Luca Verre.
Tracking a driver's eye movements is one way of measuring their degree of fatigue. An event sensor is particularly interesting in this context because, since it does not capture any image, it does not compromise the person's privacy.
The deeptech cites the American company Zinn Labs, which uses the GenX320 sensor to perform gaze detection (eye tracking), a field in which Sweden's Tobii is one of the world's specialists. This technique, which is spreading in virtual reality headsets, makes it possible to concentrate the bulk of the 3D rendering computation where the user's gaze is directed (what is called foveated rendering).
"The tracking speed reaches 1 kilohertz, with a consumption of less than 20 milliwatts, whereas current solutions draw 1 or 2 watts and sampling is limited to 120 images per second to reduce the amount of data," argues Luca Verre. Such a high frequency is better suited to tracking the very fast angular movements of the eye. According to Prophesee, Zinn Labs has developed a prototype and commercialization could occur in 2024.
Another potential outlet: monitoring driver attention, which relies on counting the number of eye blinks per second, by the American company Xperi. This company appreciates the fact that an event sensor, since it does not capture any image, preserves the driver's privacy.
Designed by Prophesee, this module (black part at the top) carrying the GenX320 sensor interfaces easily with the popular STM32 microcontroller, which will facilitate application development.
In addition to its new sensor, Prophesee has designed modules in parallel that facilitate interconnection and application development with certain computing platforms. For example, a plug-and-play module hosting the GenX320 interfaces directly with the STMicroelectronics STM32 microcontroller, which is popular for embedded vision. The deeptech wants to give itself every chance of succeeding in its push into these new markets.
According to a paper released just a couple of days ago (not yet peer reviewed), it appears some senior researchers over at Ericsson have been playing with Akida: "for instance, to demonstrate the feasibility of AI-enabled ZE-IoT, we have developed a prototype of a solar-powered AI-enabled ZE-IoT camera device with neuromorphic computing."
My question would be: is this something off their own bat, or do we have a hand in the background somewhere as well?
Towards 6G Zero-Energy Internet of Things: Standards, Trends, and Recent Results - December 2023