BrainChip + Prophesee


Laguna Hills, Calif. – June 19, 2022 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of neuromorphic AI IP, and Prophesee, the inventor of the world’s most advanced neuromorphic vision systems, today announced a technology partnership that delivers next-generation platforms for OEMs looking to integrate event-based vision systems with high levels of AI performance coupled with ultra-low power technologies.

Inspired by human vision, Prophesee’s technology uses a patented sensor design and AI algorithms that mimic the eye and brain to reveal what has until now been invisible to standard frame-based technology. Prophesee’s computer vision systems open new potential in areas such as autonomous vehicles, industrial automation, IoT, security and surveillance, and AR/VR.

BrainChip’s first-to-market neuromorphic processor, Akida, mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Keeping AI/ML local to the chip, independent of the cloud, also dramatically reduces latency.

“We’ve successfully ported the data from Prophesee’s neuromorphic-based camera sensor to process inference on Akida with impressive performance,” said Anil Mankar, Co-Founder and CDO of BrainChip. “This combination of intelligent vision sensors with Akida’s ability to process data with unparalleled efficiency, precision and economy of energy at the point of acquisition truly advances state-of-the-art AI enablement and offers manufacturers a ready-to-implement solution.”

“By combining our Metavision solution with Akida-based IP, we are better able to deliver a complete high-performance and ultra-low power solution to OEMs looking to leverage edge-based visual technologies as part of their product offerings,” said Luca Verre, CEO and co-founder of Prophesee.



Event-Based Camera Chips Are Here, What’s Next?

Prophesee's CEO explains the future of sensors that only see changes

SAMUEL K. MOORE
12 OCT 2021
This month, Sony starts shipping its first high-resolution event-based camera chips. The two Sony chips—the 0.92 megapixel IMX636 and the smaller 0.33 megapixel IMX637—combine Prophesee's event-based circuits with Sony's 3D-chip stacking technology to produce a chip with the smallest event-based pixels on the market. Prophesee CEO Luca Verre explains what comes next for this neuromorphic technology.
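Before the Q&A, a note on what "event-based pixels" do: instead of reading out full frames, each pixel reports a change only when its (log) brightness moves by more than a contrast threshold. The Python sketch below is a simplified, illustrative model of that idea only; it is not Prophesee's or Sony's actual pixel circuit, and all names and thresholds in it are made up.

```python
import numpy as np

def dvs_events(frames, timestamps, threshold=0.2):
    """Simplified event-camera model: emit (t, x, y, polarity) whenever a pixel's
    log intensity changes by more than `threshold` since the last event it
    produced. Illustrative only; real sensors do this per pixel, asynchronously,
    in analog circuitry."""
    eps = 1e-6
    log_ref = np.log(frames[0] + eps)          # last signalled log intensity
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_now = np.log(frame + eps)
        diff = log_now - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, x, y, polarity))
            log_ref[y, x] = log_now[y, x]      # reset reference at this pixel
    return events

# Tiny usage example: a bright square moving one pixel to the right per frame.
frames = np.zeros((3, 8, 8)) + 0.1
frames[0, 2:4, 2:4] = 1.0
frames[1, 2:4, 3:5] = 1.0
frames[2, 2:4, 4:6] = 1.0
print(len(dvs_events(frames, timestamps=[0.0, 0.001, 0.002])), "events")
```

Only the pixels along the moving edges generate events; static background pixels stay silent, which is the source of the sparsity and low bandwidth discussed throughout the interview.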

Luca Verre: The scope is broader than just industrial. In the space of automotive, we are very active, in fact, in non-safety-related applications. In April we announced a partnership with Xperi, which developed an in-cabin driver monitoring solution for autonomous driving. [Car makers want in-cabin monitoring of the driver to ensure they are attending to driving even when a car is in autonomous mode.] Safety-related applications [such as sensors for autonomous driving] are not in the scope of the IMX636 because they would require safety compliance, which this design is not meant for. However, there are a number of OEMs and Tier 1 suppliers that are doing evaluations on it, fully aware that the sensor cannot, as-is, be put into mass production. They're testing it because they want to evaluate the technology's performance and then potentially consider pushing us and Sony to redesign it, to make it compliant with safety. Automotive safety remains an area of interest, but more longer term. In any case, if some of this evaluation work leads to a decision for product development, it would require quite a few years [before it appears in a car].

IEEE Spectrum: What's next for this sensor?

Luca Verre: For the next generation, we are working along three axes. One axis is around the reduction of the pixel pitch. Together with Sony, we made great progress by shrinking the pixel pitch from the 15 micrometers of Generation 3 down to 4.86 micrometers with Generation 4. But, of course, there is still considerable room for improvement by using a more advanced technology node or by using the now-maturing stacking technology of double and triple stacks. [The sensor is a photodiode chip stacked onto a CMOS chip.] You have the photodiode process, which is 90 nanometers, and then the intelligent part, the CMOS part, was developed on 40 nanometers, which is not necessarily a very aggressive node. By going to more aggressive nodes like 28 or 22 nm, the pixel pitch will shrink considerably.

The benefits are clear: it's a benefit in terms of cost; it's a benefit in terms of reducing the optical format of the camera module, which means a reduction of cost at the system level too; plus it allows integration in devices with tighter space constraints. And then, of course, the other related benefit is that with the equivalent silicon surface you can put more pixels in, so the resolution increases. Event-based technology is not necessarily following the same race that we are still seeing in conventional [color camera chips]; we are not shooting for tens of millions of pixels. It's not necessary for machine vision, unless you consider some very niche, exotic applications.
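As a rough illustration of why pitch matters for resolution: the number of pixels that fit on a given silicon area scales with the inverse square of the pitch. The sketch below uses only the 15 µm and 4.86 µm figures quoted above; the 20 mm² area is an arbitrary assumed value, not a spec of any Prophesee or Sony sensor.

```python
# Pixels that fit on the same silicon area at the two pitches Verre mentions.
# The 20 mm^2 photodiode area is an arbitrary illustrative figure, not a spec.
area_mm2 = 20.0
for pitch_um in (15.0, 4.86):
    pixel_area_mm2 = (pitch_um / 1000.0) ** 2
    print(f"{pitch_um} um pitch -> ~{area_mm2 / pixel_area_mm2 / 1e6:.2f} Mpixels")
# 15 um pitch   -> ~0.09 Mpixels
# 4.86 um pitch -> ~0.85 Mpixels, roughly a (15/4.86)^2 ~ 9.5x density gain
```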

The second axis is around the further integration of processing capability. There is an opportunity to embed more processing capabilities inside the sensor to make the sensor even smarter than it is today. Today it's a smart sensor in the sense that it's processing the changes [in a scene]. It's also formatting these changes to make them more compatible with the conventional [system-on-chip] platform. But you can even push this reasoning further and think of doing some of the local processing inside the sensor [that's now done in the SoC processor].

The third one is related to power consumption. The sensor, by design, is actually low power, but if we want to reach an extreme level of low power, there is still a way of optimizing it. If you look at the IMX636 Gen 4, power is not necessarily optimized. In fact, what is optimized more is the throughput: the capability to react to many changes in the scene and to correctly timestamp them with extremely high time precision. So in extreme situations where the scene changes a lot, the sensor has a power consumption equivalent to a conventional image sensor, although the time precision is much higher. You can argue that in those situations you are running at the equivalent of 1,000 frames per second or even beyond, so it's normal that you consume as much as a 10 or 100 frame-per-second sensor. [A lower-power] sensor could be very appealing, especially for consumer or wearable devices, where we know that functionalities related to eye tracking, attention monitoring, and eye lock are becoming very relevant.

IEEE Spectrum: Is getting to lower power just a question of using a more advanced semiconductor technology node?

Luca Verre: Certainly using a more aggressive technology will help, but I think only marginally. What will substantially help is to have some wake-up mode programmed into the sensor. For example, you can have an array where essentially only a few active pixels are always on; the rest are fully shut down. And then, when you have reached a certain critical mass of events, you wake up everything else.
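A toy sketch of the wake-up scheme Verre describes: a small set of always-on "sentinel" pixels counts events, and the full array powers up once activity crosses a critical mass within a time window. All class names, thresholds, and event rates below are illustrative assumptions, not details of any real sensor.

```python
import random

class WakeUpSensor:
    """Toy model of a sensor with a few always-on 'sentinel' pixels.
    The full array powers up only after the sentinels have seen enough events
    within a time window. All thresholds are illustrative placeholders."""
    def __init__(self, n_pixels=1_000_000, sentinel_fraction=0.001,
                 wake_threshold=50, window_s=0.1):
        self.n_sentinels = int(n_pixels * sentinel_fraction)  # always-on pixels
        self.wake_threshold = wake_threshold
        self.window_s = window_s
        self.recent_events = []          # timestamps seen by sentinel pixels
        self.full_array_on = False

    def sentinel_event(self, t):
        """Called whenever an always-on pixel fires an event at time t."""
        self.recent_events.append(t)
        # Keep only events inside the sliding window.
        self.recent_events = [e for e in self.recent_events
                              if t - e <= self.window_s]
        if not self.full_array_on and len(self.recent_events) >= self.wake_threshold:
            self.full_array_on = True    # power up the rest of the array
            print(f"t={t:.3f}s: wake-up, {len(self.recent_events)} events in window")

# Usage: a quiet scene keeps the array asleep; a burst of activity wakes it.
sensor = WakeUpSensor()
t = 0.0
for _ in range(2000):
    t += random.expovariate(100)         # sparse background events (~100 ev/s)
    sensor.sentinel_event(t)
for _ in range(100):
    t += random.expovariate(5000)        # burst of activity (~5000 ev/s)
    sensor.sentinel_event(t)
```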


Is computer vision about to reinvent itself, again?

Ryad Benosman, professor of ophthalmology at the University of Pittsburgh and an adjunct professor at the CMU Robotics Institute, believes that it is. As one of the founding fathers of event-based vision technologies, Benosman expects neuromorphic vision (computer vision based on event-based cameras) to be the next direction computer vision will take.

“Computer vision has been reinvented many, many times,” he said. “I’ve seen it reinvented twice at least, from scratch, from zero.”

According to Benosman, as long as the image-sensing paradigm remains useful enough, it holds back innovation in alternative technologies. The effect has been prolonged by the development of high-performance processors such as GPUs, which delay the need to look for alternative solutions.

“Why are we using images for computer vision? That’s the million-dollar question to start with,” he said. “We have no reasons to use images, it’s just because there’s the momentum from history. Before even having cameras, images had momentum.”

Neuromorphic technologies are those inspired by biological systems, including the ultimate computer: the brain and its compute elements, the neurons. The problem is that no one fully understands exactly how neurons work. While we know that neurons act on incoming electrical signals called spikes, until relatively recently researchers characterized neurons as rather sloppy, thinking that only the number of spikes mattered. This hypothesis persisted for decades. More recent work has shown that the timing of these spikes is absolutely critical, and that the brain's architecture creates delays in these spikes in order to encode information.
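To make the rate-versus-timing distinction concrete, here is a small, textbook-style sketch of the two coding schemes (an illustration of the general idea, not a claim about how any particular neuromorphic chip encodes data): in a rate code the spike count carries the information, while in a latency code the arrival time of a single spike carries it.

```python
import numpy as np

def rate_code(intensity, window_ms=100.0, max_rate_hz=200.0):
    """Rate coding: a stronger input produces more spikes in the window."""
    n_spikes = int(round(intensity * max_rate_hz * window_ms / 1000.0))
    return np.linspace(0.0, window_ms, num=n_spikes, endpoint=False)

def latency_code(intensity, window_ms=100.0):
    """Timing (latency) coding: a stronger input makes the single spike fire
    earlier. One spike per neuron, so the count carries nothing; the time does."""
    return np.array([window_ms * (1.0 - intensity)])

for intensity in (0.2, 0.9):
    print(f"intensity {intensity}:",
          f"rate code -> {len(rate_code(intensity))} spikes,",
          f"latency code -> first spike at {latency_code(intensity)[0]:.1f} ms")
```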

Benosman made it clear that today's event cameras are simply an improvement on the original research devices developed as far back as 2000. State-of-the-art DVS cameras from Sony, Samsung, and OmniVision have tiny pixels, incorporate advanced technology such as 3D stacking, and reduce noise. Benosman's worry is whether the types of sensors used today can successfully be scaled up.

“The problem is, once you increase the number of pixels, you get a deluge of data, because you’re still going super fast,” he said. “You can probably still process it in real time, but you’re getting too much relative change from too many pixels. That’s killing everybody right now, because they see the potential, but they don’t have the right processor to put behind it.”
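A back-of-the-envelope illustration of that deluge, using assumed numbers (the per-pixel peak event rate and the 4-byte event size are illustrative placeholders, not measurements of any sensor): raw event bandwidth grows roughly linearly with the number of changing pixels.

```python
# Illustrative only: assumed peak event rates and a 4-byte packed event.
bytes_per_event = 4
for megapixels, peak_events_per_pixel_per_s in ((0.3, 50), (1.0, 50), (8.0, 50)):
    events_per_s = megapixels * 1e6 * peak_events_per_pixel_per_s
    print(f"{megapixels:>4} Mpixel at {peak_events_per_pixel_per_s} ev/px/s peak "
          f"-> {events_per_s / 1e6:>6.0f} Meps, "
          f"{events_per_s * bytes_per_event / 1e6:>6.0f} MB/s raw")
```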

General-purpose neuromorphic processors are lagging behind their DVS camera counterparts. Efforts from some of the industry's biggest players (IBM TrueNorth, Intel Loihi) are still works in progress. Benosman said that the right processor with the right sensor would be an unbeatable combination.

“[Today’s DVS] sensors are extremely fast, super low bandwidth, and have a high dynamic range so you can see indoors and outdoors,” Benosman said. “It’s the future. Will it take off? Absolutely!”

“Whoever can put the processor out there and offer the full stack will win, because it’ll be unbeatable,” he added.
 

Deleted member 118

Guest
This is on their Twitter but I can't seem to obtain the link.

[attached image]
 


stuart888

Regular
Yeah to a Neuromorphic Event Vision video! Especially when it features a friend we know, the Prophesee CEO.



 

Neuromorphia

fact collector
Prophesee is based in Paris, with local offices in Grenoble, Shanghai, Tokyo and Silicon Valley. The company is driven by a team of 102 visionary engineers, holds more than 50 international patents and is backed by leading international investors including Sony, iBionext, 360 Capital Partners, Intel Capital, Robert Bosch Venture Capital, Supernova Invest, and European Investment Bank.

and

https://app.dealroom.co/companies/prophesee_1


RECENT NEWS ABOUT PROPHESEE

  1. French deep tech in the sights of the CIA's fund
    Oct 2021, via lesechos.fr
  2. CIA fund interested in French start-up Prophesee despite its Chinese backers
    Oct 2021, via intelligenceonline.com
  3. Prophesee attracts investment from Sinovation and Xiaomi
    Jul 2021, via imveurope.com
  4. Paris-based Prophesee raises €25 million to transform machine vision sensors for use in industry and VR
    Oct 2019, via EuStartups
  5. Parisian Prophesee snags $28 million Series C to bring its computer vision tech to market
    Oct 2019, via Tech.EU
 



stuart888

Regular
The more one learns, listens, and explores the two cornerstone founders of Prophesee, the wider the BrainChip smile. Both Luca Verre and Ryad Benosman are sharp, and they're kind of the marketing leads too.

Great overview of their firm, of when Sony came on board with Generation 4, and informative on event-based video and how it fits "hand in glove" with BrainChip's SNN framework.

 

stuart888

Regular
Ryad Benosman talks over the slides. A very intelligent speaker on where event-based vision is now and where it is heading. Clue: spikes!

This was from the middle of last year, with BrainChip in the slide deck. Here is his bio first:

[screenshots of Benosman's bio and presentation slides]

stuart888

Regular
From the business perspective, the fact that BrainChip and Prophesee formed a partnership is telling. BrainChip could have just sold its IP directly to Prophesee and any fat cats like Sony that wanted to back the deal. Instead, they partnered and joined up.

Sony must have approved the whole thing, with all of them involved in one way or another.

Piecing it together: Prophesee and Sony are in love with BrainChip! :love:

Event-based video cameras on the edge, spiking for low power! The partnership is really good news, maybe better than we think.
 

Braintonic

Regular
Keeping an eye on the growing problem of #spacejunk. Prophesee’s #eventbasedvision enables the #Astrosite mobile telescope @westernsydneyu, the world’s first neuromorphic system of its kind #inventorscommunity #ICNS
👉https://t.co/Z7k9CVd7hv https://t.co/geb3ZwXFJT
 



Krustor

Regular
Glad you watched it, as I never did.
I made a screenshot where he mentions that they are going into mass production with Sony:

Please see attached.

Edit: timestamp in the video to find it faster: 16:15

BR from Bavaria
 

Attachment: Screenshot_20221114-200257_Samsung Internet.jpg
