Miss you BURNING BIN BOY
Hi Dodgy, this is the Prophesee patent for combining the DVS with a frame camera.
US2022329771A1 METHOD OF PIXEL-BY-PIXEL REGISTRATION OF AN EVENT CAMERA TO A FRAME CAMERA 20210402
A method for registering pixels provided in a pixel event stream comprising:
acquiring image frames from a frame-based camera, each image frame being generated using an exposure period;
generating a first point matrix from one or more of the image frames, the first point matrix being associated with an acquisition period of the image frames;
acquiring a pixel event stream generated during the acquisition period;
generating a second point matrix from pixel events of the pixel event stream, occurring during the acquisition period of the first point matrix;
computing a correlation scoring function applied to at least a part of the points of the first and second point matrices, and
estimating respective positions of points of the second point matrix in the first point matrix, due to depths of the points of the first point matrix related to the second point matrix, by maximizing the correlation scoring function.
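For anyone who wants a feel for what that claim amounts to in practice, here is a minimal sketch of the correlation-maximisation step. It is only my reading of the claim, not Prophesee's code: it assumes the two point matrices are plain 2D arrays (say, an edge map from the frame camera and an accumulated event map), and it searches a single global offset instead of the per-point, depth-dependent positions the claim actually covers. All function names are made up for the example.

```python
import numpy as np

def correlation_score(frame_points: np.ndarray, event_points: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two point matrices."""
    a = frame_points - frame_points.mean()
    b = event_points - event_points.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def register_by_shift(frame_points: np.ndarray,
                      event_points: np.ndarray,
                      max_shift: int = 20) -> tuple[int, int]:
    """Brute-force search for the (dy, dx) offset of the event point matrix
    that maximizes the correlation score against the frame point matrix.
    The claim estimates a position per point (depth-dependent); a single
    global shift is used here purely to keep the sketch short."""
    best_score, best_offset = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(event_points, shift=(dy, dx), axis=(0, 1))
            score = correlation_score(frame_points, shifted)
            if score > best_score:
                best_score, best_offset = score, (dy, dx)
    return best_offset
```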
FIG. 4 depicts the exposure timings of a vertical rolling shutter sensor. Each row R0, R1, . . . , Rn of pixels of the rolling shutter sensor is exposed during an exposure period of the same duration Tr, shown in FIG. 4 by a rectangle RE. The start of the exposure period RE of each row R0-Rn is offset by the rolling shutter skew Rs compared to the previous row. In contrast, the exposure periods of the pixel rows in a global shutter sensor are all identical, as shown in FIG. 4 by a single central rectangle GE. The duration T of the exposure period of the global shutter sensor may be the same as or different from the duration Tr of the row exposure periods.
If the sensor has a vertical rolling shutter starting from top to bottom, and the y coordinate varies from 0 (top of the sensor) to height-1 (bottom or last row of the sensor), function Es(x,y) can be computed as Es(x,y) = ts + y·Rs. Function Es(x,y) has the same value for all pixels lying on the same row.
Function Ee(x,y) can be computed as Ee(x,y) = Es(x,y) + Tr, where Tr is the duration of the exposure period of one row of image I. Alternatively, function Ee can also be computed as Ee(x,y) = te + y·Rs, te being the time of the end of the exposure period at the first row of the frame I.
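In code, those two timing functions are just linear in the row index. A small sketch, assuming ts is the start of exposure of the first row (the counterpart of te, which the text defines as the end of exposure of the first row), Rs is the rolling shutter skew and Tr the row exposure duration:

```python
def exposure_start(x: int, y: int, ts: float, rs: float) -> float:
    """Es(x, y) = ts + y * Rs for a top-to-bottom vertical rolling shutter.
    The result depends only on the row index y, so every pixel on the same
    row shares the same start of exposure (x is kept to match the patent's
    Es(x, y) signature)."""
    return ts + y * rs

def exposure_end(x: int, y: int, ts: float, rs: float, tr: float) -> float:
    """Ee(x, y) = Es(x, y) + Tr; equivalently te + y * Rs, with te = ts + Tr."""
    return exposure_start(x, y, ts, rs) + tr
```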
It can be observed that the point matrix J is defined only for points corresponding to pixel events occurring during the exposure period T. The other points are undefined and may be set to any arbitrary value, for example 0 or the value of the nearest defined point.
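Building the point matrix J from the event stream could then look something like the sketch below. The (t, x, y, polarity) event format and the idea of accumulating signed polarities are my own assumptions for the illustration; the text only says that points without an event in the exposure window are undefined and may be set to an arbitrary value such as 0.

```python
import numpy as np

def build_event_point_matrix(events, height, width, ts, rs, tr):
    """Accumulate pixel events into a point matrix J, keeping only events whose
    timestamp falls inside the exposure window [Es(x, y), Ee(x, y)] of their row.
    Points with no qualifying event are 'undefined'; they are simply left at 0
    here, one of the arbitrary choices mentioned in the text.
    `events` is assumed to be an iterable of (t, x, y, polarity) tuples."""
    J = np.zeros((height, width), dtype=np.float32)
    defined = np.zeros((height, width), dtype=bool)
    for t, x, y, polarity in events:
        es = ts + y * rs        # Es(x, y): row-dependent start of exposure
        ee = es + tr            # Ee(x, y): end of exposure for that row
        if es <= t <= ee:
            J[y, x] += polarity  # accumulate signed event polarities
            defined[y, x] = True
    return J, defined
```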
It is fantastic to see our BrainChip employee giving BrainChip a moment in the spotlight.
Alf Kuchenbuch on LinkedIn: On board of the Australian satellite Optimus 1 is Akida, Brainchip's…
On board of the Australian satellite Optimus 1 is Akida, Brainchip's neuromorphic AI accelerator. 14 hours until the launch! www.linkedin.com
I would add... just for the record: the only stock at the moment that is actually really flying towards the moon! 🫡
"Additionally, pruning technology employed in the DRP-AI3 accelerator significantly improves AI computing efficiency"I don't know if this has been posted, but tghis could bve the Renensas offering....?
If they waste the technology on developing Snapchat to do better filters etc., it can only be good for us.
Yep, it was quiet, to me anyway.
Wasn't GML considered an early competitor at one stage, or have I got that wrong?
Grai Matter Labs quietly snapped up by Snap - Bits&Chips (bits-chips.nl)
Nieke Roos
22 February
Last October, neuromorphic computing company Grai Matter Labs (GML) was acquired by Snap, the American developer of the instant messaging app Snapchat. Headquartered in Paris with offices in Eindhoven and Silicon Valley (San Jose), GML is working on a full-stack AI system-on-chip platform. The takeover didn’t receive much publicity except for a couple of mentions in French media, tracing back to an original article by the economic investigation site L’Informé. It appears that Snap considers it a confidential matter. According to L’Informé, GML was driven into American arms because it couldn’t find funding in Europe.
We have a new employee at Brainchip
Kurt Manninen
Senior Solutions Architect at Brainchip, Inc
I am a Senior Solutions Architect at Brainchip, Inc. I work directly with Brainchip's current and future customers to help them utilize Brainchip's Akida Neuromorphic System-on-Chip to solve problems for computer vision (classification, object detection), audio processing (keyword spotting), sensor fusion, and anomaly detection.
"Additionally, pruning technology employed in the DRP-AI3 accelerator significantly improves AI computing efficiency"
This is referring to the N:M pruning, I think, and is probably the same one that was previously discussed, though not at length?
My interpretation of what @Diogenese said is that this is more likely (or possibly) their own version of the N of M coding used in AKIDA.
It's also possible, that they've just decided to change the terminology, as they have no need to mention us, since they've bought the licence, as others have said..
I'm not sure what you're saying IloveLamp..
BRN Discussion Ongoing
https://www.renesas.com/us/en/applications/industrial/industrial-automation/visual-detection-single-board-computer
thestockexchange.com.au
BRN Discussion Ongoing
Hi Diogenes. Please translate…do you agree or do you not agree that DRP AI3 indeed utilizes our IP? Can’t wait to translate your reply! 😂 Our N-of-M coding is based on the timing of activation spikes which arrive asynchronously. All spikes after the first N are discarded. Renesas prunes the...
thestockexchange.com.au
Post in thread 'BRN Discussion Ongoing' https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-413765
I wasn't saying anything, simply showing you what had been previously posted. I prefer not to draw conclusions for others (most of the time).
Not using a cooling fan doesn't prove it's us, as the N:M pruning increases speed and efficiency and doesn't need one because of that.
What we need to know...
Is N:M pruning a renaming of the N of M coding used by AKIDA, or is it their own "inspiration" after examining how our IP works?
I don't think @Diogenese has found any Renesas patents regarding N:M pruning, but they have 18 months or something before they'd need to produce them?
Hi DB,
So basically they've found a workaround to achieve a similar result as AKIDA's N of M coding..
Hi DB,
Our N:M coding is applied to the activation signal, a time-varying signal. The first N of M signals are processed and the remainder discarded. This is because the strongest signals from the optic nerves arrive before weaker signals (the stronger input signal causes the nerve to reach its firing threshold earlier). The strongest signals carry the most relevant information.
Renesas apply N:M coding to the static weights stored in memory, so there is no time element. Renesas base their selection on the magnitude of the stored signal. It's not about which arrives first. It's about which is strongest.
It's similar but different (and derivative).
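The distinction is easy to see in a toy example. Neither function below is either company's real implementation: the first just mimics "keep the first N spikes to arrive and discard the rest", the second mimics magnitude-based N:M structured pruning over groups of stored weights, of the kind Renesas describe for DRP-AI3. Names, shapes and group layout are all mine.

```python
import numpy as np

def first_n_of_m_spikes(spike_times: np.ndarray, n: int) -> np.ndarray:
    """Temporal selection in the spirit of Akida's N-of-M coding: keep the
    indices of the first n spikes to arrive (earlier = stronger input) and
    discard everything that comes after."""
    return np.argsort(spike_times)[:n]

def n_to_m_weight_pruning(weights: np.ndarray, n: int, m: int) -> np.ndarray:
    """Magnitude-based N:M structured pruning: in each consecutive group of m
    stored weights, keep the n largest by magnitude and zero the rest.
    Purely static - no time element involved."""
    assert weights.size % m == 0, "weight count must be a multiple of m"
    pruned = weights.copy().reshape(-1, m)
    for group in pruned:                           # each row is a view, edited in place
        group[np.argsort(np.abs(group))[:m - n]] = 0.0
    return pruned.reshape(weights.shape)
```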