BRN Discussion Ongoing

Sirod69

bavarian girl ;-)

Easing automotive software migration: From discrete ECUs to Zonal Controllers in emerging EE architectures​


....

One way to achieve FFI (freedom from interference) is by sandboxing each software component into virtual machines isolated by a separation kernel. Armv8-R supports this by means of real-time virtualization. By using a hypervisor, or a simpler separation kernel, on Armv8-R-based processors such as Cortex-R52 and Cortex-R52+, it is possible to achieve FFI among multiple software workloads.

Therefore, Cortex-R52 and Cortex-R52+ processors offer an ideal platform for building zonal controllers that can host multiple software workloads currently running on discrete ECUs, many of which are based on Arm Cortex-M processors. For more information on virtualization supported by Armv8-R, please refer to the Best Practices for Armv8-R Cortex-R52+ Software Consolidation.
1709537042231.png

 
  • Like
  • Thinking
  • Fire
Reactions: 10 users

Rach2512

Regular
Wowzers!
 

Attachments

  • Screenshot_20240304-152351_Samsung Internet.jpg
    Screenshot_20240304-152351_Samsung Internet.jpg
    338.8 KB · Views: 262
  • Like
  • Fire
  • Wow
Reactions: 22 users

Teach22

Regular
This is the Prophesee patent for combining the DVS with frame camera.

US2022329771A1 METHOD OF PIXEL-BY-PIXEL REGISTRATION OF AN EVENT CAMERA TO A FRAME CAMERA 20210402

View attachment 58399

A method for registering pixels provided in a pixel event stream comprising:
acquiring image frames from a frame-based camera, each image frame being generated using an exposure period;
generating a first point matrix from one or more of the image frames, the first point matrix being associated with an acquisition period of the image frames;
acquiring a pixel event stream generated during the acquisition period;
generating a second point matrix from pixel events of the pixel event stream, occurring during the acquisition period of the first point matrix;
computing a correlation scoring function applied to at least a part of the points of the first and second point matrices, and
estimating respective positions of points of the second point matrix in the first point matrix, due to depths of the points of the first point matrix related to the second point matrix, by maximizing the correlation scoring function.
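For anyone trying to picture what the claim describes, here is a minimal toy sketch of that kind of registration loop: build a point matrix from the frame camera, build one from the events collected during the acquisition period, then search over candidate offsets and keep the one that maximizes a correlation score. The function names, the normalized-correlation score and the grid search over a fixed horizontal disparity range are all my own illustrative assumptions, not Prophesee's implementation.

Code:
# Toy illustration only -- not Prophesee's actual method.
import numpy as np

def correlation_score(frame_points, event_points):
    """Normalized correlation between two point matrices of equal shape."""
    a = frame_points - frame_points.mean()
    b = event_points - event_points.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
    return float((a * b).sum() / denom)

def register_events_to_frame(frame_points, event_points, max_disparity=8):
    """Grid-search a horizontal offset (a stand-in for the depth-dependent shift
    in the claim) that best aligns the event point matrix with the frame one."""
    best_offset, best_score = 0, -np.inf
    for d in range(-max_disparity, max_disparity + 1):
        shifted = np.roll(event_points, d, axis=1)  # hypothesised offset d
        score = correlation_score(frame_points, shifted)
        if score > best_score:
            best_offset, best_score = d, score
    return best_offset

# Toy usage: an "event" matrix that is the "frame" matrix shifted by 3 pixels.
frame = np.zeros((64, 64))
frame[20:40, 10:30] = 1.0
events = np.roll(frame, 3, axis=1)
print(register_events_to_frame(frame, events))  # -> -3 (the offset that re-aligns them)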


FIG. 4 depicts the exposure timings of a vertical rolling shutter sensor. Each row R0, R1, ..., Rn of pixels of the rolling shutter sensor is exposed during an exposure period of the same duration Tr, shown in FIG. 4 by a rectangle RE. The start of the exposure period RE of each row R0-Rn is offset by the rolling shutter skew Rs compared to the previous row. In contrast, the exposure periods of the pixel rows in a global shutter sensor are all identical, as shown in FIG. 4 by a single central rectangle GE. The duration T of the exposure period of the global shutter sensor may be the same as or different from the duration Tr of the row exposure periods.

If the sensor has a vertical rolling shutter starting from top to bottom, and the y coordinate varies from 0 (top of the sensor) to height-1 (bottom or last row of the sensor), function Es(x,y) can be computed as Es(x,y)=ts+y·Rs, ts being the time of the start of the exposure period at the first row of the frame I. Function Es(x,y) has the same value for all pixels lying on the same row.

Function Ee(x,y) can be computed as Ee(x,y)=Es(x,y)+Tr, where Tr is the duration of the exposure period of one row of image I. Alternatively, function Ee can also be computed as Ee(x,y)=te+y·Rs, te being the time of the end of the exposure period at the first row of the frame I.
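As a quick illustration of those two functions (my own sketch; ts, te, Rs and Tr are the quantities named in the excerpt, and the numeric values below are made up):

Code:
# Sketch of the exposure-timing functions described above.
def exposure_start(y, ts, Rs):
    """Es(x, y) = ts + y * Rs: start of exposure for every pixel on row y of a
    top-to-bottom vertical rolling shutter (same value across a whole row)."""
    return ts + y * Rs

def exposure_end(y, ts, Rs, Tr):
    """Ee(x, y) = Es(x, y) + Tr, equivalently te + y * Rs with te = ts + Tr."""
    return exposure_start(y, ts, Rs) + Tr

# Made-up numbers: first row starts at t = 0 s, 10 us skew per row, 5 ms row exposure.
ts, Rs, Tr = 0.0, 10e-6, 5e-3
print(exposure_start(100, ts, Rs))    # 0.001 s
print(exposure_end(100, ts, Rs, Tr))  # 0.006 s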

It can be observed that the point matrix J is defined only for points corresponding to pixel events occurring during the exposure period T. The other points are undefined and may be set to any arbitrary value, for example 0 or the value of the nearest defined point.
Hi Dodgy,

There were previously one or two posters who were convinced (and maybe still are) that Brainchip were somehow involved with the Qualcomm/Snapdragon thing.

There’s a lot of talk around the Prophesee/Qualcomm collaboration at present, and our association with Prophesee has a lot of holders thinking it is related to BRN.
Your above post, whilst looking very impressive, doesn’t help someone like me to understand whether or not Akida is being used by Prophesee.
Could I ask your opinion on this?

Thanks.
 
  • Like
  • Love
Reactions: 13 users

Terroni2105

Founding Member
View attachment 58416

It is fantastic to see our BrainChip employee giving BrainChip a moment in the spotlight.
I think this is exactly what the Brainchip official account should be doing, putting it out there, posting with pride. Where are they?
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 42 users

7für7

Regular
  • Like
  • Fire
Reactions: 17 users

Sirod69

bavarian girl ;-)
We have a new employee at Brainchip🥰😘

1709540182501.png


Kurt Manninen​


Senior Solutions Architect at Brainchip, Inc

Info

I am a Senior Solutions Architect at Brainchip, Inc. I work directly with Brainchip's current and future customers to help them utilize Brainchip's Akida Neuromorphic System-on-Chip to solve problems for computer vision (classification, object detection), audio processing (keyword spotting), sensor fusion, and anomaly detection.
 
  • Like
  • Love
  • Fire
Reactions: 43 users
Yep, it was quiet, to me anyway.

Wasn't GML considered an early competitor at one stage, or have I got that wrong?


Grai Matter Labs quietly snapped up by Snap​

Nieke Roos
22 February

Last October, neuromorphic computing company Grai Matter Labs (GML) was acquired by Snap, the American developer of the instant messaging app Snapchat. Headquartered in Paris with offices in Eindhoven and Silicon Valley (San Jose), GML is working on a full-stack AI system-on-chip platform. The takeover didn’t receive much publicity except for a couple of mentions in French media, tracing back to an original article by the economic investigation site L’Informé. It appears that Snap considers it a confidential matter. According to L’Informé, GML was driven into American arms because it couldn’t find funding in Europe.
 
  • Like
  • Thinking
  • Fire
Reactions: 18 users
I don't know if this has been posted, but this could be the Renesas offering...?

"Additionally, pruning technology employed in the DRP-AI3 accelerator significantly improves AI computing efficiency"

This is referring to the N:M pruning, I think, and is probably the same one that was previously discussed, though not at length?

My interpretation of what @Diogenese said is that this is more likely (or possibly) their own version of the N of M coding used in AKIDA.

It's also possible that they've just decided to change the terminology, as they have no need to mention us since they've bought the licence, as others have said.
 
  • Like
  • Fire
Reactions: 8 users
Yep, it was quiet, to me anyway.

Wasn't GML considered an early competitor at one stage, or have I got that wrong?


Grai Matter Labs quietly snapped up by Snap​

Nieke Roos
22 February

Last October, neuromorphic computing company Grai Matter Labs (GML) was acquired by Snap, the American developer of the instant messaging app Snapchat. Headquartered in Paris with offices in Eindhoven and Silicon Valley (San Jose), GML is working on a full-stack AI system-on-chip platform. The takeover didn’t receive much publicity except for a couple of mentions in French media, tracing back to an original article by the economic investigation site L’Informé. It appears that Snap considers it a confidential matter. According to L’Informé, GML was driven into American arms because it couldn’t find funding in Europe.
If they waste the technology on developing Snapchat to do better filters etc 🙄 it can only be good for us.
 
  • Haha
  • Like
Reactions: 5 users

IloveLamp

Top 20
We have a new employee at Brainchip🥰😘

View attachment 58420

Kurt Manninen

Senior Solutions Architect at Brainchip, Inc

Info

I am a Senior Solutions Architect at Brainchip, Inc. I work directly with Brainchip's current and future customers to help them utilize Brainchip's Akida Neuromorphic System-on-Chip to solve problems for computer vision (classification, object detection), audio processing (keyword spotting), sensor fusion, and anomaly detection.



Beat ya 😜

1000013868.gif
 
Last edited:
  • Haha
Reactions: 15 users

IloveLamp

Top 20
"Additionally, pruning technology employed in the DRP-AI3 accelerator significantly improves AI computing efficiency"

This is referring to the N:M pruning, I think, and is probably the same one that was previously discussed, though not at length?

My interpretation of what @Diogenese said is that this is more likely (or possibly) their own version of the N of M coding used in AKIDA.

It's also possible that they've just decided to change the terminology, as they have no need to mention us since they've bought the licence, as others have said.


Post in thread 'BRN Discussion Ongoing' https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-413765
 
  • Thinking
  • Fire
  • Like
Reactions: 4 users

IloveLamp

Top 20
  • Like
  • Thinking
  • Fire
Reactions: 13 users


Post in thread 'BRN Discussion Ongoing' https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-413765
I'm not sure what you're saying, IloveLamp...

Not using a cooling fan doesn't prove it's us, as the N:M pruning increases speed and efficiency and doesn't need one because of that.

What we need to know...

Is N:M pruning a renaming of the N of M coding used by AKIDA, or is it their own "inspiration" after examining how our IP works?

I don't think @Diogenese has found any Renesas patents regarding N:M pruning, but they have 18 months or something before they'd need to produce them?
 
  • Like
  • Fire
Reactions: 7 users

IloveLamp

Top 20
I'm not sure what you're saying, IloveLamp...

Not using a cooling fan doesn't prove it's us, as the N:M pruning increases speed and efficiency and doesn't need one because of that.

What we need to know...

Is N:M pruning a renaming of the N of M coding used by AKIDA, or is it their own "inspiration" after examining how our IP works?

I don't think @Diogenese has found any Renesas patents regarding N:M pruning, but they have 18 months or something before they'd need to produce them?
I wasn't saying anything, simply showing you what had been previously posted. I prefer not to draw conclusions for others (most of the time 😏)
 
  • Like
  • Haha
  • Fire
Reactions: 10 users

Tothemoon24

Top 20
IMG_8541.jpeg
 

Attachments

  • IMG_8540.jpeg
    IMG_8540.jpeg
    409.2 KB · Views: 155
  • Like
  • Fire
  • Love
Reactions: 51 users

wilzy123

Founding Member
  • Haha
  • Fire
Reactions: 13 users

Diogenese

Top 20
"Additionally, pruning technology employed in the DRP-AI3 accelerator significantly improves AI computing efficiency"

This is referring to the N:M pruning, I think, and is probably the same one that was previously discussed, though not at length?

My interpretation of what @Diogenese said is that this is more likely (or possibly) their own version of the N of M coding used in AKIDA.

It's also possible that they've just decided to change the terminology, as they have no need to mention us since they've bought the licence, as others have said.
Hi DB,

Our N of M coding is applied to the activation signal, a time-varying signal. The first N of M signals are processed and the remainder discarded. This is because the strongest signals from the optic nerves arrive before weaker signals (the stronger input signal causes the nerve to reach its firing threshold earlier). The strongest signals carry the most relevant information.

Renesas apply their N:M pruning to the static weights stored in memory, so there is no time element. Renesas base their selection on the magnitude of the stored weight. It's not about which arrives first; it's about which is strongest.

It's similar but different (and derivative).
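A rough toy illustration of that contrast (my own sketch, not BrainChip's or Renesas's actual code): the first function ranks a group of M inputs by arrival time and keeps the first N, the second ranks a group of M stored weights by magnitude and keeps the largest N.

Code:
# Toy illustration of the two selection rules -- not actual BrainChip or Renesas code.
import numpy as np

def n_of_m_by_arrival(arrival_times, values, n):
    """Keep only the n earliest-arriving events in a group of m; zero out the rest.
    (Temporal selection: the strongest inputs fire first, so 'earliest' ~ 'most relevant'.)"""
    keep = np.argsort(arrival_times)[:n]
    out = np.zeros_like(values)
    out[keep] = values[keep]
    return out

def n_of_m_by_magnitude(weights, n):
    """Keep only the n largest-magnitude weights in a group of m; zero out the rest.
    (Static selection: no time element, just 'which is strongest'.)"""
    keep = np.argsort(-np.abs(weights))[:n]
    out = np.zeros_like(weights)
    out[keep] = weights[keep]
    return out

# One group of M = 8 inputs, keeping N = 2.
t = np.array([3.0, 0.5, 2.0, 5.0, 1.0, 4.0, 6.0, 7.0])   # arrival times
v = np.array([0.2, 0.9, 0.1, 0.8, 0.7, 0.3, 0.4, 0.6])   # event values / weights
print(n_of_m_by_arrival(t, v, 2))   # keeps the two earliest events (indices 1 and 4)
print(n_of_m_by_magnitude(v, 2))    # keeps the two largest weights (indices 1 and 3)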
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 24 users