BRN Discussion Ongoing

Lucid cars are amazing. Would be a tough decision between the Lucid and the Merc #firstworldproblems
One of each makes it easy, but which colour? Always a problem. No, one of each colour, problem solved. 😂🤣😂🤡
 
  • Haha
  • Like
Reactions: 11 users

equanimous

Norse clairvoyant shapeshifter goddess
On that note I thought it would be helpful to provide the following list of published Prophesee partners for @Diogenese:

1. Sony

2. Century Ark

3. IMAGO - SpaceX seems to be a customer

4. Datalogic

5. LUCID

6. CIS Corporation

7. FRAMOS Imaging

8. MV Technology

9. Renault

By extension these are all now part of the growing Brainchip Ecosystem.

My opinion only DYOR
FF

AKIDA BALLISTA
Better make that several coffees for Diogenese

 
  • Like
  • Haha
Reactions: 13 users

Evermont

Stealth Mode
How can removing sensors make the car safer?

Maybe it's because the sensor data overloads the processor and increases latency?

Somebody should invent a co-processor capable of handling the sensor data in real time and with minimal power consumption ...

PS: Just speaking with the ogre, and in his opinion Musk's allusion to AI does not mean Akida, because if it were Akida, he wouldn't need to abandon the ultrasound sensors.

Recent news suggests a continued emphasis on AI utilisation for autonomy. Someone should really sell Elon the benefits of on-chip learning.



AI Integration Software Engineer, Autonomy

Job Category: Autopilot & Robotics
Location: Palo Alto, California
Req. ID: 136288
Job Type: Full-time

What to Expect
As an AI Integration Software Engineer within the Autonomy group, you will have the opportunity to apply your technical skills to foundational code targeting automation, validation, and optimization of AI workloads for Autopilot and Humanoid robot. The nature of the role is multi-disciplinary, and it means that the code you will write, debug, and maintain will contribute to deploying Neural Networks trained by Machine Learning engineers. You will be developing system tools to benchmark, characterize and optimize the latency and throughput of the AI workloads on the FSD chip. You will write tests and integrate with our evaluation pipeline to continuously validate the AI deployment flow.

What You’ll Do


  • Write, debug and maintain robust software for Autopilot and Humanoid robot AI deployment stack; depending on needs and your interests/skills, you might work on code related to our Camera & Vision stack, write custom GPU kernels for AI models, or make our NN evaluation software more stable and performant.
  • Automate the flow of deploying Neural Networks and reduce the time it takes for AI models to go from Pytorch land to Tesla cars.
  • Optimize Tesla's in-house AI ASIC resources usage by profiling the Neural Network execution, consult with both AI scientists and hardware architects and introduce new features in the Neural Network deployment stack.
  • Advocate for best coding practices amongst the group, build tools helping engineers to write better code (for instance, performance/memory tracking)
What You’ll Bring

  • Experience programming C/C++ including modern C/C++ (C++14/17/20), and Python.
  • Experience or familiarity with Computer Vision, Machine Learning & related software concepts.
  • Experience with performant software design, compiler design and/or hardcore lower-level C code.
  • Experience with at least one of the following preferred: Cuda/OpenCL, SIMD, and multithreading.
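As an aside, "benchmark, characterize and optimize the latency and throughput of the AI workloads" boils down to a fairly standard measurement loop. Here is a minimal, generic Python sketch of such a harness (purely illustrative, and nothing to do with Tesla's actual internal tooling):

```python
import time
import statistics

def benchmark(fn, *args, warmup=10, iters=100):
    """Generic latency/throughput harness: warm up, then time many runs."""
    for _ in range(warmup):                      # warm caches/JITs/allocators
        fn(*args)
    latencies = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn(*args)
        latencies.append(time.perf_counter() - t0)
    latencies.sort()
    return {
        "p50_ms": statistics.median(latencies) * 1e3,
        "p99_ms": latencies[max(0, int(0.99 * iters) - 1)] * 1e3,
        "throughput_per_s": iters / sum(latencies),
    }

# e.g. benchmark(sum, range(100_000))
```

The real work in a role like this is in what gets plugged into `fn` (a compiled neural network running on the target accelerator) and in keeping the measurement honest across runs, but the skeleton is the same.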

 
  • Like
  • Fire
Reactions: 12 users

Diogenese

Top 20
One item Luca (Prophesee) referred to was the deblurring of mobile camera images by using a frame camera and a DVS camera.

This is their patent:
EP3929864A1 IMAGE ENHANCEMENT METHOD, APPARATUS AND SYSTEM

an image enhancement method, comprising:

  • obtaining a reference image frame of a scene, by a frame-based sensor, which contains artifacts and has an exposure duration; [#### frame based camera]
  • obtaining a stream of events, by an event-based sensor which is synchronized with the frame-based sensor, at least during the exposure duration of the reference image frame, wherein the events encode brightness changes of the scene corresponding to the reference image; and
  • deriving a corrected image frame without the artifacts from the reference image by means of the stream of events.

"artifacts" in this context means unintentionally generated image element (pixel activations) caused by movement of an object while the shutter is open.

The "stream of events" refers to the activation of individual pixels caused by motion of the object captured by the DVS during a single frame period of the standard camera. The DVS pixel data includes the time at which the pixel illumination changed. Thus the system can use the timing information contained in the DVS data to correct the blurring of the camera image which occurred due to movement of the object during the frame period.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 32 users

Diogenese

Top 20
On that note I thought it would be helpful to provide the following list of published Prophesee partners for @Diogenese: Sony, Century Ark, IMAGO, Datalogic, LUCID, CIS Corporation, FRAMOS Imaging, MV Technology, Renault ...
Well it's good to be able to put a name to the haystacks ...

The problem is that the haystacks are undercover for 18 months, so nothing filed since 4 April 2021 is available to the public.

Since we only recently teamed up with Prophesee and since it will take several months to implement in silicon, I'd say it's a bit too early for a Prophesee/Akida combo.
 
Last edited:
  • Like
  • Haha
  • Love
Reactions: 27 users

HopalongPetrovski

I'm Spartacus!
  • Haha
  • Like
  • Fire
Reactions: 5 users

Diogenese

Top 20
Recent news suggests a continued emphasis on AI utilisation for autonomy. Someone should really sell Elon the benefits of on-chip learning.


"custom GPU kernels for AI models" is so last milennium.
 
  • Haha
  • Like
  • Love
Reactions: 13 users
Well it's good to be able to put a name to the haystacks ...
Don’t worry, spikes are bigger than needles and, as Rob Telson said, if you are looking for a spike in a haystack just use AKIDA, because it will only see the spikes, not all the hay. Here is a photo to assist:

[photo attachment]

😂🤣😂🤡
Regards
FF

AKIDA BALLISTA
 
  • Haha
  • Like
  • Wow
Reactions: 22 users

Diogenese

Top 20
One item Luca (Prophesee) referred to was the deblurring of mobile camera images by using a frame camera and a DVS camera. ...
You may (or may not) recall the mechanical photo-finish camera used at Randwick, where the film was rolled past a slit aligned with the finish post, at a speed equal to the horse's speed scaled by the ratio of the film-to-slit and horse-to-slit distances (say 1 cm to 30 m at 40 mph, so 40/3000 mph). The horses' legs came out bent because the legs moved faster than the bodies of the horses.
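A strip camera like that is easy to simulate, which makes the distortion obvious: only the single column of pixels under the slit is ever recorded, and successive readings are laid side by side, so the horizontal axis of the finished print is time rather than space. A toy sketch (hypothetical frame data assumed):

```python
import numpy as np

def photo_finish(frames, slit_col):
    """Simulate a slit/strip camera: keep one pixel column per time step and
    stack the columns side by side. Anything crossing the slit faster or
    slower than the nominal film speed (e.g. a galloping horse's legs) comes
    out warped, exactly as described above."""
    return np.stack([frame[:, slit_col] for frame in frames], axis=1)

# e.g. 200 frames of a 480x640 scene, sampling the column at the finish post:
# strip = photo_finish(frames, slit_col=320)   # -> a 480x200 "photo finish"
```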
 
  • Like
  • Haha
  • Love
Reactions: 12 users

Diogenese

Top 20
  • Like
  • Haha
Reactions: 4 users

HopalongPetrovski

I'm Spartacus!
You may (or may not) recall the mechanical photo-finish camera used at Randwick ... The horses' legs came out bent because the legs moved faster than the bodies of the horses.
It's a common problem. The classic example is trying to photograph a golf swing. Even very expensive clubs tend to get bent. 🤣
And not in the good way!
They might be able to solve the banding issue in video as well, by combining the relevant data from an event camera with a regular one.
In fact, this may just turn out to be the answer for many common photographic issues.
I wonder if this is something Sony is looking at, as they would love to have this technical edge on Canon.
 
  • Like
  • Love
Reactions: 9 users

HopalongPetrovski

I'm Spartacus!
  • Haha
  • Like
Reactions: 9 users

JK200SX

Regular
One item Luca (Prophesee) referred to was the deblurring of mobile camera images by using a frame camera and a DVS camera. ...

Straight out of the podcast!

 
  • Like
  • Fire
  • Love
Reactions: 33 users

JoMo68

Regular
I am sure you remember, a while back, the very negative response a poster received from Ryad Benosman, the cofounder of Prophesee, regarding AKIDA. Clearly that response was born out of a desire to suppress speculation about a relationship between Brainchip and Prophesee.

On looking up Mr. Benosman just now I found this further interesting connection: Carnegie Mellon University.

My opinion only DYOR
FF


AKIDA BALLISTA
I thought exactly the same thing myself, FF. Wondering if our MB friend’s “not that one” response when asked about Akida IP was in the same vein as Benosman’s negative comments - smoke and mirrors…
 
  • Like
  • Thinking
Reactions: 11 users

Violin1

Regular
Well it's good to be able to put a name to the haystacks ... Since we only recently teamed up with Prophesee and since it will take several months to implement in silicon, I'd say it's a bit too early for a Prophesee/Akida combo.
Didn't Luca say they'd been collaborating for a couple of years? Might be wrong but thought there was a suggestion in there somewhere.


CORRECTION - I WENT BACK AND RELISTENED TO THE SECTION AND IT ISN'T THERE. CLEARLY MY WISHFUL THINKING!
ps - love the edit button!!
 
Last edited:
  • Like
  • Thinking
Reactions: 10 users

Cgc516

Regular
Why does BRN still give them the chance to do so?
Why does the ASX allow them to keep doing it?


 

  • Like
  • Sad
  • Thinking
Reactions: 15 users

Zedjack33

Regular
  • Like
  • Sad
Reactions: 5 users

Diogenese

Top 20
Didn't Luca say they'd been collaborating for a couple of years? Might be wrong but thought there was a suggestion in there somewhere.
Hope you're right!
 
  • Like
  • Sad
Reactions: 3 users

Violin1

Regular
  • Like
  • Love
Reactions: 4 users

Xray1

Regular
IMO ... a good Co announcement came out this afternoon!
Seems like a couple of the employees at BRN know that they're onto something good and big ... thus enticing them to take up a significant amount of options.
 
  • Like
  • Love
  • Fire
Reactions: 19 users