BRN Discussion Ongoing

Was having a check-in on MB jobs.

This one was advertised at the beginning of Nov for MB India.

Doesn't mention Akida, but it would probs be difficult to request that experience, as I don't expect there are many out there yet with full hands-on experience.

It's based around NVIDIA from what I read.

But what caught my eye was the job title.

Now, I'm wondering if that is just an internal naming convention or if it relates back to Momenta AI out of China.

Musing on overlaps and different scenarios of where we still fit, without any formal updates, and on the US/China tech issues.

Article below on some of Momenta's strategic investors, including MB and Bosch.




November 2022
Mercedes-Benz Research and Development India Private Limited

MBOS - ADAS – Software Development

Tasks

Our team:
We build best-in-class software for autonomous driving functions for Mercedes-Benz passenger cars. We are a team of talented engineers working in a global network to develop the next generation of Driver Assistance and Automated Driving features used in all Mercedes-Benz cars sold globally.
About the role:
Be part of an agile software development team developing AI- & ML-based software for ADAS and Automated Driving features
Job Title: MB - Momenta ADAS & Automated Driving – Software Development
Job Category: Internal
Department/Group: ICA
Job Code:
Years of Experience: 1-4 Years (T9)
Opportunity No:

Responsibilities
- Contribute to the R&D of cutting-edge ADAS/AD functions
- AI- and ML-based function development of ADAS/AD features (e.g. highway pilot) with data-driven and rule-based approaches
- Implement ADAS/AD functions from the system/software specification
- Practical usage of mathematical, physical and logical knowledge
- Develop functions from concept to production-ready maturity level
- Implement algorithms on embedded hardware
- Test and validate algorithms in simulation and real-car driving
- Analyze and solve problem reports coming from field tests
- Perform effective root cause analysis for problems reported during vehicle testing
- Create tests for the feature at various levels, e.g. unit tests, SIL, HIL (a toy unit-test example follows this list)
- Ensure release readiness by creating the required documentation for the customer
- Provide fast and effective software implementation for proof of concepts, when required
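
To make the unit-test bullet above concrete, here is a minimal sketch of what a unit test for an ADAS helper might look like. It is an illustration only, not MB code: the time_to_collision function is invented for the example.

```python
# Hypothetical unit-test example for the "unit tests" level above.
# time_to_collision is an invented ADAS helper, not MB code: it returns
# the seconds until impact for a closing target, or None if not closing.
import math
import unittest

def time_to_collision(distance_m: float, closing_speed_mps: float):
    """Return seconds to impact, or None when the gap is not closing."""
    if closing_speed_mps <= 0:
        return None
    return distance_m / closing_speed_mps

class TimeToCollisionTest(unittest.TestCase):
    def test_closing_target(self):
        self.assertTrue(math.isclose(time_to_collision(50.0, 10.0), 5.0))

    def test_opening_target_has_no_ttc(self):
        self.assertIsNone(time_to_collision(50.0, -3.0))

    def test_standstill_target_has_no_ttc(self):
        self.assertIsNone(time_to_collision(50.0, 0.0))

if __name__ == "__main__":
    unittest.main()
```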


Required Skills: Technical
- 1-4 years of experience in software development or equivalent in the automotive industry, preferably in the AD/ADAS domain
- Strong C/C++ skills to design, build and maintain efficient and reliable code
- Strong data-driven algorithm development skills using machine learning, deep learning and AI
- Familiarity with hardware acceleration platforms (NVIDIA TensorRT, Intel OpenVINO)
- Familiarity with some common ML frameworks (TensorFlow, ONNX, PyTorch, etc.)
- Understanding of the ML workflow: preparing the data, implementing and training ML models, evaluating results, deploying inference on different platforms (see the sketch after this list)
- Experience building performant ML pipelines / inference servers
- Knowledge of different ML models and how to train/benchmark their performance
- Familiar with Python programming
- Familiar with the Linux environment
- Professional experience with Classic and/or Adaptive AUTOSAR is a strong plus
- Strong skills in automata/state machines
- Knowledge of the different SAE Automated Driving levels in the ADAS field and of ISO 26262 will be an advantage
- Exposure to configuration management tools like Git, requirement management tools like DOORS, and design tools like Rhapsody and Enterprise Architect
- Good understanding of the SW build environment and build process – compilation, linking, preprocessing, etc.
- Hands-on experience working with static quality-check tools (PRQA, Astrée, Coverity)
- Capability of testing on target (both HiL and test vehicles)
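
As flagged in the ML-workflow item above, here is a minimal sketch of the prepare / train / evaluate / deploy loop it describes. This is an illustration only, not code from the posting: the model, the synthetic data and the file name model.onnx are all invented, and the exported ONNX file stands in for the cross-platform deployment step (ONNX is an interchange format that both TensorRT and OpenVINO can consume).

```python
# Minimal, invented sketch of the ML workflow bullet: prepare data,
# implement and train a model, evaluate it, then export for deployment.
import torch
import torch.nn as nn

# 1. Prepare (synthetic) data: 256 samples, 8 features, binary labels.
x = torch.randn(256, 8)
y = (x.sum(dim=1, keepdim=True) > 0).float()

# 2. Implement and train a small model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# 3. Evaluate the result.
with torch.no_grad():
    accuracy = ((model(x) > 0).float() == y).float().mean().item()
print(f"train accuracy: {accuracy:.2f}")

# 4. Export to ONNX so inference can be deployed on different platforms
# (e.g. built into a TensorRT engine or loaded by OpenVINO).
torch.onnx.export(model, torch.randn(1, 8), "model.onnx",
                  input_names=["features"], output_names=["logit"])
```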

Required Skills: Non-Technical
- Ability to work in a team environment with open-mindedness
- Curious, team-oriented, self-motivated
- Excellent oral and written communication skills
- Strong problem-solving, logical thinking and analytical skills
- Ability to communicate and discuss ideas effectively, verbally and through presentations
- Deep interest in technology
- Excellent organizational, time management, prioritization and multi-tasking skills




Autonomous Driving Startup Momenta Raises Another $500M

In the wake of a $300 million investment from General Motors in September, Momenta, an autonomous driving solution provider from China, announced today an additional $500 million added to its Series C round.

Source: TechCrunch | Published on November 5, 2021


The new injection brings the total of the startup’s Series C to over $1 billion. Momenta adopts what it calls a two-legged strategy of supplying advanced driver assistance systems (ADAS) to auto OEMs like GM and Tier 1 suppliers like Bosch, while conducting R&D on truly unmanned vehicles, that is, Level 4 driving.

The startup has assembled a list of heavyweight strategic investors, including China’s state-owned SAIC Motor, GM, Toyota, Mercedes Benz, and Bosch. Singapore’s sovereign fund Temasek and Jack Ma’s Yunfeng Capital are among its institutional investors.

Momenta often speaks of how its alliance with car manufacturers differentiates it from its peers, which have chosen a more cash-intensive route of developing in-house robotaxi fleets. Instead, it counts on gleaning data from a network of mass-produced vehicles powered by its solutions. Pony.ai and WeRide are among its closest rivals and have also raised significant amounts of capital.

In the case of the GM deal, Momenta’s solution, which uses a mix of consumer-grade millimeter-wave radars and high-definition cameras, will be used in GM’s vehicles sold in China rather than the United States. The startup recently opened its first overseas office in Stuttgart to be closer to its German partners, which may imply the footprint of its technology could extend beyond its home market.
 
Anybody looked into this yet?

 
Yes, a few weeks ago. It will accelerate their ability to undertake research, but it does not have any implications for AKIDA/BrainChip apart from this.

My opinion only DYOR
FF

AKIDA BALLISTA
 
Cheers for that. I'll still have a further dig anyway just in case 🤓
 

TheFunkMachine

seeds have the potential to become trees.
Don't know if this was posted previously when Anil's one was?

It's from the recent TinyML forum at the end of Sept.

Watch from around the 9:15 mark, where at the end of his piece the moderator asks Christoph about the roles neuromorphic hardware/chips etc. could play with event-based vision.

He starts his answer/comment with..... but then gets a bit cagey on architecture... wonder if that's just about their stuff or us as well :unsure:


This is the vid presso:

Neuromorphic Event-based Vision
Christoph POSCH, CTO, PROPHESEE

Abstract (English)
Neuromorphic Event-based (EB) vision is an emerging paradigm of acquisition and processing of visual information that takes inspiration from the functioning of the human vision system, trying to recreate its visual information acquisition and processing operations on VLSI silicon chips. In contrast to conventional image sensors, EB sensors do not use one common sampling rate (=frame rate) for all pixels, but each pixel defines the timing of its own sampling points in response to its visual input by reacting to changes of the amount of incident light. The highly efficient way of acquiring sparse data, the high temporal resolution and the robustness to uncontrolled lighting conditions are characteristics of the event sensing process that make EB vision attractive for numerous applications in industrial, surveillance, IoT, AR/VR, automotive. This short presentation will give an introduction to EB sensing technology and highlight a few exemplary use cases.
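
For anyone new to the paradigm the abstract describes, here is a toy sketch of per-pixel event generation — my own illustration, not Prophesee code. Each pixel keeps its own reference level and emits an ON/OFF event only when its log intensity moves by more than a contrast threshold, so a static scene produces no data at all; the threshold value and the frames_to_events helper are invented for the example.

```python
# A minimal sketch (my own illustration, not Prophesee code) of the
# per-pixel sampling idea in the abstract: each pixel emits an "event"
# whenever its log intensity changes by more than a contrast threshold,
# instead of being sampled at a global frame rate.
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2, eps=1e-6):
    """Convert a (T, H, W) intensity sequence into (t, y, x, polarity) events."""
    ref = np.log(frames[0] + eps)          # per-pixel reference level
    events = []
    for t, frame in zip(timestamps[1:], frames[1:]):
        log_i = np.log(frame + eps)
        diff = log_i - ref
        on = diff > threshold               # brightness increased
        off = diff < -threshold             # brightness decreased
        for polarity, mask in ((+1, on), (-1, off)):
            ys, xs = np.nonzero(mask)
            events.extend((t, y, x, polarity) for y, x in zip(ys, xs))
        # pixels that fired update their reference; silent pixels keep theirs
        ref = np.where(on | off, log_i, ref)
    return events

# Static scenes produce no events; only the changing pixel does.
frames = np.ones((3, 4, 4)); frames[1, 2, 2] = 2.0; frames[2, 2, 2] = 1.0
print(frames_to_events(frames, timestamps=[0.0, 1.0, 2.0]))
```

On the tiny example, only the one changing pixel fires: an ON event when it brightens and an OFF event when it dims again, which is the sparse acquisition the abstract highlights.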





At 9:15 she asks him about the use of neuromorphic chips in regard to event-based vision systems. He states that he thinks SNNs are a good fit, but then after that he said something strange, considering our previous conversation with Prophesee on our podcast, where they basically said that BrainChip is the key to their success (paraphrased).

He said: “but I’m not sure if the optimal processing architecture has been identified yet”. A very strange thing to say when Luca Verre spoke so highly of BrainChip and Akida and the recent partnership etc. Am I reading too much into this?
 
Agreed.

I didn't want to bias the post, preferring to let others judge for themselves.

I worked on the premise that maybe he said that so as not to give too much away, or to avoid providing a form of confirmation of where they're at with Akida, even though Luca spoke quite glowingly.

That, and they're probs still running tests against the options they have and don't have complete results yet.
 

Slade

Top 20
Stay patient Chippers. 2023 is going to be a fun year.
 

alwaysgreen

Top 20
This is an example of what the next generation of mass-production vehicles will be coming out with.

Akida can do facial recognition, voice commands and gestures all at the same time using very low power.


Akida can do this easy.
I'm not trying to be a dick, but if you post an article/video etc., it would be good to give it some context. So many posters do it and it's annoying.
 
I put my hand up to that. Will make a conscious effort from now on.
 

SERA2g

Founding Member
I agree with you on this one. It is annoying to see videos or articles posted by someone where the poster provides literally no context or opinion. I never read or watch the links when that's the case.

If it wasn't worth the poster's time to provide some information, then I assume it's not worth my time to watch/read.

In this case though, @Getupthere posted the video and then posted a separate comment providing the context, so go easy on the fella!
 

alwaysgreen

Top 20
Only because I asked if it was Akida :) All good though, it's no big deal, but it's just as you say: if they aren't going to read it before posting and add some context, it's likely a useless article that isn't worth reading.
 

Learning

Learning to the Top 🕵‍♂️

Akida in action with Renesas

Thanks for sharing Bennysmadness,

The video says Renesas is using their DRP-AI accelerators.

However, here is my non-engineer-background thinking out loud.
As we know from the BrainChip & Arm podcast, Arm is using the Akida neuromorphic architecture for a very specific task. And we know Renesas only licensed two nodes of Akida. So could it be that Renesas has integrated Akida into their DRP-AI accelerators, and since it's not the full Akida architecture, Renesas can therefore advertise it as their in-house AI accelerator?

Learning
PS: if I get a 🤣 emoji from our tech members, then I know my thinking is wrong. 😂😂😂
 
At 9:15 she asks him about the use of neuromorphic chips in regard to event-based vision systems. He states that he thinks SNNs are a good fit, but then after that he said something strange, considering our previous conversation with Prophesee on our podcast, where they basically said that BrainChip is the key to their success (paraphrased).

He said: “but I’m not sure if the optimal processing architecture has been identified yet”. A very strange thing to say when Luca Verre spoke so highly of BrainChip and Akida and the recent partnership etc. Am I reading too much into this?
I would say yes to your question: you are reading too much into this. Simple logic based on known facts.

AKIDA is the only commercial SNN available as IP. AKIDA IP is so advanced it is science fiction. So if, for example, AKIDA were producing 96% accuracy with Prophesee's event-based vision sensor, it would be a long way ahead of anything out there, and yet still 4% short of 100% accuracy. That missing 4% allows for a statement questioning whether they have found the optimal solution without telling an untruth.

The other thing, however, is that if AKIDA is producing 96% today, then just as when it first produced 94% with NaNose, this level of accuracy will increase over time. With NaNose it increased, on various reports, to 96% to 98% accuracy, and that was over a year ago now, so who knows what has been achieved since then. So with incremental learning and fine-tuning, 96% today could be 98% tomorrow, or 100%. At 100% there would be no room to suggest it was not optimal.

On the speculative side of the equation, do not forget that both Prophesee and AKIDA are working with NASA and DARPA, and some pretty secret research is being undertaken across these two bodies with respect to hypersonic missile tracking. If this old technophobe can work out that event-based vision and AKIDA provide the avenue for finding and tracking hypersonic missiles, highly intelligent techies working for foreign governments would have worked it out, and so there would be a lot of pressure on both companies to be circumspect about what they say at this stage.

My opinion only DYOR
FF

AKIDA BALLISTA
 

Diogenese

Top 20
Hi Learning,

Renesas have been developing their DRP-AI for more than 10 years and they are not about to throw away all that R&D for an upstart. I'm guessing their DRP-AI is too complex and inflexible for the lower end of the market, so Akida lite fills the gap.
 

Learning

Learning to the Top 🕵‍♂️
Thanks Dio,

My apologies that my previous post was not clear.

What I was trying to say was that Renesas would use 90% of their in-house DRP-AI and add 10% of Akida to its processing. So in hindsight, it's still Renesas's DRP-AI, just with a little bit of Akida.

Maybe???

Learning 😅
 

Mt09

Regular
They’ll use either DRP-AI or the 2 nodes of Akida they’ve licensed, depending on the application.
 

Diogenese

Top 20
Their actual comment was to the effect that they would use their DRP-AI for the complex stuff and "at the very low end" they would use Akida, for which they had the necessary licence.
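
Purely as a hypothetical sketch of what that split could look like in software — none of these names (Workload, run_on_drp_ai, run_on_akida_nodes) are real Renesas or BrainChip APIs — a runtime on such a part might simply route each workload by a complexity budget: the complex stuff to DRP-AI, the very low end to the two licensed Akida nodes.

```python
# Purely hypothetical illustration of the point above: a device with two
# accelerators might route workloads by complexity. None of these names
# are real Renesas or BrainChip APIs; the budget value is invented.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    macs_per_inference: int   # rough complexity proxy

def run_on_drp_ai(w: Workload) -> str:
    return f"{w.name}: scheduled on DRP-AI"

def run_on_akida_nodes(w: Workload) -> str:
    return f"{w.name}: scheduled on 2-node Akida IP"

def dispatch(w: Workload, low_end_budget: int = 10_000_000) -> str:
    # "complex stuff" -> DRP-AI; "the very low end" -> Akida
    if w.macs_per_inference > low_end_budget:
        return run_on_drp_ai(w)
    return run_on_akida_nodes(w)

print(dispatch(Workload("object detection", 500_000_000)))
print(dispatch(Workload("keyword spotting", 2_000_000)))
```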
 

robsmark

Regular
So you go and have a beer with a top line horse trainer, 3 drinks in your asking him about things in his stable ,which horses to back, , A meeting with Tony and what your talking about the Weather
Sorry David, as hard as I tried, I just couldn’t work out what you were saying/implying.
 

Learning

Learning to the Top 🕵‍♂️
Thanks Mt & Dio.

I will stop imagining things. 😅😂🤣

Learning
 