BRN Discussion Ongoing

Brainchip really needs to surprise the market with an announcement.
In free fall atm
With the ground fast approaching
 
  • Like
  • Haha
Reactions: 10 users

Iseki

Regular
Dio... your earlier pick-up on software development being the current direction Mercedes is working in was a very good assessment on your part. An announcement is pending from Mercedes, possibly around October; who knows, maybe Brainchip's name will be released to the market, similar to early 2022.

Appreciate your continued solid contribution to this forum, thank you....Tech (y)
@TECH @Diogenese

Isn't MB's decision to go with software yet more proof that the IP-only licensing model isn't working? Not even MB can go it alone, license the IP and manufacture the Akida chips.

Would MB be going down the software route if Akida chips were available?

Isn't the whole notion of comparing ourselves with Arm completely unproven? Yes, companies will license IP for chips where similar chips are already running in a billion devices. But they will wait for someone else to make a chip that no one else has, and see if it works.

The MB story says one thing: yes, we love the great technology; no, we will not license Akida IP to put in chips.
 
  • Like
Reactions: 1 users

Diogenese

Top 20
Hi @Diogenese

Where did you get the article with TENNs from? I cannot find it in the quoted link. Was it deleted in the meantime, or is it your opinion? :unsure:
It is a bit confusing.
Hi Chips,

I deduced that Mercedes has had access to TeNNs software for 2 years.

The TeNNs patent was filed in mid-2022. That made it possible for BRN to disclose the tech to EAPs, including Mercedes.

On LinkedIn, Magnus has made repeated references to the software-defined vehicle and to splitting hardware and software development, even making reference to "new software algorithms specifically designed to work with neuromorphic hardware", and to "Hey Mercedes!" and controlling almost everything electronically with your voice.

He also explained that the adoption of automotive-grade chips requires extensive testing:

"Widespread use of neuromorphic computing will depend on many factors. The technology requires new programming and algorithms, so it will not immediately replace traditional processors. One key factor for us is that automotive-grade chips must meet extremely strict reliability requirements. However, we are already actively working to drive development and we are committed to being the first to use this technology in the automotive industry."
 
  • Like
  • Fire
  • Love
Reactions: 50 users

Iseki

Regular
Brainchip really needs to surprise the market with an announcement.
In free fall atm
With the ground fast approaching
Even if they don't have a contract to announce, I think an ann. to say we're taping out something would stop the rot.
 
  • Like
Reactions: 2 users

Diogenese

Top 20
Hi Chips,

I deduced that Mercedes has had access to TeNNs software for 2 years.

The TeNNs patent was filed in mid-2022. That made it possible for BRN to disclose the tech to EAPs, including Mercedes.

On LinkedIn, Magnus has made repeated references to the software-defined vehicle and to splitting hardware and software development, even making reference to "new software algorithms specifically designed to work with neuromorphic hardware", and to "Hey Mercedes!" and controlling almost everything electronically with your voice.

He also explained that the adoption of automotive-grade chips requires extensive testing:

"Widespread use of neuromorphic computing will depend on many factors. The technology requires new programming and algorithms, so it will not immediately replace traditional processors. One key factor for us is that automotive-grade chips must meet extremely strict reliability requirements. However, we are already actively working to drive development and we are committed to being the first to use this technology in the automotive industry."
Hi @CHIPS,

This is a collection of neuromorphic-related linkedin quotes from Magnus this year:

In neuromorphic computing, those human neurons and synapses are modelled in circuits and communication is event-driven, with information coded in spikes, mimicking the processing fundamentals of the brain. Those spikes propagate through a Spiking Neural Network of artificial neurons and synapses to predict results. Information processing is measured by spike rate or spike time instead of the number of calculations. Thus, neuromorphic chips are more energy efficient and have lower latency than conventional CPUs and GPUs. That means much faster computation using considerably less power.

However, this change in data processing also requires new software algorithms specifically designed to work with neuromorphic hardware. Existing algorithms can only partially leverage the many benefits of neural technology. ...

We at Mercedes-Benz AG are currently working on novel algorithms that take advantage of neuromorphic computing to improve the energy efficiency and performance of our cars. Our primary goals are to extend vehicle range, make safety systems react faster, and increase the number of #AI functions possible. In 2020, we already joined the #Intel Neuromorphic Research Community and since then we are continuously expanding our collaborations with other research partners and universities to ensure our software and hardware solutions continue to lead the industry.
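The event-driven processing Magnus describes can be illustrated with a toy leaky integrate-and-fire (LIF) neuron. This is a generic textbook sketch in Python, not BrainChip's Akida or TENNs implementation; the leak and threshold values are arbitrary:

```python
# Toy leaky integrate-and-fire (LIF) neuron: a generic illustration of
# event-driven, spike-based processing (not any vendor's implementation).

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes."""
    v = 0.0           # membrane potential
    spikes = []
    for t, x in enumerate(inputs):
        v = v * leak + x          # leak, then integrate the input
        if v >= threshold:        # threshold crossed -> emit a spike
            spikes.append(t)
            v = 0.0               # reset after spiking
    return spikes

# A sparse input stream: the neuron only does meaningful work on events.
print(lif_run([0.0, 0.6, 0.6, 0.0, 0.0, 1.2, 0.0]))  # -> [2, 5]
```

The point of the sketch is that between input events the neuron does essentially nothing, which is where the energy savings of event-driven hardware come from.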

Later he said:

The innovative 'Hey Mercedes' #MBUX Voice Assistant has been an enormous success for Mercedes-Benz AG, and we are continuously expanding its features. Last year we added a #ChatGPT AI beta programme in the U.S., and soon we will launch our MBUX Virtual Assistant based on a large language model (LLM) and generative AI.
... and then:

With our next #MBUXVirtualAssistant running on MB.OS, you will be able to control almost everything electronically with your voice using natural and fully interactive speech.

... and then:

Widespread use of neuromorphic computing will depend on many factors. The technology requires new programming and algorithms, so it will not immediately replace traditional processors. One key factor for us is that automotive-grade chips must meet extremely strict reliability requirements. However, we are already actively working to drive development and we are committed to being the first to use this technology in the automotive industry.

and then:

A combination of voice and imagery will provide a more natural, intuitive, personal and empathic way to communicate with the car. The MBUX Virtual Assistant will understand what you want and transfer those feelings and emotions into actions.


... and then:

Our digital and visual user interface between the vehicle systems and the customer, allowing them to control the vehicle comfort features and infotainment, including the AI-based MBUX Virtual Assistant, navigation and other applications.

By decoupling the software and hardware innovation cycles, we can ensure our vehicles are constantly up to date.

and then:

However, the design of MB.OS demands a different approach because we are decoupling the hardware and software innovation cycles and integration steps. This will make software development and integration much faster, and it also facilitates the constant flow of innovation into the vehicle, resulting in better products for our customers.


https://www.linkedin.com/posts/magnus-östberg_mercedesbenz-mbuxvirtualassistant-ai-activity-7204012074081931264-xCIE/?utm_source=share&utm_medium=member_ios

#MercedesBenz is reinventing the in-car digital experience by leveraging the power of artificial intelligence.

Navigation is one example. Today, more than three million customers around the world are using Google Place Details in their cars. Next year, with the advent of our new own Mercedes-Benz Operating System, MB.OS, we will take navigation to a new level with MBUX Surround Navigation.

Voice assistance also benefits from AI. Our upcoming #MBUXVirtualAssistant uses generative AI and advanced 3D graphics to make interactions more natural, intuitive and personalized.

None of these game-changing technologies would be possible without the skill and dedication from our team of talented engineers. So, it’s great to be able to announce that our MBUX Virtual Assistant technology has won the Automotive AI Product of the Year award from the ICA Summit 2024.👏

Recognition from the industry strengthens our focus and further underlines why #AI is at the heart of our software strategy.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 65 users
Brainchip really needs to surprise the market with an announcement.
In free fall atm
With the ground fast approaching
You had me really excited for a minute, as I was hoping we were approaching 20 cents, only to see us green.

 
  • Haha
  • Like
Reactions: 5 users

Diogenese

Top 20
Hi Chips,

I deduced that Mercedes has had access to TeNNs software for 2 years.

The TeNNs patent was filed in mid-2022. That made it possible for BRN to disclose the tech to EAPs, including Mercedes.

On LinkedIn, Magnus has made repeated references to the software-defined vehicle and to splitting hardware and software development, even making reference to "new software algorithms specifically designed to work with neuromorphic hardware", and to "Hey Mercedes!" and controlling almost everything electronically with your voice.

He also explained that the adoption of automotive-grade chips requires extensive testing:

"Widespread use of neuromorphic computing will depend on many factors. The technology requires new programming and algorithms, so it will not immediately replace traditional processors. One key factor for us is that automotive-grade chips must meet extremely strict reliability requirements. However, we are already actively working to drive development and we are committed to being the first to use this technology in the automotive industry."
I wonder if Mercedes' "working to drive development" of neuromorphic computing involves introducing Nvidia to the necessity of incorporating an SNN front-end into their automotive GPU?
 
  • Like
  • Fire
  • Thinking
Reactions: 19 users

RobjHunt

Regular
Cheap as chips 😉
 
  • Like
Reactions: 2 users
Not sure if already posted

Result #: 1
Document/Patent number: US-11989645-B2
Title: Event-based extraction of features in a convolutional spiking neural network
Inventors: McLelland, Douglas et al.
Publication date: 2024-05-21
Pages: 48




@Diogenese / @TECH Is this the US filing of our existing Australian patent?
 

Attachments

  • Event-based extraction of features in a convolutional spiking neural network.pdf
    3.7 MB · Views: 147
Last edited:
  • Like
  • Love
  • Fire
Reactions: 30 users

Calsco

Regular
When is the next quarterly due out?
 

Gemmax

Regular
  • Like
Reactions: 2 users

7für7

Top 20


Good night chirpers
 
  • Like
  • Haha
Reactions: 5 users

CHIPS

Regular
Hi Chips,

I deduced that Mercedes has had access to TeNNs software for 2 years.

The TeNNs patent was filed in mid-2022. That made it possible for BRN to disclose the tech to EAPs, including Mercedes.

On LinkedIn, Magnus has made repeated references to the software-defined vehicle and to splitting hardware and software development, even making reference to "new software algorithms specifically designed to work with neuromorphic hardware", and to "Hey Mercedes!" and controlling almost everything electronically with your voice.

He also explained that the adoption of automotive-grade chips requires extensive testing:

"Widespread use of neuromorphic computing will depend on many factors. The technology requires new programming and algorithms, so it will not immediately replace traditional processors. One key factor for us is that automotive-grade chips must meet extremely strict reliability requirements. However, we are already actively working to drive development and we are committed to being the first to use this technology in the automotive industry."

Thanks for the reply and explanation, @Diogenese. (y) Now I understand.
 
  • Like
Reactions: 2 users

CHIPS

Regular
  • Haha
  • Like
Reactions: 9 users
What??? :oops:o_O I AM NOT CHEAP! :mad:

Umm yeah, you are..

In fact, here in Australia they named a business after you, with over 50 stores Australia wide.


20240606_191448.jpg



Sorry...
 
  • Haha
  • Like
  • Wow
Reactions: 15 users

Damo4

Regular
1000007397.png
 
  • Like
Reactions: 13 users

cosors

👀
“Demonstration of the Power of TENNs”

BrainChip should get the catchword PoTENNs patented, even though it would be mean (as well as erroneous) to consequently diagnose the competition with ImPoTENNs. 😉
Yes, exactly!
Listen to this album:
1717675092631.png
 
Last edited:
  • Love
  • Fire
Reactions: 5 users

IMG_0074.jpeg
 
  • Like
  • Love
Reactions: 16 users

cosors

👀
The ECB cuts interest rates in the EU for the first time since 2019.
 
  • Like
  • Fire
  • Thinking
Reactions: 19 users
Was just reading the article below, just released on Semi Engineering, and it made me think of comments @Diogenese posted earlier about MB and the SDV, and also the recent BRN vacancy (excerpt below). It made me reread the article in those contexts, especially the highlighted section... interesting, IMO.

Longish, but worth at least a skim to understand some of the hurdles, current and future, for AI players like us.

Senior Machine Learning Engineer

BrainChip, Inc.

Excerpt:

Working knowledge of design processes and methodology (i.e., ISO, Automotive Qualification); ensure all technical documentation is customer-friendly and consumable.

The Uncertainty Of Certifying AI For Automotive


Making sure systems work as expected, both individually and together, remains challenging. In many cases, standards are vague or don’t apply to the latest technology.

JUNE 6TH, 2024 - BY: ANN MUTSCHLER


Nearly every new vehicle sold uses AI to make some decisions, but so far there is no consistency in what is being developed, where it is being used, and whether it is compatible with other vehicles on the road.

This fragmentation is partially due to the fact that AI is still a nascent technology, and cars and trucks sold today may be significantly different than those that will be sold several model generations in the future. That makes it difficult to create standards because no one knows yet how this technology will evolve. It’s also partially due to the fact that new autonomous features are highly competitive, and carmakers and their suppliers are working in secret to bring the latest technology to market.
As a result, while carmakers typically adhere to standards such as ISO 26262, ASIL A-D, and AEC-Q100, there is a lot of technology that falls outside of those standards. And because AI is being used in many applications within the car, there will be different AI algorithms and AI graphs used depending on the specific application.

“Most of us know that the safety-critical ADAS applications are AI-based, and that’s when you’re doing automatic emergency braking or lane-keeping or adaptive cruise control,” said Ron DiGiuseppe, automotive IP segment manager at Synopsys. “But there are other applications in the car that many people don’t realize are also AI-based, such as the power train, and in electric vehicles, the electric motors have lots of sensors. Managing the electric motors can be an AI application. There are various benefits to having AI manage electric vehicles, and also do some predictive analytics for reducing hardware costs and removing some of those internal sensors in the electric motor and powertrain by using AI. Infotainment, while a separate application, uses AI differently, such as in a driver monitoring system that uses images from cameras to make sure the driver is awake. The AI then has to interpret if the driver is alert.”
While the auto industry for decades has utilized certifications and compliance testing, this kind of standardization hasn’t happened yet for AI.

“We cannot talk about compliance, as no standards/regulations yet exist for AI,” said Riccardo Vincelli, director of engineering for high-performance computing at Renesas Electronics. “We can only talk today about ‘suitability’ for the target application. In the case that AI systems are employed into non-safety applications, such as speech recognition, the challenge is mainly to have functions that can fully satisfy customer expectations. But for safety applications like automated driving, the challenge is big, and we need to create defensible arguments why solutions based on AI are considered to be sufficiently safe. This is still a big challenge, and effort is being spent to reach this target. In fact, I cannot say that today there are applications in the field based on AI systems that can be considered safe unless this AI is used together with functions based on conventional technology.”

This hasn’t slowed down the pace of AI development and deployment in vehicles. But to bring this new technology to market, algorithms need to be trained in a vehicle under real workloads, which can vary greatly depending on the type of inferencing chips or accelerators.

“This training happens offline,” said David Fritz, vice president of hybrid and virtual systems at Siemens EDA, “and represents itself into these neural networks, of which there might be many, and they are the same as non-automotive applications. The process is the same, even though the inputs are different. The main point is, the results of that training are neural network configurations and weightings, and the results of that training are just like any other software — it still needs to run on a piece of hardware. That hardware could be an NPU, GPU, CPU, or a DSP. Anything that does the AI inferencing is like software running on hardware. In terms of certifying that for ASIL-D or ISO 26262 it’s the same. You want to inject faults into the hardware that’s actually performing the inferencing. You want to put false input data into the inferencing.”
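The fault-injection idea Fritz describes can be sketched at a very small scale: take a toy "inference" (a hard-coded linear decision), flip single bits in one weight, and see which faults change the decision. This is purely illustrative Python; real campaigns inject faults at the RTL or gate level of the inferencing hardware, and the weights and threshold here are made up:

```python
import struct

# Toy fault-injection campaign: flip each bit of one weight in a
# hard-coded linear "inference" and report faults that change the decision.

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 double representation of x."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

def predict(weights, inputs):
    score = sum(w * x for w, x in zip(weights, inputs))
    return 1 if score > 0 else 0   # 1 = "brake", 0 = "no action"

weights = [0.8, -0.3, 0.5]          # made-up model parameters
sample = [1.0, 2.0, 0.4]            # made-up sensor reading

baseline = predict(weights, sample)
for bit in range(64):               # exhaustive single-bit faults on weight 0
    faulty = weights[:]
    faulty[0] = flip_bit(weights[0], bit)
    if predict(faulty, sample) != baseline:
        print(f"bit {bit} of weight[0] flips the decision")
```

The same loop run over every weight, and over the inputs (Fritz's "false input data"), gives a crude picture of which single-bit faults are safety-relevant for this toy model.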

Murky standards, but more of them
Where standards do exist, they tend to be very broad. ISO 21434 is a case in point. “Recently, a lot of those requirements have been flowing down to the semiconductor companies,” said Jason Oberg, chief technology officer at Cycuity.

“It’s a whole process of threat analysis and risk assessment (TARA) that says you need to go through and build out this big spreadsheet that documents all of your security requirements, which includes how you actually validated and verified that you’ve met the security requirements and the supporting data. That’s a process that the semiconductor companies are having to go through right now. We fit into that because they have to verify the security requirements, and make sure they provide the right evidence. Given that ISO 21434 is fairly general, if it’s a new functioning of the chip, whether it’s something simple or an actual ADAS AI-type use case, they’re going to have to go through that same type of certification, and they’re going to have to document the security requirements, provide evidence, and so on.”

While IP vendors work to have their products ISO 21434-certified, SoC developers are getting entire SoCs automotive certified. “This is something where we see some companies being more proactive, because they see it as a competitive advantage,” Oberg said. “If automotive is a big market for them, they certify their products. And whether it’s an IP vendor or not, that activity is ramping up. It’s not at the point where it’s being forced, but there are a lot of folks trying to get ahead of it because they know it’s going to be mandated at some point.”

At present, many of the standards applied to AI have been in place since before AI was as ubiquitous or as well trained as it is today for a variety of applications. As a result, while chipmakers still need to prove their devices will behave reliably and within spec, the compliance testing tends to be more general. Depending on what the AI is controlling, those tests can be extremely rigorous, but they may not pick up on all the nuances of how the AI will behave on the road.

“AI plays a significant role in the automotive industry,” said Amit Kumar, product marketing director for vision, AI, radar, lidar and DSP cores at Cadence. “One would think that AI gets implemented at the vehicle level only, but AI plays a significant role in designing vehicles at the lab level and gets implemented at the design level, the production/factory level, QA and testing levels, and for predictive maintenance. Then it eventually reaches in-vehicle, which needs to operate seamlessly and within the parameters of standards like ISO 26262 (vehicle standard), SOTIF (safety of intended functionality), and so forth. These machines trained with AI algorithms need to safely perform tasks that previously required an experienced assembly team on the production floor, and eventually an experienced driver to operate a vehicle in an on-road traffic environment and to perform driving maneuvers better than an experienced driver with full safety.”

Key steps and considerations for implementing AI into the safety and reliability in an automotive application include the understanding that safety is paramount in automotive applications, Kumar explained. “One needs to ensure that their AI systems meet safety requirements. Certification bodies like Underwriters Laboratories (UL) provide safety training for autonomous vehicles and include machine learning safety. Risk assessments are crucial. Predicting potential hazards and mitigating them is a key function of AI applied into perception systems, path planning and motion control. Companies like Tesla use Hydranets, which are used on many images coming from a vehicle perception sensor suite and sent to a single backbone and further re-distributed onto multiple network heads, each responsible for performing functions like object detection, traffic lights, lane markings, etc. These networks are then fused onto a transformer to perform either a spatial fusion or a temporal fusion. These Hydranets and the platform they are running are thoroughly designed keeping functional safety standards (FuSA) in their design architecture.”
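The shared-backbone, multi-head pattern Kumar attributes to Tesla's Hydranets can be sketched in a few lines. The "backbone" and head rules below are stand-ins with made-up thresholds, not Tesla's networks; the point is that the expensive feature extraction runs once and every task head reuses it:

```python
# Sketch of the "shared backbone, multiple heads" (Hydranet) pattern.
# Toy arithmetic only; real systems use large CNN/transformer backbones,
# and the head names and thresholds here are illustrative assumptions.

def backbone(pixels):
    # Stand-in for a CNN: produce one shared feature vector per image.
    return [sum(pixels) / len(pixels), max(pixels), min(pixels)]

def head_object(features):     # e.g. object detection
    return "object" if features[1] > 0.8 else "clear"

def head_lane(features):       # e.g. lane-marking detection
    return "lane" if features[0] > 0.5 else "no-lane"

image = [0.9, 0.7, 0.2, 0.6]
shared = backbone(image)       # computed once ...
results = {                    # ... consumed by every head
    "objects": head_object(shared),
    "lanes": head_lane(shared),
}
print(results)  # -> {'objects': 'object', 'lanes': 'lane'}
```

Adding a new task means adding a head, not re-running (or retraining) the whole perception stack, which is why the pattern suits a sensor suite feeding many functions at once.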

And even before vehicles are manufactured, the machines responsible for manufacturing the vehicles, are trained. Here, ML algorithms and AI play a crucial role. “When it comes to design and manufacturing, AI plays a role in vehicle manufacturing,” Kumar said. “AI-powered solutions and ML algorithms are used to improve production processes, as well as to speed up data classification during risk assessments and vehicle damage evaluations. Here, technologies like computer vision and NLP are widely applied in manufacturing. As well, collaborative robots can handle critical tasks like material handling and inspections in a safe environment and with efficiency.”

Processes and procedures
To ensure automotive safety and security compliance, the various applications in an automotive system have to be broken down by function. “Something like ADAS is obviously safety critical, so AI applications for ADAS would have a different level of safety criticality than the infotainment generative AI where you’re talking to the car to turn on the radio, change the temperature, or make a phone call,” said Synopsys’ DiGiuseppe. “That has a different level of safety criticality, so the safety is application-based.”

Once the risk is defined, the application safety integrity level (ASIL) rating is determined. “Different applications have different ASIL safety levels, depending on the risk,” he said. “The risk is composed of, if a failure happens, what would be the severity of that failure? For instance, if a failure happens in the radio, generally that’s not considered a high severity type of failure, while an ADAS failure is. So there are different classes of severity. There are also different classes of probabilities. What is the probability that a failure would happen in the ADAS system? What are the types of failures? That leads to the consideration of what that ASIL target is. You look at the severity of a possible failure, and the probability of that failure happening. That helps you decide what your safety integrity level is.”
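DiGiuseppe's severity-and-probability reasoning is formalised in ISO 26262-3, which rates hazards by severity (S1-S3), exposure (E1-E4) and a third factor, controllability (C1-C3). The lookup below uses the common additive summary of the standard's determination table (a sum of 10 maps to ASIL D, down to 7 for ASIL A, lower sums to QM); treat it as a mnemonic sketch, not a normative implementation:

```python
# Mnemonic sketch of ASIL determination per the ISO 26262-3 risk table.
# S = severity (1-3), E = exposure (1-4), C = controllability (1-3).
# The full table is commonly summarised as: S + E + C = 10 -> ASIL D,
# 9 -> ASIL C, 8 -> ASIL B, 7 -> ASIL A, anything lower -> QM.

def asil(severity: int, exposure: int, controllability: int) -> str:
    if not (1 <= severity <= 3 and 1 <= exposure <= 4 and 1 <= controllability <= 3):
        return "QM"  # S0/E0/C0 classes fall outside the table
    grades = {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}
    return grades.get(severity + exposure + controllability, "QM")

print(asil(3, 4, 3))  # life-threatening, high exposure, uncontrollable -> ASIL D
print(asil(1, 2, 1))  # minor, rare, easily controllable -> QM
```

An ADAS braking failure scores high on all three factors, which is why the article treats it as safety-critical, while a radio failure lands at QM or a low ASIL.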

Then, when the ASIL level is decided, there is a system-level challenge to break down the hardware and software components of the system, and safety assessments are done to hit those target ASIL levels.

“If it’s an AI-based application — let’s say, ADAS with high levels of possible severity — it could be life critical if the ADAS system fails,” DiGiuseppe said. “That has high possible severity. The automakers break down the system to their suppliers, and in the case of an ADAS module that one of the big Tier Ones supplied to the OEM, within that ADAS module is the ADAS semiconductor processors. You break down the system into its baseline components, and in the semiconductor chips where the ADAS processors are composed of different IP, you’re breaking down from a system to all of its component parts, including all the way down to the sub-IP functions like the AI accelerators in those ADAS processors. You break it down to all the component parts and you have to have an ASIL assessment roll up, from the top to the bottom, and bottom to top. That’s what all of these supply chains need to do, and each supplier in the supply chain rolls up that safety information to the next higher level. The IP supplier provides the safety work products/safety assessments to semiconductor customers. Then the semiconductor company does that on an SoC level, provides it to the module supplier, the module supplier will do it on the whole module system, and then the automaker will do it on the whole application. In breaking down the systems, you have both the software components, so the AI component of software, as well as the hardware component, and you have to do both the hardware and the software safety assessments.”

The level of security is likewise determined by the risk, and it can be equally stringent. But with security, that kind of testing may involve multiple systems rather than just focusing on a particular function.

“Security is all about re-verifying along the way,” said Cycuity’s Oberg. “There’s a fundamental limitation in security where it’s not composable, meaning you can’t verify just the hardware, then just the software, and assume that once they are together it’s going to be secure. You actually have to do the hardware, then you have to do the software, and then the third part is them together. With automotive, you have to start at the beginning, make sure the IP is behaving securely, ensure the IP integrated into the system is secure, ensure the software that’s running on your system is secure. And all of this is interacting together, so as you get into the software domain, that’s why emulation is really important. You have to run your actual firmware, your actual boot image, with the real hardware and ensure that everything that was specified in your TARA (threat assessment and remediation analysis), for example, is not being violated now. In a typical semiconductor company, they’re going to do a lot of block-level analysis, and need to make sure things are validated and working there. Then they’re going to build the SoC and maybe have ‘topple the SoC’ tests, to make sure everything is being validated.”

System-level concerns
That’s only part of the challenge. “Ultimately, you’re going to run software on that, and you need this consistency across that whole lifecycle,” Oberg said. “That’s where it becomes really important. Where it gets challenging is once the silicon ships and someone’s actually putting their own software on it, then it becomes more fragmented and a little scarier, but that’s just the reality. Companies that are fully vertically integrated, like Tesla on the car side, control a lot of the chip design even though they buy third-party silicon. But they also build their own so they can control that whole stack, just like Apple can with their phones and tablets, and so on. It becomes more challenging as it gets more fragmented.”

There are further considerations with the hardware and software. “The aim of ISO 26262 is to guarantee the absence of unacceptable risks caused by random hardware faults (relevant just for hardware) and systematic faults (relevant for both hardware and software), whereas the aim of AEC-Q100 is to guarantee a minimum level of quality/reliability for hardware components,” Renesas’ Vincelli explained. “As such, for hardware components used to execute AI functions as an SoC, ISO 26262 and AEC-Q100 are still fully relevant and applicable. There is no need to change with respect to what was done already for hardware components not based on AI. Then, for software components involved in AI functions, it may not be possible to always apply or comply to ISO 26262 because ISO 26262 was created for traditional deterministic software, developed based on a V cycle, while the AI applications have a probabilistic nature and are trained to perform the required functions through examples. Hence, a quite different approach.”

Rather than making AI compliant with ISO 26262, Vincelli believes there is a need to extend it by considering an additional set of methods and techniques to address desired safety proprieties of AI applications, or by mandating a certain way to develop AI applications that allows their review. “Since AI systems are data-dependent, a small but not foreseen change in the environment where the AI application is operating could cause safety issues, because it is not known how the AI application will behave,” Vincelli said. “ISO 26262 could be extended by considering how to deal with the impact and severity of such unforeseeable scenarios that could lead to unacceptable risk.”

And because AI is a relatively new and fast-growing topic, existing standards like ISO 26262 are not considering AI technologies yet. “Other standards like the ISO PAS 8800, expected to be published in the middle of this year, have taken the task to provide an automotive-specific guidance on the use of AI technologies,” he said. “A possible direction is for the ISO committee to extend ISO 26262 in the next release by incorporating lessons learned with ISO PAS 8800, with potentially also normative requirements.”

Further, several additional initiatives like The Autonomous, Ground Vehicle Artificial Intelligence (GVAI) committee from SAE, SAFEXPLAIN and others are forming with the goal to identify ways to make AI systems safe by creating techniques and methods to develop and enable review of these AI systems.

Conclusion
Specific approaches and methodologies for achieving compliance with automotive safety and security standards are not fully baked when it comes to AI. That will take time, and it will require cooperation among automotive companies, as well as by different teams within those companies.

“You have the functional safety team that understands, ‘I’m going to inject a stuck-at fault. Did it recover properly?'” said Fritz. “Then we have the SOTIF (safety of the intended functionality) team, and that one is a little bit different. Then, what I like to see is a third validation team that is responsible for all of these different system-level scenarios that collect those. And once all the other teams have done their parts, the scenario team says, ‘Okay, I have 10,000 scenarios I’m going to run tonight. All of them are corner cases. You passed them last week. Do you pass them still?’ The point about those is, they are the only ones that are system-wide. Does the system itself behave as it did before, or does it behave correctly, where all the others are very unit-based, segregated, siloed, and have no understanding of what’s happening elsewhere throughout the system or its impact on what you’re doing?”

While all OEMs do not have all three of those teams in place today, Fritz notes that it’s still a work in progress. “Currently, the Tier Twos that are producing silicon will do ASIL-D testing and say, ‘done.’ Those devices will go to the Tier One supplier and they’ll say, ‘Okay, we got our software going, we did ISO 26262, we are done.’ Then it goes to the OEM and no one knows what’s going to happen when you plug all of these hundreds of pieces together. In fact, the concept of software-defined vehicle, the concepts of virtualization, digital twins and all of that, the whole shift left paradigm is really all about those processes becoming part of a holistic methodology so that this can all be done not at the end, in what we call the integration storm, but continuously.”

But this requires continuous integration, development, and iteration, and it’s up to the OEM to orchestrate it all. “They’re just not ready,” he said. “Most don’t even understand it. They are having trouble figuring out why, when they had thousands and thousands of hours of testing of their software for their EV, it still doesn’t work. What’s needed is a methodology that comprehends that whole process, from exploring the architectures, tossing out those that stink, what the software team is doing, and how they’re impacting the hardware team. All of that iterates until you get something that works in the end, and it’s all verified against the physical platform. That’s the solution. The automotive world isn’t ready for that just yet, but they’re beginning to at least adopt fads that are pointing in that direction.”
 
  • Like
  • Fire
  • Love
Reactions: 24 users