Hmmm.... AI cruise control maybe doesn't like humans
Wonder whose AI?
From a few days ago.
BMW Fiasco: Failed Testing, Verification, Validation of AI-driven ADAS
Consumers should question the callousness of automakers using a human driver as a “component” of their safeguards.
What’s at stake: If you think a faulty sensor triggered a BMW to automatically accelerate to 110mph on a U.K. country road, think again. The problem is systemic. The incident exposes the inability of many carmakers to understand how individual modules interact, which is essential to system-level safety.
By now, we hope a Sunday Times of London report, “BMW cruise control ‘took over and tried to reach 110mph’,” has become required reading for every system engineer developing AI-embedded ADAS vehicles, and for consumers eager to embrace automated vehicle features. The story’s alarming subhead reads, “A motorist was sent hurtling over the limit when his car’s technology misread signs.”
Far from a one-off glitch, the incident demonstrates that auto sensors can misread speed limit signs. An advanced automated feature – BMW’s Speed Limit Assist – enabled the car to act autonomously, accelerating the BMW X5 toward 110mph on a 30mph village road in the U.K. county of Essex.
I’m focusing on the BMW incident because the story is full of teachable moments on many levels. If we learn anything from this fiasco, the lessons should apply beyond BMW to all car OEMs and top-tier suppliers developing ADAS features.
The easy way out for carmakers is to attribute a failure to an individual component and its software. That’s BMW’s alibi. As The Times reported, a BMW representative told the driver – who experienced the trauma of his vehicle “taking over” without permission – that “there was ‘no fault with the car’.” The problem, according to BMW, involved a sensor “picking up writing or numbers on the side of the road.”
In its statement, BMW unwittingly acknowledged that it screwed up its system-level engineering. The incident underscores shortfalls in OEMs’ system-level design, testing, verification, and validation of autonomous vehicles and ADAS cars loaded with AI-driven features.
Cross-checking
Among carmakers’ minimum responsibilities is cross-checking ADAS components to determine whether they function together as intended.
Missy Cummings, an engineering professor at George Mason University, told The Ojo-Yoshida Report: “My concerns about this and related incidents is why there is no cross-checking of the speed limit with both the known speed limit on that road….” A digital map would have provided the local speed limit, and sensors would detect local conditions such as time of day and weather.
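To make the point concrete, here is a minimal sketch, in Python, of the kind of cross-check Cummings describes: the camera’s reading is validated against the digital map’s known limit for the road segment and against the set of limits that actually exist in the U.K. Every name and threshold below is hypothetical and invented for illustration; this is not BMW’s software, just one plausible shape for such a sanity gate.

```python
# Illustrative sketch only -- all names, types, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class SignDetection:
    limit_mph: int      # speed limit the camera read from the sign
    confidence: float   # classifier confidence, 0.0 to 1.0

# No U.K. road is posted above 70 mph, so "110" can never be a valid limit.
VALID_UK_LIMITS_MPH = {20, 30, 40, 50, 60, 70}

def plausible_limit(detection: SignDetection, map_limit_mph: int) -> bool:
    """Cross-check a vision-detected speed limit before acting on it."""
    if detection.limit_mph not in VALID_UK_LIMITS_MPH:
        return False  # impossible value: reject outright
    if detection.confidence < 0.9:
        return False  # low-confidence read: ignore rather than accelerate
    if abs(detection.limit_mph - map_limit_mph) > 10:
        return False  # disagrees with the known map limit: flag, don't act
    return True

# The Essex scenario: a misread "110" on a mapped 30 mph road fails
# on two independent grounds, so the assist would never speed up.
reading = SignDetection(limit_mph=110, confidence=0.95)
assert not plausible_limit(reading, map_limit_mph=30)
```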
Phil Koopman, a safety expert and associate professor at Carnegie Mellon University, agreed. “A vision-based speed limit sign system will have a substantive error rate, and the OEM knew this.”
In other words, BMW was aware this could happen.
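Koopman’s point suggests a second, equally standard mitigation: when a sensor channel has a known per-frame error rate, never act on a single frame. Below is a hedged sketch, again with invented names, of a debounce that requires several consecutive agreeing detections before the assist may change its target speed; with a per-frame misread probability p, demanding n agreeing frames drives the chance of acting on a spurious sign toward p**n.

```python
# Illustrative sketch only -- not any OEM's implementation.
from collections import deque

class SpeedLimitDebouncer:
    """Accept a new speed limit only after n consecutive agreeing frames."""

    def __init__(self, frames_required: int = 5):
        self.recent = deque(maxlen=frames_required)
        self.accepted_mph: int | None = None

    def update(self, detected_mph: int) -> int | None:
        self.recent.append(detected_mph)
        # Promote the reading only when the window is full and unanimous.
        if len(self.recent) == self.recent.maxlen and len(set(self.recent)) == 1:
            self.accepted_mph = detected_mph
        return self.accepted_mph

# A single-frame misread of "110" amid steady 30 mph readings is never
# promoted to the accepted limit.
deb = SpeedLimitDebouncer()
limit = None
for reading in [30, 30, 110, 30, 30, 30, 30, 30]:
    limit = deb.update(reading)
print(limit)  # -> 30
```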
Cummings continued: “The National Highway Traffic Safety Administration’s Standing General Order is replete with ADAS cars getting into accidents where the speed is too high either for the road type or too high for the weather conditions.” The U.S. regulator issued its Standing General Order last June, requiring crash reporting where automated driving or Level 2 advanced driver assistance systems are involved.
With ample data publicly available, carmakers have had time to add cross-checks to their vehicles to catch sensor errors. What have automakers done since last June? Their position remains “that the driver is responsible for mitigating dangerous failures of the feature,” noted Koopman.
The offense here is the callousness of automakers using human drivers as a safety “component.” The objective is shielding the company from liability rather than protecting drivers.