Valeo will be looking after the certification and compliance for Scala 3 as described below.
I believe @chapman89 commented on this article when it was first posted on LinkedIn and received a response from Dr. Henrich Gostzig from Valeo confirming they did a lot of work with neuromorphic SNNs.
DVN-L Interview: Benazouz Bradaï, Valeo AD Innovation Chief
Dr. Benazouz Bradaï is Research & Innovation Director and Master Expert in Autonomous Driving at Valeo. In that role, he’s made major scientific and industrial contributions. He is also a scientific co-director of the ASTRA (Automated systems for Safe TRAnsportation) joint lab with Inria in France.
Bradaï holds a PhD in automatic control from Haute Alsace University in France. His expertise and research interests include sensor fusion, precise localization and mapping, and system architecture. He is a member of IEEE, ADASIS Forum, SENSORIS Consortium, SAE, and SIA (French SAE) as an ADAS/AD expert. He graciously granted DVN-Lidar this interview.
DVN: Will you tell us about your career and work at Valeo?
Dr. Benazouz Bradaï: In 2003, I started a PhD thesis on multi-sensor fusion for ADAS in collaboration with Valeo, in the framework of a CIFRE industrial contract (Conventions Industrielles de Formation par la Recherche, industrial agreements on training through research). At that time, I was working on the fusion of cameras and maps for lighting automation and for Intelligent Speed Assist. In 2007, I obtained my PhD and was hired by Valeo as an ADAS engineer. I was promoted to R&I Project Manager in 2012, then to Autonomous Driving Innovation Platform Manager in 2019. Since 2022, I have been R&I Director for ADAS and Autonomous Driving. As a research and innovation engineer, I have always been passionate about research and innovation, and I have continued working with academia at Valeo, supervising several PhD theses with Mines ParisTech and the Inria lab. Since February 2022, I have been the scientific co-director of a joint lab with Inria called ASTRA (Automated systems for Safe TRAnsportation). I have also progressed along the expert track, becoming an Expert in 2009, a Senior Expert in 2015, and finally a Master Expert in 2022.
DVN: What are the key milestones to achieve robust autonomous driving?
Dr. B.B.: In 2022, Mercedes achieved the roll-out of the first homologated L3 system in the EU, and soon in the US, at speeds up to 60 km/h, in line with UNECE R157 (ALKS). In a few months, Hyundai-Kia will bring another L3 system to various countries including South Korea, some European countries, and North America. To achieve this milestone, safety is a key element that must be proven at the homologation stage. Sensor redundancy is important for safety, and lidar is an enabler for higher autonomy. Lidar is also an enabler for corner cases like 'underdrivable' objects, reducing false positives and overcoming issues other sensors have, such as cameras that can be blinded by the sun, or radar that can produce false positives in tunnels. It allows increased detection range and high-fidelity 3D environment modelling.
Today, L4 autonomous vehicles are very limited in series production. By L4, we mean a safe system where the driver need not intervene at any moment and the system ensures the fallback in case of failure. Some experiments and limited commercial services with L4 systems (robotaxis) are currently running in China and in the US in geofenced, pre-defined areas, but there is no large-scale commercial use yet.
DVN: How important is lidar, and why?
Dr. B.B.: Lidar allows an increased range with a high accuracy of detections for highly retroreflective objects, height measurements, and 3D environment modelling capability.
From a safety point of view, lidar brings an essential technology redundancy that is important for perception of the environment. Most approaches for L3 automation use triple sensor-technology redundancy. Compared to the camera, the lidar cannot be blinded by the sun above the horizon, it increases the detection range, and it has higher accuracy. Compared to the radar, it allows height measurements and has better angular resolution. In addition, lidar allows modelling the 3D environment and is very useful for precise localization and mapping. In urban environments, when the infrastructure can be occluded by other road users or when there are no lane markings, for example, it keeps the function available and thus extends the ODD (operational design domain). Valeo developed Drive4U Locate, a precise localization and mapping system based on our Valeo Scala lidar, which reaches 10-cm accuracy. It maps the environment, detects changes, and updates the map by crowdsourcing.
VALEO DRIVE4U LOCATE: PRECISE LOCALIZATION AND MAPPING BASED ON SCALA LIDAR
DVN: Are there critical use cases which have been solved by lidar?
Dr. B.B.: Lidar sensing technology is well suited to managing so-called 'under-drivable' objects such as sign gantries, bridges, or tunnels: elements of the infrastructure under which vehicles are supposed to drive freely. Traditional sensors such as radars and cameras usually perform poorly at classifying these objects as under-drivable, and lidar brings the additional 3D information to reliably distinguish them from other road users such as cars.
One of the critical use-cases that lidar solves is the detection of debris (e.g., lost cargo) on the highway, a challenge that is directly related to vehicle speed. The ALKS regulation was amended and adopted in 2023 to allow higher speeds, up to 130 km/h on divided highways, including automated lane changes. Lost cargo remains one of the challenging use-cases for this extension. With Valeo's Scala3 lidar, these use-cases can be solved.
DVN: How does Scala3 do with these use cases, compared to the previous Scala2?
LOST CARGO (MATTRESS) ON HIGHWAY DURING VALEO CRUISE4U TEST DRIVES
Dr. B.B.: Underdrivable-object detection tests performed with Valeo Scala3 confirm a better capability to minimize false positives at sensor level (cases where underdrivable objects such as sign gantries or tunnels would otherwise cause unwanted braking), which will translate into much better performance at system level, compensating for the limitations of other sensors in such corner cases.
Of course, the debris-detection range will be greatly increased with Scala3, and our first tests confirm at least a doubled detection distance for objects such as a small tire on the road. This capability will be crucial to bring the top operating speed of autonomous driving functions closer to 130 km/h.
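As a rough illustration of why detection range becomes critical at motorway speeds, the stopping distance at 130 km/h can be estimated from reaction time plus braking distance. The reaction-time and deceleration values below are illustrative assumptions for this sketch, not Valeo or regulatory figures.

```python
# Rough stopping-distance estimate: reaction distance + braking distance.
# All parameter values are illustrative assumptions.

def stopping_distance_m(speed_kmh: float,
                        reaction_time_s: float = 1.0,
                        decel_ms2: float = 6.0) -> float:
    """Distance travelled during the reaction time, plus v^2 / (2a) of braking."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * reaction_time_s + v * v / (2.0 * decel_ms2)

# Under these assumptions, stopping from 130 km/h needs well over three
# times the distance needed from 60 km/h (the current ALKS limit).
print(round(stopping_distance_m(60.0), 1))
print(round(stopping_distance_m(130.0), 1))
```

With these assumed values, the stopping distance grows from roughly 40 m at 60 km/h to nearly 145 m at 130 km/h, which is why a doubled debris-detection range matters for the speed extension.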
Scala3's potential is being evaluated as we speak, with deterministic and statistical campaigns being carried out for our first customers, especially Stellantis.
DVN: Tell us about the V&V (validation and verification) process to launch an autonomous vehicle, will you please?
Dr. B.B.: To validate an automated vehicle of L3 and beyond, the mileage target for validation is not achievable with a realistic budget, and even if it were, it would still not be sufficient. Simulation and virtual validation are key to reducing this budget and covering all the scenarios. But simulation will not be the only validation tool: for homologation, the assessment method combines simulation, physical tests on proving grounds, and real-world tests.
Simulation is mainly used to assess the system's capacity to deal with critical situations that are not testable on proving grounds or public roads. Proving-ground tests allow testing challenging scenarios that are not testable on open roads. They can be combined with simulation to test the vehicle behaviour with simulated data, with repeatable scenarios and fault injection; this is called VIL, for vehicle-in-the-loop. Finally, validation on open roads is performed to assess the system's ability to manage real-world situations, especially its interaction with other road users. It is intended to verify that the system has not been overfitted to specific test scenarios.
DVN: How do Valeo sensors do in bad weather conditions?
Dr. B.B.: As presented by my colleague Ahmed Yousif at the ADAS & Autonomous Vehicle Technology Expo Conference, Valeo develops three types of simulation sensor models:
- High Fidelity Sensor Model models the various components of the lidar, including the laser pulse and the optical path, in addition to effects such as blooming and noise. Each point is classified with a unique ID, class, and material to ease the development of the stack.
- Phenomenological Sensor Model is an object-based model to emulate the perception stack performance and metrics.
- AI Trained Model is an accelerated version of the high-fidelity sensor model, trained on both simulated and real logs and data.
The high-fidelity sensor model is developed taking into consideration weather conditions such as rain and spray; in addition, we are currently working on severe weather conditions such as fog. The phenomenological sensor model, for its part, can emulate the lidar perception-stack performance taking all weather conditions into consideration.
DVN: What are automakers’ expectations regarding lidar sensor models?
Dr. B.B.: Different OEMs use the sensor models in various applications. Here is a summary of the use cases:
- High Fidelity Sensor Model: perception stack and functions development; raw data fusion.
- AI Trained Model: integration into XiL (HiL/SiL/overall HiL); perception stack and functions development; raw data fusion.
- Phenomenological Sensor Model: object-based fusion and integration to HiL and SiL.
The high-fidelity sensor model is required for virtual validation and for verification against real-world test drives. At Valeo we also validate the targeted autonomous driving functions with a digital twin, where we can run tests in simulation using the high-fidelity sensor models and with real data, as well as tests in the real world.
VALIDATION VS REAL WORLD – DIGITAL TWIN
EXAMPLE OF THE HIGH FIDELITY SENSOR MODEL USED IN SIMULATION
DVN: What’s next to improve the ODD for L3 and L4 autonomous driving systems?
Dr. B.B.: Even as ADAS increasingly becomes standard, L2 and L2+ automation will still be dominant, representing more than 50 per cent market share by the end of the decade. These hands-off systems up to 130 km/h will come with progressive ODD extension, including automated lane changes, intersection support, and exit-ramp management between highways.
For L3 and L4 systems, the ODD is also being increased progressively. The first L3 systems on the road are based on ALKS up to 60 km/h, then ALKS up to 130 km/h with automated lane changes starting from 2026-2027.
Lidar technology is key to managing the related critical use-cases, such as lost cargo. There is a trend in China to fit lidar from L2+ in order to prepare the next generation of L3 and L4 systems once the regulation is adopted. Regarding safety, at least a second sensor technology is required for managing these critical use-cases. Other sensor technologies will be introduced to extend the ODD, like the thermal camera to manage adverse weather situations and VRUs (vulnerable road users). Connectivity deployment will allow further ODD extension. For example, at Valeo we are working on innovation to extend L4 highway speed to new challenging use-cases like toll booths and work zones, using connectivity combined with the vehicle's sensor perception.
VALEO CRUISE4U HIGHWAY EXTENDED ODD USING CONNECTIVITY: WORK ZONE MANAGEMENT
VALEO CRUISE4U HIGHWAY EXTENDED ODD USING CONNECTIVITY: TOLL BOOTH MANAGEMENT
DVN: What should be the safety targets for an AV?
Dr. B.B.: Autonomous vehicle behavior must be safe whatever the potential root causes. Compliance with ISO 21448 SOTIF (Safety Of The Intended Functionality) is one of the major challenges in AV design and architecture. For that, the first difficulty automakers are facing is the definition of the acceptance criteria.
A PFA (French Automotive Platform) position paper from March 2019, “Safety Argument for SAE Automation Level 3 and 4 Automated Vehicles”, suggests using the GAME method (a French acronym meaning 'globally at least equivalent') to define these objectives. This method is also recommended by the ISO 21448 standard and the forthcoming ISO/TS 5083 on the safety demonstration of automated driving systems. The principle is that the residual risk induced by the AD system must be less than or equal to that induced by an average human driver.
There are other methods, like MEM (minimum endogenous mortality), ALARP (as low as reasonably practicable), and positive risk balance. Today there is no worldwide, or even European, state of the art. However, a common approach is to take into account accident statistics for similar use cases and derive the acceptance criteria from them. The target rate of fatal accidents induced by the autonomous driving system shall be lower than the fatal-accident rate induced by human driving, divided by a safety factor. This safety factor mitigates the uncertainties arising from the calculation. It can also account for the fact that accident rates evolve from one year to the next and can differ from one country to another.
Considering the GAME method, this factor can be 10, a number used for decades by the safety community. There is still no consensus, but the common approaches are converging towards this factor. Work is ongoing in different working groups, and this factor might be updated.
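The GAME-style acceptance criterion described above can be sketched numerically. The human-driving baseline rate below is an assumed, illustrative figure, not an official statistic; only the structure (baseline divided by a safety factor) comes from the interview.

```python
# GAME-style acceptance criterion: the AD system's residual fatal-accident
# rate must be at most the human-driver baseline divided by a safety factor.
# The baseline value used below is an illustrative assumption.

def acceptance_threshold(human_fatal_rate: float,
                         safety_factor: float = 10.0) -> float:
    """Maximum acceptable fatal-accident rate for the AD system."""
    return human_fatal_rate / safety_factor

def is_acceptable(system_rate: float, human_fatal_rate: float,
                  safety_factor: float = 10.0) -> bool:
    """Check an observed/estimated AD-system rate against the criterion."""
    return system_rate <= acceptance_threshold(human_fatal_rate, safety_factor)

# Assumed baseline: fatal accidents per billion vehicle-kilometres of human driving.
human_rate = 5.0
print(acceptance_threshold(human_rate))  # 0.5 per billion km with factor 10
print(is_acceptable(0.3, human_rate))    # True: below the threshold
```

In practice the baseline rate would be derived from accident statistics for the targeted ODD and country, and the safety factor itself remains under discussion, as noted above.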
https://www.drivingvisionnews.com/news/2023/07/05/dvn-l-interview-benazouz-bradai-valeo-ad-innovation-chief/