But Luminar is the fly in Valeo's ointment:
https://www.luminartech.com/updates/mb23
Expanding partnership and volumes by more than an order of magnitude to a broad range of consumer vehicles
February 22, 2023
ORLANDO, Fla. / STUTTGART, Ger. –– Luminar (Nasdaq: LAZR), a leading global automotive technology company, announced today a sweeping expansion of its partnership with Mercedes-Benz to safely enable enhanced automated driving capabilities across a broad range of next-generation production vehicle lines as part of the automaker’s next-generation lineup. Luminar’s Iris entered its first series production in October 2022 and the company’s Mercedes-Benz program has successfully completed the initial phase and the associated milestones.
After two years of close collaboration between the two companies, Mercedes-Benz now plans to integrate the next generation of Luminar’s Iris lidar and its associated software technology across a broad range of its next-generation production vehicle lines by mid-decade. The performance of the next-generation Iris is tailored to meet the demanding requirements of Mercedes-Benz for a new conditionally automated driving system that is planned to operate at higher speed for freeways, as well as for enhanced driver assistance systems for urban environments. It will also be simplifying the design integration with a sleeker profile. This multi-billion dollar deal is a milestone moment for the two companies and the industry and is poised to substantially enhance the technical capabilities and safety of conditionally automated driving systems.
“Mercedes’ standards for vehicle safety and performance are among the highest in the industry, and their decision to double down on Luminar reinforces that commitment,” said Austin Russell, Founder and CEO of Luminar. “We are now set to enable the broadest scale deployment of this technology in the industry. It’s been an incredible sprint so far, and we are fully committed to making this happen – together with Mercedes-Benz.”
“In a first step we have introduced a Level 3 system in our top line models. Next, we want to implement advanced automated driving features in a broader scale within our portfolio,” said Markus Schäfer, Member of the Board of Management of Mercedes Benz Group AG and Chief Technology Officer, Development & Procurement. “I am convinced that Luminar is a great partner to help realize our vision and roadmap for automated and accident-free driving.”
Luminar has foveated imaging, which I assume helps with long-range imaging by concentrating scan points on items of interest.
Luminar also has the lidar/camera combination:
US10491885B1 Post-processing by lidar system guided by camera information
Post-processing in a lidar system may be guided by camera information as described herein. In one embodiment, a camera system has a camera to capture images of the scene. An image processor is configured to classify an object in the images from the camera. A lidar system generates a point cloud of the scene and a modeling processor is configured to correlate the classified object to a plurality of points of the point cloud and to model the plurality of points as the classified object over time in a 3D model of the scene.
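The correlation step in that abstract can be sketched in a few lines: project each lidar point into the camera image and tag the points that land inside a classified bounding box. This is a hypothetical illustration of the idea, not Luminar's code; the camera intrinsics, box coordinates, and helper names are all assumptions.

```python
# Hypothetical sketch (not Luminar's implementation): lidar points that
# project into a camera-classified 2D bounding box inherit its class label.
import numpy as np

def project_points(points_xyz, fx, fy, cx, cy):
    """Project 3D lidar points (camera frame, z forward) to pixel coordinates."""
    z = points_xyz[:, 2]
    u = fx * points_xyz[:, 0] / z + cx
    v = fy * points_xyz[:, 1] / z + cy
    return np.stack([u, v], axis=1)

def label_points_in_box(points_xyz, box, label, fx=600, fy=600, cx=320, cy=240):
    """Return indices of points whose projection falls inside `box`.

    box = (u_min, v_min, u_max, v_max) supplied by the camera classifier.
    Intrinsics (fx, fy, cx, cy) are assumed example values.
    """
    uv = project_points(points_xyz, fx, fy, cx, cy)
    u_min, v_min, u_max, v_max = box
    inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
              (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max) &
              (points_xyz[:, 2] > 0))          # only points in front of the camera
    return np.flatnonzero(inside), label

points = np.array([[0.0, 0.0, 10.0],   # straight ahead -> projects to image centre
                   [5.0, 0.0, 10.0],   # well to the right, outside the box
                   [0.1, 0.1, 20.0]])  # near centre, further away
idx, cls = label_points_in_box(points, box=(300, 220, 340, 260), label="car")
print(idx, cls)  # points 0 and 2 are tagged as "car"
```

The tagged point subset can then be tracked over time as one object in the 3D scene model, which is the "model the plurality of points as the classified object" step.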
[0030] In embodiments, a logic circuit controls the operation of the camera and a separate dedicated logic block performs artificial intelligence detection, classification, and localization functions. Dedicated artificial intelligence or deep neural network logic is available with memory to allow the logic to be trained to perform different artificial intelligence tasks. The classification takes an apparent image object and relates that image object to an actual physical object. The image processor provides localization of the object within the 2D pixel array of the camera by determining which pixels correspond to the classified object. The image processor may also provide a distance or range of the object for a 3D localization. For a 2D camera, after the object is classified, its approximate size will be known. This can be compared to the size of the object on the 2D pixel array. If the object is large in terms of pixels then it is close, while if it is small in terms of pixels, then it is farther away. Alternatively, a 3D camera system may be used to estimate range or distance.
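The size-to-range heuristic in [0030] is just the pinhole relation: range ≈ focal length × real size / pixel size. A minimal sketch, with the focal length and car height as assumed example values (nothing here is from the patent):

```python
# Illustrative pinhole-camera range estimate, as described in [0030]:
# once the classifier knows the object type, its typical real-world size
# is known, and its apparent pixel size then gives an approximate range.

def estimate_range(focal_px, real_height_m, pixel_height):
    """range = f * H / h : larger in pixels means closer."""
    return focal_px * real_height_m / pixel_height

f = 1000.0        # focal length in pixels (assumed)
car_height = 1.5  # typical car height in metres (assumed)

near = estimate_range(f, car_height, pixel_height=150)  # tall in the image
far = estimate_range(f, car_height, pixel_height=15)    # small in the image
print(near, far)  # 10.0 m vs 100.0 m
```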
Luminar has patents which relate to NNs, but they only describe NNs in conceptual terms, not at the NPU circuit level.
The claims are couched in terms of software NNs, and NNs are described as software components.
US11361449B2 Neural network for object detection and tracking
2020-05-06
[0083] Because the blocks of FIG. 3 (and various other figures described herein) depict a software architecture rather than physical components, it is understood that, when any reference is made herein to a particular neural network or other software architecture component being “trained,” or to the role of any software architecture component (e.g., sensors 102 ) in conducting such training, the operations or procedures described may have occurred on a different computing system (e.g., using specialized development software). Thus, for example, neural networks of the segmentation module 110 , classification module 112 and/or tracking module 114 may have been trained on a different computer system before being implemented within any vehicle. Put differently, the components of the sensor control architecture 100 may be included in a “final” product within a particular vehicle, without that vehicle or its physical components (sensors 102 , etc.) necessarily having been used for any training processes.
1. A method of multi-object tracking, the method comprising:
receiving, by processing hardware, a sequence of images generated at respective times by one or more sensors configured to sense an environment through which objects are moving relative to the one or more sensors;
constructing, by the processing hardware, a message passing graph in which each of a multiplicity of layers corresponds to a respective one in the sequence of images, the constructing including:
generating, for each of the layers, a plurality of feature nodes to represent features detected in the corresponding image, and
generating edges that interconnect at least some of the feature nodes across adjacent layers of the graph neural network to represent associations between the features; and
tracking, by the processing hardware, multiple features through the sequence of images, including:
passing messages in a forward direction and a backward direction through the message passing graph to share information across time,
limiting the passing of the messages to only those layers that are currently within a rolling window of a finite size, and
advancing the rolling window in the forward direction in response to generating a new layer of the message passing graph, based on a new image.
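The rolling-window structure of claim 1 can be sketched as follows. This is a structural illustration only: the window size, the mean-message update rule, and all names are my assumptions, not anything from the patent.

```python
# Minimal structural sketch of claim 1 (not Luminar's implementation):
# each image adds a layer of feature nodes; messages pass forward and
# backward only among the layers inside a fixed-size rolling window.
from collections import deque

WINDOW = 3  # rolling-window size (assumed)

class Tracker:
    def __init__(self, window=WINDOW):
        # deque(maxlen=...) advances the window: a new layer drops the oldest
        self.layers = deque(maxlen=window)

    def add_image(self, features):
        """One layer per image; each detected feature becomes a node."""
        self.layers.append([{"feat": f, "msg": f} for f in features])

    def pass_messages(self):
        """Share information across time: a forward sweep then a backward
        sweep, limited to the layers currently inside the window."""
        # forward: each node mixes in the mean message of the previous layer
        for prev, cur in zip(self.layers, list(self.layers)[1:]):
            m = sum(n["msg"] for n in prev) / len(prev)
            for n in cur:
                n["msg"] = 0.5 * n["msg"] + 0.5 * m
        # backward: the symmetric sweep in the reverse direction
        rev = list(reversed(self.layers))
        for nxt, cur in zip(rev, rev[1:]):
            m = sum(n["msg"] for n in nxt) / len(nxt)
            for n in cur:
                n["msg"] = 0.5 * n["msg"] + 0.5 * m

t = Tracker()
for frame in ([1.0, 2.0], [2.0, 3.0], [3.0, 4.0], [4.0, 5.0]):  # four images
    t.add_image(frame)   # the fourth image advances the window
    t.pass_messages()    # messages never leave the current window
print(len(t.layers))     # only WINDOW layers are ever kept
```

The claim's per-node edges across adjacent layers are collapsed here into a mean-message update for brevity; the point is the bounded, advancing window, which keeps compute and memory constant regardless of sequence length.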
Notably, Luminar has been working with Mercedes for a couple of years, so there is a fair chance our paths would have crossed.
"L
uminar’s Iris entered its first series production in October 2022 and the company’s Mercedes-Benz program has successfully completed the initial phase and the associated milestones.
After two years of close collaboration between the two companies, Mercedes-Benz now plans to integrate the next generation of Luminar’s Iris lidar and its associated software technology across a broad range of its next-generation production vehicle lines by mid-decade."
Given that Luminar only started working with Mercedes 2 years ago, this patent pre-dates that collaboration.
Given that Luminar describes its NNs as software, they would have been very power hungry running on general-purpose processors.
Given that Mercedes is very power conscious, would they have accepted a software NN?
Given that Luminar has only been working with Mercedes for 2 years, could Luminar have developed a SoC NN in that time?
Given that Mercedes has a plan to standardize on components, can we think of a suitable NN SoC to meet Mercedes' requirements?
Hi
@Diogenese
The following is an extract from Luminar's 28.2.23 Annual Report:
"Adjacent Markets
Adjacent markets such as last mile delivery, aerospace and defense, robotics and security offer use cases for which our technology is well suited. Our goal is to scale our core markets and utilize our robust solutions to best serve these adjacent markets where it makes sense for us and our partners.
Our Products
Our Iris and other products are described in further detail below:
Hardware
Iris: Iris lidar combines laser transmitter and receiver and provides long-range, 1550 nm sensing meeting OEM specs for advanced safety and autonomy. This technology provides efficient, automotive-grade, and affordable solutions that are scalable, reliable, and optimal for series production. Iris lidar sensors are dynamically configurable dual-axis scan sensors that detect objects up to 600 meters away over a horizontal field of view of 120° and a software-configurable vertical field of view of up to 30°, providing high point densities in excess of 200 points per square degree, enabling long-range detection, tracking, and classification over the whole field of view. Iris is refined to meet the size, weight, cost, power, and reliability requirements of automotive-qualified series production sensors.
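A back-of-envelope check of those figures shows the kind of data volume a downstream perception stack has to process. The field-of-view and density numbers are from the report above; the arithmetic is mine.

```python
# Point budget implied by the quoted Iris specs: 120° x 30° field of view
# at 200 points per square degree.
h_fov_deg = 120   # horizontal field of view (from the report)
v_fov_deg = 30    # maximum vertical field of view (from the report)
density = 200     # points per square degree (from the report)

points_per_scan = h_fov_deg * v_fov_deg * density
print(points_per_scan)  # 720000 points across the full field of view
```

That is roughly 720,000 points per full-field scan, which puts the power question about software NNs in concrete terms: every one of those points has to flow through the perception pipeline.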
Iris features our vertically integrated receiver, detector, and laser solutions developed by our Advanced Technologies & Services segment companies - Freedom Photonics, Black Forest Engineering, and Optogration. The internal development of these key technologies gives us a significant advantage in the development of our product roadmap.
Software
Software presently under development includes the following:
Core Sensor Software: Our lidar sensors are configurable and capture valuable information extracted from the raw point-cloud to promote the development and performance of perception software. Our core sensor software features are being designed to help our commercial partners to operate and integrate our lidar sensors and control, and enrich the sensor data stream before perception processing.
Perception Software: Our perception software is in design to transform lidar point-cloud data into actionable information about the environment surrounding the vehicle. This information includes classifying static objects such as lane markings, road surface, curbs, signs and buildings as well as other vehicles, pedestrians, cyclists and animals. Through internal development as well as the recent acquisition of certain assets of Solfice (aka Civil Maps), we expect to be able to utilize our point-cloud data to achieve precise vehicle localization and to create and provide continuous updates to a high definition map of a vehicle’s environment.
Sentinel: Sentinel is our full-stack software platform for safety and autonomy that will enable Proactive Safety and highway autonomy for cars and commercial trucks. Our software products are in the designing and coding phase of development and had not yet achieved technological feasibility as at the end of 2022.
Competition
The market for lidar-enabled vehicle features, on and off road, is an emerging one with many potential applications in the development stage. As a result, we face competition for lidar hardware business from a range of companies seeking to have their products incorporated into these applications. We believe we hold a strong position based on both hardware product performance and maturity, and our growing ability to develop deeply integrated software capabilities needed to provide autonomous and safety solutions to our customers. Within the automotive autonomy software space, the competitive landscape is still nascent and primarily focused on developing robo-taxi technologies as opposed to autonomous software solutions for passenger vehicles. Other autonomous software providers include: in-house OEM software teams; automotive silicon providers; large technology companies and newer technology companies focused on autonomous software. We partner with several of these autonomous software providers to provide our lidar and other products. Beyond automotive, the adjacent markets, including delivery bots and mapping, among others, are highly competitive. There are entrenched incumbents and competitors, including from China, particularly around ultra-low cost products that are widely available."
We know as Facts:
1. Luminar partnered with Mercedes Benz in 2022 and does not expect its product to be in vehicles before 2025.
2. We know Mercedes-Benz, teamed with Valeo, has obtained the first European and USA approvals for Level 3 driving.
With Drive Pilot in the Vision EQXX, Mercedes-Benz last December received the world’s first approval for an L3 AD system under the UN Regulations recognised by most of the world’s…
www.drivingvisionnews.com
3. We know Level 3 driving involves a maximum of 60 kph on freeways with hands off the wheel, but the driver must maintain sufficient attention to retake control when warned by the vehicle to do so.
4. We know Valeo certifies its lidar to 200 metres.
5. We know that Luminar claims its lidar is long range, out to 600 metres, on roads whose undulations do not inhibit the signal.
6. We know that Mercedes, Valeo and Bosch have proven systems for autonomous vehicle parking in parking stations.
7. We know that Valeo is claiming that Scala 3 will permit autonomous driving up to 130 kph and is coming out in 2025.
8. We know from the above SEC filing that Luminar is still not ready despite its advertising message that suggests it is a sure thing.
So my question is this: Luminar does not claim to support autonomous parking or certified Level 3 driving at 60 kph. It simply promotes long-range lidar for high-speed driving and, per its website, has been shipping a single lidar sensor unit to vehicle manufacturers for installation on or in the car hood above the windscreen. Why would this exclude Valeo's system, which offers 145-degree visibility plus rear and side sensing, from continuing to do what it presently does, with Luminar adding safety on high-speed autobahns in Germany and Europe?
My opinion only DYOR
FF
AKIDA BALLISTA