BRN Discussion Ongoing

jk6199

Regular
Damn shorters, scared me into buying some more today.
Dog house fun continues 🤭
 
  • Like
  • Haha
  • Fire
Reactions: 14 users

Boab

I wish I could paint like Vincent
Looking at the paragraph below from the Forbes article on Akida 2nd Gen, surely this will be helpful to Valeo?

The company has also added support for what the company calls Temporal Event-Based Neural Networks (TENNs), which reduce the memory footprint and number of operations needed for workloads, including sequence prediction and video object detection, by orders of magnitude when handling 3D data or time-series data. What makes the TENNs particularly interesting is the ability to take raw sensor data without preprocessing, allowing for radically simpler audio or healthcare monitoring or predictive devices.
 
  • Like
  • Love
  • Fire
Reactions: 34 users
Looking at the paragraph below from the Forbes article on Akida 2nd Gen, surely this will be helpful to Valeo? […]
Hi @Boab

And going back to Anil Mankar's statement at the 2021 AI Field Day when referencing Lidar and 3D point cloud, he made a particular point of the ability to "take raw sensor (3D) data without preprocessing, allowing for radically simpler... predictive devices."

And going to the Valeo video presentation of Scala 3 in operation: it located a motorcycle rider hidden from sight behind a truck in the next lane and predicted where the rider would emerge, and whether this presented a collision risk if the vehicle remained on the course Scala 3 had set.

Unlike the leather glove in the O.J. Simpson case this one appears to be a perfect fit.

My opinion only DYOR
FF

AKIDA BALLISTA

Anil Mankar, 2021 AI Field Day quote:

"Today people are taking Lidar data and converting it into a 2D kind of image because it's much easier to process the image and detect the object.

There is no reason why we can't do that directly in a 3D point cloud and take advantage of that."
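
For context, here is a minimal NumPy sketch of the conventional conversion Mankar describes: flattening a 3D point cloud into a 2D bird's-eye-view image that an ordinary 2D pipeline can consume. The grid ranges and cell size are illustrative assumptions of mine, not anything from BrainChip or Valeo.

```python
import numpy as np

def bev_project(points, x_range=(0.0, 100.0), y_range=(-50.0, 50.0), cell=0.1):
    """Flatten an (N, 3) lidar point cloud into a 2D bird's-eye-view height
    image -- the '2D kind of image' conversion Mankar describes."""
    xs, ys, zs = points[:, 0], points[:, 1], points[:, 2]
    keep = ((xs >= x_range[0]) & (xs < x_range[1]) &
            (ys >= y_range[0]) & (ys < y_range[1]))
    xs, ys, zs = xs[keep], ys[keep], zs[keep]
    rows = ((xs - x_range[0]) / cell).astype(int)
    cols = ((ys - y_range[0]) / cell).astype(int)
    grid = np.zeros((int((x_range[1] - x_range[0]) / cell),
                     int((y_range[1] - y_range[0]) / cell)), dtype=np.float32)
    # Keep only the highest return per cell: the height structure inside each
    # cell is discarded, which is the information a direct 3D approach keeps.
    np.maximum.at(grid, (rows, cols), zs)
    return grid

# 100k synthetic points become a 1000 x 1000 'image' for an ordinary 2D CNN.
cloud = np.random.rand(100_000, 3) * [100.0, 100.0, 5.0] + [0.0, -50.0, 0.0]
print(bev_project(cloud).shape)  # (1000, 1000)
```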
 
  • Like
  • Fire
  • Haha
Reactions: 34 users

Boab

I wish I could paint like Vincent
Hi @Boab […]
A match made in heaven and this is only one of so many we are dealing with.
I think we are all going to be fat and happy in our old age.
 
  • Like
  • Fire
  • Haha
Reactions: 18 users

Diogenese

Top 20
Anyone else aware that Mercedes-Benz alone does 3 million automobiles each year? So if Scala 3 becomes a standard, this first 2 million units is but a drop in the ocean.

My opinion only DYOR
FF

AKIDA BALLISTA

But Luminar is the fly in Valeo's ointment:

https://www.luminartech.com/updates/mb23

Expanding partnership and volumes by more than an order of magnitude to a broad range of consumer vehicles​

February 22, 2023
ORLANDO, Fla / STUTTGART, Ger. –– Luminar (Nasdaq: LAZR), a leading global automotive technology company, announced today a sweeping expansion of its partnership with Mercedes-Benz to safely enable enhanced automated driving capabilities across a broad range of next-generation production vehicle lines as part of the automaker’s next-generation lineup. Luminar’s Iris entered its first series production in October 2022 and the company’s Mercedes-Benz program has successfully completed the initial phase and the associated milestones.
After two years of close collaboration between the two companies, Mercedes-Benz now plans to integrate the next generation of Luminar’s Iris lidar and its associated software technology across a broad range of its next-generation production vehicle lines by mid-decade. The performance of the next-generation Iris is tailored to meet the demanding requirements of Mercedes-Benz for a new conditionally automated driving system that is planned to operate at higher speed for freeways, as well as for enhanced driver assistance systems for urban environments. It will also be simplifying the design integration with a sleeker profile. This multi-billion dollar deal is a milestone moment for the two companies and the industry and is poised to substantially enhance the technical capabilities and safety of conditionally automated driving systems.
“Mercedes’ standards for vehicle safety and performance are among the highest in the industry, and their decision to double down on Luminar reinforces that commitment,” said Austin Russell, Founder and CEO of Luminar. “We are now set to enable the broadest scale deployment of this technology in the industry. It’s been an incredible sprint so far, and we are fully committed to making this happen – together with Mercedes-Benz.”
“In a first step we have introduced a Level 3 system in our top line models. Next, we want to implement advanced automated driving features in a broader scale within our portfolio,” said Markus Schäfer, Member of the Board of Management of Mercedes Benz Group AG and Chief Technology Officer, Development & Procurement. “I am convinced that Luminar is a great partner to help realize our vision and roadmap for automated and accident-free driving.”

Luminar have foveated imaging, and I assume this helps with the long range imaging and focusing on items of interest.

Luminar also have the LiDAR/camera combination:

US10491885B1 Post-processing by lidar system guided by camera information

[patent figure]


Post-processing in a lidar system may be guided by camera information as described herein. In one embodiment, a camera system has a camera to capture images of the scene. An image processor is configured to classify an object in the images from the camera. A lidar system generates a point cloud of the scene and a modeling processor is configured to correlate the classified object to a plurality of points of the point cloud and to model the plurality of points as the classified object over time in a 3D model of the scene.

[0030] In embodiments, a logic circuit controls the operation of the camera and a separate dedicated logic block performs artificial intelligence detection, classification, and localization functions. Dedicated artificial intelligence or deep neural network logic is available with memory to allow the logic to be trained to perform different artificial intelligence tasks. The classification takes an apparent image object and relates that image object to an actual physical object. The image processor provides localization of the object within the 2D pixel array of the camera by determining which pixels correspond to the classified object. The image processor may also provide a distance or range of the object for a 3D localization. For a 2D camera, after the object is classified, its approximate size will be known. This can be compared to the size of the object on the 2D pixel array. If the object is large in terms of pixels then it is close, while if it is small in terms of pixels, then it is farther away. Alternatively, a 3D camera system may be used to estimate range or distance.
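
The size-in-pixels heuristic in [0030] is just the pinhole-camera relation. A toy sketch of it, with hypothetical numbers of my own (the pedestrian height, pixel span and focal length are illustrations, not from the patent):

```python
def range_from_pixels(pixel_height, known_height_m, focal_length_px):
    """Pinhole-camera version of the heuristic in [0030]: once an object is
    classified its approximate real size is known, so its apparent size in
    pixels implies distance (large in pixels = close, small = far)."""
    return known_height_m * focal_length_px / pixel_height

# Hypothetical numbers: a pedestrian of ~1.7 m spanning 85 px, seen through
# a lens with a 1000 px focal length, is roughly 20 m away.
print(range_from_pixels(85, 1.7, 1000.0))  # 20.0
```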

Luminar has patents which relate to NNs, but they only describe NNs in conceptual terms, not at the NPU circuit level.

The claims are couched in terms of software NNs, and NNs are described as software components.

US11361449B2 Neural network for object detection and tracking 2020-05-06

[patent figure]


[0083] Because the blocks of FIG. 3 (and various other figures described herein) depict a software architecture rather than physical components, it is understood that, when any reference is made herein to a particular neural network or other software architecture component being “trained,” or to the role of any software architecture component (e.g., sensors 102 ) in conducting such training, the operations or procedures described may have occurred on a different computing system (e.g., using specialized development software). Thus, for example, neural networks of the segmentation module 110 , classification module 112 and/or tracking module 114 may have been trained on a different computer system before being implemented within any vehicle. Put differently, the components of the sensor control architecture 100 may be included in a “final” product within a particular vehicle, without that vehicle or its physical components (sensors 102 , etc.) necessarily having been used for any training processes.


1. A method of multi-object tracking, the method comprising:
receiving, by processing hardware, a sequence of images generated at respective times by one or more sensors configured to sense an environment through which objects are moving relative to the one or more sensors;
constructing, by the processing hardware, a message passing graph in which each of a multiplicity of layers corresponds to a respective one in the sequence of images, the constructing including:
generating, for each of the layers, a plurality of feature nodes to represent features detected in the corresponding image, and
generating edges that interconnect at least some of the feature nodes across adjacent layers of the graph neural network to represent associations between the features; and
tracking, by the processing hardware, multiple features through the sequence of images, including:
passing messages in a forward direction and a backward direction through the message passing graph to share information across time,
limiting the passing of the messages to only those layers that are currently within a rolling window of a finite size, and
advancing the rolling window in the forward direction in response to generating a new layer of the message passing graph, based on a new image.
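
To make the claim's structure concrete, here is a toy Python sketch: one graph layer per image, feature nodes in each layer, edges across adjacent layers, and forward/backward message passing restricted to a finite rolling window. The blending update is a placeholder of mine; the patent describes a learned GNN update, not this arithmetic.

```python
from collections import deque

class RollingWindowTracker:
    """Toy sketch of claim 1's rolling-window message passing graph."""

    def __init__(self, window_size=3):
        # deque(maxlen=...) advances the window automatically: appending a
        # new layer discards the oldest one, as in the claim's last step.
        self.window = deque(maxlen=window_size)

    def add_image(self, features):
        """features: list of per-detection feature values for one image."""
        self.window.append([{"feat": f, "state": f} for f in features])
        self._sweep(forward=True)   # forward pass through the window
        self._sweep(forward=False)  # backward pass through the window

    def _sweep(self, forward):
        layers = list(self.window) if forward else list(self.window)[::-1]
        for prev, curr in zip(layers, layers[1:]):
            # Edges connect nodes across adjacent layers; each node blends
            # in the mean message from the neighbouring layer (placeholder).
            incoming = sum(n["state"] for n in prev) / max(len(prev), 1)
            for node in curr:
                node["state"] = 0.5 * node["state"] + 0.5 * incoming

tracker = RollingWindowTracker(window_size=3)
for frame_features in ([1.0, 2.0], [1.1, 2.1], [1.2], [1.3, 2.3]):
    tracker.add_image(frame_features)  # 4th frame pushes the 1st out
print(len(tracker.window))             # 3
```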

Notably, Luminar has been working with Mercedes for a couple of years, so there is a fair chance our paths would have crossed.

"Luminar’s Iris entered its first series production in October 2022 and the company’s Mercedes-Benz program has successfully completed the initial phase and the associated milestones.
After two years of close collaboration between the two companies, Mercedes-Benz now plans to integrate the next generation of Luminar’s Iris lidar and its associated software technology across a broad range of its next-generation production vehicle lines by mid-decade."


Given that Luminar only started working with Mercedes 2 years ago, this patent pre-dates that collaboration.

Given that Luminar thought NNs were software, they would have been very power hungry.

Given that Mercedes is very power conscious, would they have accepted a software NN?

Given that Luminar has only been working with Mercedes for 2 years, could Luminar have developed a SoC NN in that time?

Given that Mercedes has a plan to standardize on components, can we think of a suitable NN SoC to meet Mercedes' requirements?
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 43 users

skutza

Regular
But Luminar is the fly in Valeo's ointment: […]
What's with all the questions, just give me answers will you!!!! :)
 
  • Haha
  • Like
  • Fire
Reactions: 23 users

Diogenese

Top 20
What's with all the questions, just give me answers will you!!!! :)
It's one of those tick-the-box quizzes, but there is only one box.
 
  • Like
  • Haha
  • Fire
Reactions: 23 users

Steve10

Regular

Qualcomm Snapdragon 8cx Gen 4: New Apple M series competitor surfaces in leaked benchmark listing​


The Qualcomm Snapdragon 8cx Gen 4 has surfaced again, this time courtesy of a Geekbench listing. Not only does the leaked listing confirm the chipset's codename, but also its CPU core arrangement, among other things.

Alex Alderson, Published 03/29/2023

Gustave Monce has discovered a Geekbench listing for the Snapdragon 8cx Gen 4, details of which Kuba Wojciechowski revealed in two leaks earlier this year. While the new Geekbench listing does not outline performance (the Snapdragon 8cx Gen 4 is rumoured to rival Apple's M series SoCs), it confirms a few aspects of Wojciechowski's earlier leaks. For reference, the Snapdragon 8cx Gen 4 is not expected to ship until next year at the earliest.

As the images below show, Geekbench reports that 'Snapdragon 8cx Next Gen' is 'HAMOA', which Wojciechowski claimed is the codename for the Snapdragon 8cx Gen 4. Additionally, the listing reveals that the SoC has two CPU clusters totalling 12 cores. Moreover, the cluster arrangement, 8+4, matches Wojciechowski's earliest report on the Snapdragon 8cx Gen 4. Furthermore, Geekbench reports that the second cluster operates at 2.38 GHz, with the first cluster capable of reaching 3.0 GHz. However, final CPU clock speeds are anticipated to be 2.5 GHz and 3.4 GHz, respectively, with the former clock speeds representative of an engineering prototype.

Unsurprisingly, the second cluster is thought to contain efficiency cores. In comparison, the 8 other cores are rumoured to be the Snapdragon 8cx Gen 4's performance cores. The Snapdragon 8cx Gen 4 should also feature the Adreno 740, the same GPU found in the Snapdragon 8 Gen 2. Currently, the Snapdragon 8cx Gen 4 is only expected to be available in future laptops, with the Snapdragon 8 Gen 3 and Snapdragon 8 Gen 4 serving flagship smartphones.

 
  • Like
  • Fire
Reactions: 10 users

IloveLamp

Top 20
[LinkedIn screenshot]
 
  • Like
  • Thinking
  • Fire
Reactions: 13 users

Dhm

Regular
Is this a real lead or just speculation? I would really like people to give reasons for their postings if there isn’t some immediate linkage.
Who can argue with AI?

View attachment 33313
Likewise, can you please give a degree of legitimacy for this AI linkage? I’d hate to see someone taking a position in Brainchip based on this.

I’m now going to have a Bex and a good lie down.
 
  • Like
  • Haha
Reactions: 7 users

IloveLamp

Top 20
Is this a real lead or just speculation? […]
It's real speculation
 
  • Haha
  • Like
Reactions: 11 users
But Luminar is the fly in Valeo's ointment: […]
Hi @Diogenese

The following is an extract from Luminar's 28.2.23 Annual Report:

Adjacent Markets

Adjacent markets such as last mile delivery, aerospace and defense, robotics and security offer use cases for which our technology is well suited. Our goal is to scale our core markets and utilize our robust solutions to best serve these adjacent markets where it makes sense for us and our partners.

Our Products

Our Iris and other products are described in further detail below:

Hardware

Iris: Iris lidar combines laser transmitter and receiver and provides long-range, 1550 nm sensing meeting OEM specs for advanced safety and autonomy. This technology provides efficient, automotive-grade, and affordable solutions that are scalable, reliable, and optimal for series production. Iris lidar sensors are dynamically configurable dual-axis scan sensors that detect objects up to 600 meters away over a horizontal field of view of 120° and a software-configurable vertical field of view of up to 30°, providing high point densities in excess of 200 points per square degree that enable long-range detection, tracking, and classification over the whole field of view. Iris is refined to meet the size, weight, cost, power, and reliability requirements of automotive-qualified series production sensors.
Iris features our vertically integrated receiver, detector, and laser solutions developed by our Advanced Technologies & Services segment companies - Freedom Photonics, Black Forest Engineering, and Optogration. The internal development of these key technologies gives us a significant advantage in the development of our product roadmap.

Software

Software presently under development includes the following:

Core Sensor Software:
Our lidar sensors are configurable and capture valuable information extracted from the raw point-cloud to promote the development and performance of perception software. Our core sensor software features are being designed to help our commercial partners operate and integrate our lidar sensors, and to control and enrich the sensor data stream before perception processing.

Perception Software: Our perception software is in design to transform lidar point-cloud data into actionable information about the environment surrounding the vehicle. This information includes classifying static objects such as lane markings, road surface, curbs, signs and buildings as well as other vehicles, pedestrians, cyclists and animals. Through internal development as well as the recent acquisition of certain assets of Solfice (aka Civil Maps), we expect to be able to utilize our point-cloud data to achieve precise vehicle localization and to create and provide continuous updates to a high definition map of a vehicle’s environment.

Sentinel: Sentinel is our full-stack software platform for safety and autonomy that will enable Proactive Safety and highway autonomy for cars and commercial trucks. Our software products are in the design and coding phase of development and had not yet achieved technological feasibility as at the end of 2022.

Competition
The market for lidar-enabled vehicle features, on and off road, is an emerging one with many potential applications in the development stage. As a result, we face competition for lidar hardware business from a range of companies seeking to have their products incorporated into these applications. We believe we hold a strong position based on both hardware product performance and maturity, and our growing ability to develop deeply integrated software capabilities needed to provide autonomous and safety solutions to our customers. Within the automotive autonomy software space, the competitive landscape is still nascent and primarily focused on developing robo-taxi technologies as opposed to autonomous software solutions for passenger vehicles. Other autonomous software providers include: in-house OEM software teams; automotive silicon providers; large technology companies and newer technology companies focused on autonomous software. We partner with several of these autonomous software providers to provide our lidar and other products. Beyond automotive, the adjacent markets, including delivery bots and mapping, among others, are highly competitive. There are entrenched incumbents and competitors, including from China, particularly around ultra-low cost products that are widely available."

We know as Facts:

1. Luminar partnered with Mercedes Benz in 2022 and does not expect its product to be in vehicles before 2025.

2. We know Mercedes-Benz, teamed with Valeo, has obtained the first European and USA approvals for Level 3 driving.

3. We know Level 3 driving involves a maximum of 60 kph on freeways with hands off the wheel, but the driver must maintain sufficient attention to retake control when warned by the vehicle to do so.

4. We know Valeo certifies its Lidar to 200 metres.

5. We know that Luminar claims its Lidar is long range out to 600 metres on roads not exceeding certain undulations that could inhibit signals.

6. We know that Mercedes, Valeo and Bosch have proven systems for autonomous vehicle parking in parking stations.

7. We know that Valeo is claiming that Scala 3 will permit autonomous driving up to 130 kph and is coming out in 2025.

8. We know from the above SEC filing that Luminar is still not ready despite its advertising message that suggests it is a sure thing.

So my question is: as Luminar does not claim to support autonomous parking or certified Level 3 driving at 60 kph, but simply promotes that it can provide long-range Lidar for high-speed driving, and per its website has been shipping a single Lidar sensor unit to vehicle manufacturers for installation on/in car hoods above the windscreen, why does this exclude Valeo's system, which offers 145-degree visibility plus rear and side sensing, from continuing to do what it presently does, with Luminar adding safety on high-speed autobahns in Germany and Europe?

My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 27 users

skutza

Regular
Is this a real lead or just speculation? […]
This is from Chat Ai.
 
  • Like
Reactions: 2 users

Tothemoon24

Top 20
MARCH 30, 2023

Luminar announces new lidar technology and startup acquisition​

A series of technology announcements were made during a recent investor day event, illustrating Luminar’s ambitious roadmap for bringing safety and autonomy solutions to the automotive industry.

Eric van Rees​


[Image via Luminar]
During CES 2023, Luminar announced a special news event scheduled for the end of February. At the event, dubbed “Luminar Day”, a series of technology releases and partnership announcements were made during a live stream, showing Luminar’s ambitious plans and roadmap for the future. In addition to developing lidar sensors, Luminar plans to integrate different hardware and software components for improved vehicle safety and autonomous driving capabilities through acquisitions and partnerships with multiple car manufacturers and technology providers, as well as scaling up production of these solutions, anticipating large-scale market adoption of autonomous vehicles in the next couple of years.

A new lidar sensor​

Luminar’s current sensor portfolio has been extended with the new Iris+ sensor (and associated software), which comes with a range of 300 meters. This is 50 meters more than the current maximum range of the Iris sensor. The sensor design is such that it can be integrated seamlessly into the roofline of production vehicle models. Mercedes-Benz announced it will integrate Iris+ into its next-generation vehicle lineup. The sensor will enable greater performance and collision avoidance of small objects at up to autobahn-level speeds, enhancing vehicle safety and the autonomous capabilities of a vehicle. Luminar has plans for an additional manufacturing facility in Asia with a local partner to support the vast global scale of upcoming vehicle launches with Iris+, as well as a production plant in Mexico that will be operated by contract manufacturer Celestica.
[New Iris+ sensor, via Luminar]

Software development: Luminar AI Engine release​

The live stream featured the premiere of Luminar’s machine learning-based AI Engine for object detection in 3D data captured by lidar sensors. Since 2017, Luminar has been working on AI capabilities on 3D lidar data to improve the performance and functionality of next-generation safety and autonomy in automotive. The company plans to capture lidar data with more than a million vehicles that will provide input for its AI engine and build a 3D model of the drivable world. To accelerate Luminar’s AI Engine efforts, an exclusive partnership was announced with Scale.ai, a San Francisco-headquartered AI applications developer that will provide data labeling and AI tools. Luminar is not the first lidar tech company to work with Scale.ai: in the past, it has worked with Velodyne to find edge cases in 3D data and curate valuable data for annotation.

Seagate lidar division acquisition​

Just as with the Civil Maps acquisition announced at CES 2023 to accelerate its lidar production process, Luminar recently acquired the lidar division of data storage company Seagate. That company develops high-capacity, modular storage solutions (hard drives) for capturing massive amounts of data created by autonomous cars. Specifically, Luminar acquired lidar-related IP (intellectual property), assets and a technical team.

Additional announcements​

Apart from these three lidar-related announcements, multiple announcements were made that show the scale of Luminar's ambition to provide lidar-based solutions for automotive. Take for example the new commercial agreement with Pony.ai, an autonomous driving technology company. The partnership is meant to further improve the performance and accuracy of Luminar’s AI engine for Pony.ai’s next-generation commercial trucking and robotaxi platforms. Luminar also announced the combination of three semiconductor subsidiaries into a new entity named Luminar Semiconductor. The advanced receiver, laser and processing chip technologies provided by this entity are not limited to lidar-based applications but are also used in the aerospace, medical and communications sectors.
 
  • Like
  • Love
  • Fire
Reactions: 11 users

Hi @Tothemoon24, it depends where you look; you get different answers. Your article has 300 metres, while these specs from the Luminar website have 600 metres. What is interesting is the power draw of ~25 watts. Seems like that would reduce the EQXX range considerably.

Specifications​

Please contact us for a full datasheet

Maximum range: 600 metres (250 metres at <10% reflectivity)
Minimum range: 0.5 metres
Horizontal field of view: 120°
Vertical field of view: 0–26°
Horizontal resolution: 0.05°
Vertical resolution: 0.05°
Range precision: 1 cm
Scan rate: 1–30 fps
Reflectance resolution: 7 bits
Maximum range returns per point: 6

Environmental​

Water and dust ingress: IP69K
Vibration: ISO 16750-3
Shock: IEC 60068-2-27
Operating temperature: -40°C to 85°C
Storage temperature: -40°C to 105°C

Classifications​

Laser safety: Class 1
Export control: EAR99 (US DoC)

Electrical​

Input voltage: 9–16 V or 18–32 V
Power consumption: ~25 W
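
As a back-of-envelope check of these figures (my arithmetic, not Luminar's, and assuming a uniform full-FoV scan, whereas the sensor's scan pattern is configurable):

```python
import math

# 0.05 deg resolution in both axes -> points per square degree
points_per_sq_deg = (1 / 0.05) * (1 / 0.05)
print(points_per_sq_deg)                      # 400.0

# Full 120 deg x 26 deg field of view at that resolution
points_per_frame = (120 / 0.05) * (26 / 0.05)
print(f"{points_per_frame:,.0f}")             # 1,248,000 points per frame

# What 0.05 deg subtends at the 250 m detection range
print(round(250 * math.tan(math.radians(0.05)), 3))  # ~0.218 m between points
```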
 
  • Like
  • Fire
Reactions: 16 users

Yak52

Regular
We know as Facts: […]


SELF DRIVING CAR SENSORS BASED ON TRIPLE REDUNDANCY FOR SAFER MOBILITY​

The automotive industry uses the triple redundancy system to guarantee the safety of using autonomous cars. Every item of information received by a sensor must be confirmed by two other sensors of different types. Valeo offers the broadest range of automotive sensors on the market.

--------------------------------------------------------------------------------------------------------------------------------------------------
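
To make the quoted rule concrete, here is a toy sketch of 2-out-of-3 confirmation across sensor types. The sensor names, ranges and tolerance are made-up illustrations, not Valeo's actual fusion logic:

```python
def confirmed(readings, tolerance=0.5):
    """Triple-redundancy check as described above: a detection counts only
    if at least two other sensors of *different* types agree within a
    tolerance. 'readings' maps sensor type -> measured range to the object."""
    types = list(readings)
    for t in types:
        agreeing = [o for o in types if o != t
                    and abs(readings[o] - readings[t]) <= tolerance]
        if len(agreeing) >= 2:
            return True
    return False

print(confirmed({"lidar": 42.0, "radar": 42.3, "camera": 41.8}))  # True
print(confirmed({"lidar": 42.0, "radar": 55.0, "camera": 41.8}))  # False
```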

Considering the above info that "triple" redundancy is used in autonomous cars, what is the possibility of Luminar having a single Lidar above the windscreen which reaches beyond 200 metres (600 metres?) and Valeo having Lidar sensors around the vehicle and front for short (close) distance use? Luminar recently announced new Lidar sensors described as "streamlined" and suitable for use above the windscreen (hood).

Maybe both Valeo and Luminar sensors combined in a vehicle, all around, for redundancy purposes?

Yak52 :cool:
 
  • Like
  • Love
Reactions: 4 users

M_C

Founding Member
Is this a real lead or just speculation? I would really like people to give reason for their postings if there isn’t some immediate linkage.

Likewise can you please give a degree of legitimacy for this AI linkage? I’d hate to see someone taking a position in Brainchip based on this.

I’m now going to have a Bex and a good lie down.
Hey DHM,

CARIAD (aka VOLKSWAGEN) and BOSCH are givens imo. Below is what I have on VW/CARIAD. If you go to the search function for the thread you should find a plethora of posts relating to VW and BOSCH (both of which were confirmed above the iceberg, if I'm not mistaken?)

Renesas are also in tight with VW...


Any post on TSE, if not presented with solid facts, should be treated as speculation imo

DYOR OPINION ONLY

[LinkedIn/Twitter screenshots and other captures attached]
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 68 users

M_C

Founding Member
Hey DHM, […]
[further screenshots attached]
 

  • Like
  • Fire
  • Love
Reactions: 48 users

Tothemoon24

Top 20

Hi @Tothemoon24 […]

This article is 12 months old FF, seems like the maximum range is 600 metres, which is bloody amazing 🤩

Luminar’s Iris lidar technology to be included in both Volvo and Mercedes cars in 2022​

Both car manufacturers plan to include Luminar sensor technology to improve safety and autonomous driving capabilities.

Eric van Rees​




Luminar announced a partnership with Mercedes-Benz to accelerate the development of future highly automated driving technologies for Mercedes passenger cars. Luminar’s Iris lidar technology, which is currently being prepared for series production, is expected to improve vehicle safety and the technical capabilities of highly automated driving systems. This recent announcement follows a similar one made during a special press event at CES 2022 earlier this month, where Volvo Cars and Luminar announced plans for the next-generation SUV, to be revealed later this year. At CES 2022, Luminar also introduced Blade integration for commercial trucks. Blade, first introduced in 2021, is a fully integrated housing system for multiple Iris sensors.
By including Luminar’s Iris lidar as a standard feature in the roofline of the car, the car manufacturer wants to improve safety as well as offer more autonomous driving capabilities to assist the driver: together with this announcement of the Luminar sensor inclusion, Volvo Cars announced Ride Pilot, an add-on subscription for that same upcoming car.


Ride Pilot is an unsupervised driving capability for highways that comes equipped with Luminar’s Iris lidar hardware and perception from its Sentinel solution. Sentinel is a full-stack autonomous solution for series production that combines sensor technology and software for object detection and decision making. The Swedish autonomous driving software company Zenseact worked together with Volvo and Luminar on the software and sensor technology to be included in this new capability.

Luminar’s Iris sensor capabilities

Luminar’s Iris sensor is an auto-grade package for series-production programs of 10,000 to 1+ million vehicles. It features camera-like resolution greater than 300 points per square degree and high data fidelity to reliably see where objects are and what they are. Combined with software updates over the air, Iris unlocks more autonomous capabilities for a vehicle. It offers a 120-degree FoV, a maximum range of 600 meters, a data fidelity of 1 cm range precision and camera-like resolution of more than 300 points per square degree. Objects and vehicles can be detected and tracked from 250 m, lane markings from 150 m and road/drivable space from 80 m.

Ride Pilot testing and rollout

Currently, Volvo Cars is testing autonomous driving functionalities in roads in Sweden together with Zenseact, and collecting data across Europe and the US. By the middle of this year, the company intends to begin testing on the roads in California, where the climate, traffic conditions and regulatory framework provide a favorable environment for the introduction of autonomous driving. After all necessary approvals, Volvo intends to introduce Ride Pilot in California first, before gradually rolling out in other markets and regions around the world.
Luminar’s lidar sensors will complement five radars, eight cameras and sixteen ultrasonic sensors in Volvo Car’s upcoming fully electric SUV. This standard sensor setup will provide excellent vision and perception reliability. Together with the continuous, over-the-air software rollouts, the system will ensure full redundancy and enable Volvo Cars to achieve safe autonomous driving with Ride Pilot.
 
  • Like
Reactions: 11 users

Slade

Top 20
That was a very nice week for BrainChip. We are set up nicely for next week. Very interesting times. Good luck to you all.
 
  • Like
  • Love
  • Fire
Reactions: 36 users