BRN Discussion Ongoing

I wish we had footage from when they were asked about Brainchip at the CES 2022 presentation. From all reports it was telling in some way.

From memory it was early 2021, at the Q&A for the official Valeo Scala 3 launch!

This was before MB was a known customer, and the car used was a Mercedes, which in hindsight was a hint of what was to come!

After the direct question the Q&A was shut down pretty quickly; the CTO recoiled like he had been shot! The good thing was that he did not say Brainchip was not involved, which he could easily have done if that were the case. Disappointingly, the CTO did say they would reveal the technology in a few months' time, but that never happened. Guessing that, given the multi-billion dollar market they're in, they wanted to keep their advantage for as long as possible, which is fair enough!

 
Reactions: 16 users
Given Valeo is a topic at the mo.

Older May 2022 article/interview, and nothing new per se, but extra background info and comments on using AI software....trusting it's ours, but yet tbc unfortunately :cautious:

Also wondered about the electric power train supply to MB and the e-axles, and whether this is also an area that Akida could slot into for something like vibration analysis :unsure:



Valeo Sees Big Opportunity in EVs​

By Austin Weber
Valeo is a leading supplier of high-voltage electric drives and 48-volt electrical systems. Photo courtesy Valeo

Valeo recently supplied the entire electric power train of the Mercedes-Benz EQS sedan. Illustration courtesy Daimler AG

One-third of all new vehicles produced worldwide contain advanced driver-assistance systems made by Valeo. Photo courtesy Valeo

Valeo is a leading producer of lidar systems, which it assembles at a state-of-the-art factory in Germany. Photo courtesy Valeo

Electronics form the heart of many Valeo products. Photo courtesy Valeo

In addition to producing long-range sensors, Valeo recently unveiled a near-field lidar device. Photo courtesy Valeo

NFL is designed for use on autonomous vehicles, such as driverless delivery pods. Illustration courtesy Valeo

Valeo operates 187 plants in 33 countries around the world that support both traditional automakers and startups. Photo courtesy Valeo

Autonomous technology is an increasingly important part of Valeo’s diverse product portfolio. Photo courtesy Valeo

Valeo is a Tier One automotive supplier that specializes in advanced driver-assistance systems (ADAS), interior, lighting, power train and thermal management systems. The $17 billion French company operates 184 plants in 31 countries around the world that support both traditional automakers and startups. Valeo is based in Paris, but its North American headquarters is located in Troy, MI.

Next year, Valeo will celebrate its centennial. The company traces its roots to a small workshop outside of Paris that made brake linings and friction materials. By the 1930s, Valeo expanded into clutches, followed by thermal systems in the 1960s. In the 1970s, the company branched out into electrical components and lighting, following the acquisition of Cibie and Marchal.

Today, Valeo claims that 25 percent of all new vehicles produced worldwide contain its ADAS technology, which includes state-of-the-art cameras and sensors. In recent years, the company has doubled in size and become a leader in autonomous and electric mobility technology.

For instance, Valeo recently supplied the entire electric power train of the Mercedes-Benz EQS sedan, including dual motors (the rear e-axle provides 300 kilowatts of power, while the front axle generates 170 kilowatts), an inverter and a reducer.

Valeo is also a leading manufacturer of lidar systems. In fact, it has already produced more than 160,000 units, and a wide variety of cars equipped with laser scanners and lidar use the company’s products.

Valeo recently unveiled a third-generation lidar system that enables Level 3 automation and is set to debut on production vehicles in 2024. It offers significantly enhanced performance, makes autonomous mobility a reality and provides previously unseen levels of road safety due to cutting-edge range, resolution and frame rate.

This laser scanner can detect objects located at distances of more than 200 meters. It reconstructs a 3D real-time image of the vehicle's surroundings at a rate of 4.5 million pixels and 25 frames per second. Because of its unique perception capabilities, the device can see things that humans, cameras and radars cannot.

Together with software based on artificial intelligence (AI), the system combines collected data and enables the vehicle to instantly make the right decision. It automatically adapts to the environment and improves its performance over time through regular updates.

Earlier this year, at the CES Show in Las Vegas, the company demonstrated a new short-range lidar system dubbed Valeo NFL (Near Field Lidar).

When used on driverless delivery pods and other vehicles, it creates a safety “bubble” that provides peripheral vision, eliminating blind spots.

Valeo's lidar units are produced at the company’s state-of-the-art factory in Wemding, Germany, where components are assembled and tested with a micron level of precision.

Autonomous and Electric Mobility recently asked Michel Forissier, chief engineering and marketing officer at Valeo, to outline his company’s strategy for next-generation vehicles.

AEM: Valeo’s motto is “smart technology for smarter mobility.” Why is this strategy important in today’s rapidly evolving auto industry?

Forissier: Most functions in an automobile today are turning electronic. All components now comprise electronic hardware and software, which allows new functions that make vehicles smarter. For instance, ADAS systems and intelligent lighting make vehicles safer, while electric and electronic systems make vehicles more efficient.

AEM: Does electrification require a new production mindset or a new way for suppliers to approach manufacturing?

Forissier: Electric motors are very different from internal combustion engines because they are much simpler. But, there are new challenges that must be addressed, such as managing balance, sound and vibration-related issues due to the elimination of traditional engine noise. Power electronics become more important when dealing with high voltages and high currents. Battery management and temperature control must also be carefully addressed in EVs. Current efficiency and safety become critical, which requires automation and tighter quality control. In particular, robots are necessary for handling EV parts, which tend to be heavier and bulkier.

AEM: How is the transition from internal combustion engine vehicles to electric vehicles affecting plant floor operations in your factories?

Forissier: We’ve made a progressive evolution in our factories. For instance, some of our plants in France that have traditionally produced alternators and clutches have slowly shifted to making traction motors and other components used in electric vehicles.

AEM: Have you invested in any Industry 4.0 technology to prepare your factories for the EV era?

Forissier: Yes, we have installed a lot of new automation in our plants. For instance, we currently have more than 1,000 collaborative robots in operation. Most applications involve material handling. We also use AI technology to improve quality as we produce more advanced electronics, which are used in many of our products.

AEM: How will your experience from producing low-voltage products during the past 10 years help as you produce more high-voltage devices during the next 10 years?

Forissier: Most of the technology is the same. However, wire diameter is different for high-voltage products. Power electronics are also more complicated. We’ve learned a lot through our joint-venture partnership with Siemens, which focused on e-motors, inverters and power electronics (the company recently announced that it will buy 100 percent of the shares of the joint venture in July). End-of-line testing and quality are increasingly critical, but the big challenge is to do it fast.

AEM: ADAS technology has evolved from relatively simple mechanical devices to complex mechatronic products. Has that changed how your products are assembled and tested?

Forissier: Yes. Products such as lidar require extremely precise machining, assembly and quality control. All parts are controlled to the micron. And, because software is a key attribute of product performance, end-of-line testing has become much more sophisticated to ensure performance and compliance.

AEM: How is Valeo preparing for the future era of autonomous vehicles?

Forissier: This is one of the key areas that we are focusing on today, with many exciting opportunities for growth. We are the largest producer of ADAS sensors in the world, supplying many of the top automakers. We’re also developing a new 360-degree system, including cameras and chips, for automatic parking applications.

SCALA is the automotive industry's first commercial 3D lidar sensor for AV applications. It provides a wide field of view up to 145 degrees. Its AI-based integrated software detects, recognizes and classifies static and dynamic objects up to a distance of 200 meters in all weather and lighting conditions.

Our third-generation laser scanner technology, which is scheduled to hit the market in 2024, will go even further, making it possible to delegate driving in many situations, including at speeds of up to 130 kilometers per hour on the highway.
Thank you FMF.

@Diogenese, the following, when combined with all of the above information posted this morning, is why I believe Valeo is using AKIDA in Scala 3, because it is all better done in hardware:

“This laser scanner can detect objects located at distances of more than 200 meters. It reconstructs a 3D real-time image of the vehicle's surroundings at a rate of 4.5 million pixels and 25 frames per second. Because of its unique perception capabilities, the device can see things that humans, cameras and radars cannot.”
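To put that spec into numbers, here's a quick back-of-envelope sketch (assuming the 4.5 million figure is points per second rather than per frame, and a nominal 16 bytes per point - both my assumptions, not Valeo's):

```python
# Back-of-envelope for the Scala 3 figures quoted above.
# Assumption: 4.5 million points per SECOND (not per frame), at 25 fps.
points_per_second = 4.5e6
frames_per_second = 25

points_per_frame = points_per_second / frames_per_second
print(f"~{points_per_frame:,.0f} points per frame")  # ~180,000

# Assumed payload of 16 bytes per point (x, y, z, intensity as 32-bit floats).
bytes_per_point = 16
print(f"~{points_per_second * bytes_per_point / 1e6:.0f} MB/s raw")  # ~72 MB/s
```

That is a fresh ~180,000-point cloud to process every 40 milliseconds, which is exactly the sort of load that is better done in hardware.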

My opinion only DYOR
FF

AKIDA BALLISTA
 
Reactions: 31 users

Taproot

Regular
Well, keeping the speculation rolling, Blind Freddie has worked out that if Brainchip receives a percentage of the product price - $500 at 2.5% - it would be $12.50 x 2,000,000 units, which would be US$25 million.

At 2.5% this would be the bottom number, going by all previous research on this subject, without any allowance for the uniqueness of the AKIDA technology, and based on the notion that the IP equivalent of only one AKD1000 is required for all the processing in a SCALA 3 Lidar unit. I personally doubt that would be the case and would expect at least two AKD1000 equivalents to be necessary to allow for redundancy, so Blind Freddie's number would double to US$50 million at 5% to account for the amount of IP and the uniqueness factor.
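Putting Blind Freddie's arithmetic into a parameterised form so the assumptions are easy to vary (every input below is speculation from this post, not a company number):

```python
# Blind Freddie's royalty scenarios, with every assumption as a parameter.
def royalty_usd(unit_price, rate, units):
    return unit_price * rate * units

base   = royalty_usd(500, 0.025, 2_000_000)  # one AKD1000 equivalent at 2.5%
upside = royalty_usd(500, 0.050, 2_000_000)  # two equivalents (redundancy) at 5%

print(f"base:   ${base / 1e6:.0f}M")    # base:   $25M
print(f"upside: ${upside / 1e6:.0f}M")  # upside: $50M
```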

But Blind Freddie thinks he went on a bus trip yesterday so what would he know anyway. You would think he would have picked up it was Sir David Attenborough doing the commentary.

My opinion only DYOR
FF

AKIDA BALLISTA

SELF DRIVING CAR SENSORS BASED ON TRIPLE REDUNDANCY FOR SAFER MOBILITY​

The automotive industry uses the triple redundancy system to guarantee the safety of using autonomous cars. Every item of information received by a sensor must be confirmed by two other sensors of different types. Valeo offers the broadest range of automotive sensors on the market.
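As a toy illustration of that rule, a detection only survives if sensors of at least three different types report it (the sensor names and the yes/no agreement test are simplifications made up for this sketch):

```python
# Toy confirmation check for the triple-redundancy rule described above:
# information from one sensor must be confirmed by two sensors of other types.
def confirmed(reports):
    """reports maps sensor type (e.g. 'lidar') -> whether it saw the object."""
    return sum(reports.values()) >= 3

print(confirmed({"lidar": True, "radar": True, "camera": True}))   # True
print(confirmed({"lidar": True, "radar": True, "camera": False}))  # False
```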

 
Reactions: 15 users

skutza

Regular
Who can argue with AI?

[attached screenshot]
 
Reactions: 27 users

jk6199

Regular
Damn shorters, scared me into buying some more today.
Dog house fun continues 🤭
 
Reactions: 14 users

Boab

I wish I could paint like Vincent
Looking at the paragraph below from the Forbes article on Akida 2nd Gen, surely this will be helpful to Valeo?

The company has also added support for what the company calls Temporal Event-Based Neural Networks (TENNs), which reduce the memory footprint and number of operations needed for workloads, including sequence prediction and video object detection, by orders of magnitude when handling 3D data or time-series data. What makes the TENNs particularly interesting is the ability to take raw sensor data without preprocessing, allowing for radically simpler audio or healthcare monitoring or predictive devices.
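No one outside BrainChip can say exactly how TENNs are implemented, but here is a generic sketch of why stateful, sample-by-sample temporal processing keeps the memory footprint small: the only state is a short rolling buffer, and raw samples go straight in with no frame assembly or preprocessing. Purely illustrative, not BrainChip code:

```python
import numpy as np

# Generic streaming temporal layer -- NOT BrainChip's TENN implementation.
# State is a single small buffer of recent raw samples; each new sample
# produces an output immediately, with no preprocessing or frame batching.
class StreamingTemporalLayer:
    def __init__(self, taps, channels):
        self.kernel = np.random.randn(taps, channels)  # learned weights, stubbed
        self.buffer = np.zeros((taps, channels))       # rolling raw-sample window

    def step(self, sample):
        self.buffer = np.roll(self.buffer, -1, axis=0)
        self.buffer[-1] = sample                        # raw sensor sample, as-is
        return float(np.sum(self.buffer * self.kernel)) # one output per time step

layer = StreamingTemporalLayer(taps=8, channels=4)
for _ in range(100):
    out = layer.step(np.random.randn(4))  # feed the stream one sample at a time
```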
 
Reactions: 34 users
Looking at the paragraph below from the Forbes article on Akida 2nd Gen, surely this will be helpful to Valeo?

The company has also added support for what the company calls Temporal Event-Based Neural Networks (TENNs), which reduce the memory footprint and number of operations needed for workloads, including sequence prediction and video object detection, by orders of magnitude when handling 3D data or time-series data. What makes the TENNs particularly interesting is the ability to take raw sensor data without preprocessing, allowing for radically simpler audio or healthcare monitoring or predictive devices.
Hi @Boab

And going back to Anil Mankar's statement at the 2021 Ai Field Day, when referencing Lidar and 3D point clouds he made a particular point of the ability to "take raw sensor (3D) data without preprocessing, allowing for radically simpler.........predictive devices."

And going to the Valeo video presentation of Scala 3 in operation: locating the motorcycle rider hidden from sight behind a truck in the next lane, predicting where it was going to emerge, and assessing whether this presented a collision risk if the vehicle remained on the course Scala 3 had set.

Unlike the leather glove in the O.J. Simpson case this one appears to be a perfect fit.

My opinion only DYOR
FF

AKIDA BALLISTA

Anil Mankar 2021 Ai Field Day Quote:

"Today people are taking Lidar data and converting it into a 2D kind of image because it's much easier to process the image and detect the object.

There is no reason why we can't do that directly in a 3D point cloud and take advantage of that."
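To make the contrast in that quote concrete, this is the kind of conversion Mankar is describing: flattening the point cloud into a 2D bird's-eye-view image (throwing the height information away) so a conventional 2D network can consume it. Grid size and resolution are arbitrary values for the sketch:

```python
import numpy as np

# The 2D conversion step Mankar says can be skipped: squash an (N, 3) lidar
# point cloud into a bird's-eye-view occupancy image for a conventional CNN.
def to_bev_grid(points, cell=0.2, extent=50.0):
    n = int(2 * extent / cell)
    grid = np.zeros((n, n), dtype=np.uint8)
    ix = ((points[:, 0] + extent) / cell).astype(int)
    iy = ((points[:, 1] + extent) / cell).astype(int)
    keep = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
    grid[ix[keep], iy[keep]] = 1  # z (height) is discarded at this point
    return grid

cloud = np.random.uniform(-50, 50, size=(10_000, 3))  # stand-in point cloud
bev = to_bev_grid(cloud)  # what a "2D kind of image" pipeline would consume
```

Processing the raw (N, 3) cloud directly skips this lossy step, which is Mankar's point.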
 
Reactions: 34 users

Boab

I wish I could paint like Vincent
Hi @Boab

And going back to Anil Mankar's statement at the 2021 Ai Field Day, when referencing Lidar and 3D point clouds he made a particular point of the ability to "take raw sensor (3D) data without preprocessing, allowing for radically simpler.........predictive devices."

And going to the Valeo video presentation of Scala 3 in operation: locating the motorcycle rider hidden from sight behind a truck in the next lane, predicting where it was going to emerge, and assessing whether this presented a collision risk if the vehicle remained on the course Scala 3 had set.

Unlike the leather glove in the O.J. Simpson case this one appears to be a perfect fit.

My opinion only DYOR
FF

AKIDA BALLISTA
A match made in heaven and this is only one of so many we are dealing with.
I think we are all going to be fat and happy in our old age.
 
Reactions: 18 users

Diogenese

Top 20
Anyone else aware that Mercedes Benz alone does 3 million automobiles each year? So if Scala 3 becomes a standard, this first 2 million units is but a drop in the ocean.

My opinion only DYOR
FF

AKIDA BALLISTA

But Luminar is the fly in Valeo's ointment:

https://www.luminartech.com/updates/mb23

Expanding partnership and volumes by more than an order of magnitude to a broad range of consumer vehicles​

February 22, 2023
ORLANDO, Fla / STUTTGART, Ger. –– Luminar (Nasdaq: LAZR), a leading global automotive technology company, announced today a sweeping expansion of its partnership with Mercedes-Benz to safely enable enhanced automated driving capabilities across a broad range of next-generation production vehicle lines as part of the automaker’s next-generation lineup. Luminar’s Iris entered its first series production in October 2022 and the company’s Mercedes-Benz program has successfully completed the initial phase and the associated milestones.
After two years of close collaboration between the two companies, Mercedes-Benz now plans to integrate the next generation of Luminar’s Iris lidar and its associated software technology across a broad range of its next-generation production vehicle lines by mid-decade. The performance of the next-generation Iris is tailored to meet the demanding requirements of Mercedes-Benz for a new conditionally automated driving system that is planned to operate at higher speed for freeways, as well as for enhanced driver assistance systems for urban environments. It will also be simplifying the design integration with a sleeker profile. This multi-billion dollar deal is a milestone moment for the two companies and the industry and is poised to substantially enhance the technical capabilities and safety of conditionally automated driving systems.
“Mercedes’ standards for vehicle safety and performance are among the highest in the industry, and their decision to double down on Luminar reinforces that commitment,” said Austin Russell, Founder and CEO of Luminar. “We are now set to enable the broadest scale deployment of this technology in the industry. It’s been an incredible sprint so far, and we are fully committed to making this happen – together with Mercedes-Benz.”
“In a first step we have introduced a Level 3 system in our top line models. Next, we want to implement advanced automated driving features in a broader scale within our portfolio,” said Markus Schäfer, Member of the Board of Management of Mercedes Benz Group AG and Chief Technology Officer, Development & Procurement. “I am convinced that Luminar is a great partner to help realize our vision and roadmap for automated and accident-free driving.”

Luminar have foveated imaging, and I assume this helps with the long range imaging and focusing on items of interest.

Luminar also have the LiDaR/camera combination:

US10491885B1 Post-processing by lidar system guided by camera information

[patent figure]


Post-processing in a lidar system may be guided by camera information as described herein. In one embodiment, a camera system has a camera to capture images of the scene. An image processor is configured to classify an object in the images from the camera. A lidar system generates a point cloud of the scene and a modeling processor is configured to correlate the classified object to a plurality of points of the point cloud and to model the plurality of points as the classified object over time in a 3D model of the scene.

[0030] In embodiments, a logic circuit controls the operation of the camera and a separate dedicated logic block performs artificial intelligence detection, classification, and localization functions. Dedicated artificial intelligence or deep neural network logic is available with memory to allow the logic to be trained to perform different artificial intelligence tasks. The classification takes an apparent image object and relates that image object to an actual physical object. The image processor provides localization of the object within the 2D pixel array of the camera by determining which pixels correspond to the classified object. The image processor may also provide a distance or range of the object for a 3D localization. For a 2D camera, after the object is classified, its approximate size will be known. This can be compared to the size of the object on the 2D pixel array. If the object is large in terms of pixels then it is close, while if it is small in terms of pixels, then it is farther away. Alternatively, a 3D camera system may be used to estimate range or distance.

Luminar has patents which relate to NNs, but they only describe NNs in conceptual terms, not at the NPU circuit level.

The claims are couched in terms of software NNs, and NNs are described as software components.

US11361449B2 Neural network for object detection and tracking 2020-05-06

[patent figure]


[0083] Because the blocks of FIG. 3 (and various other figures described herein) depict a software architecture rather than physical components, it is understood that, when any reference is made herein to a particular neural network or other software architecture component being “trained,” or to the role of any software architecture component (e.g., sensors 102 ) in conducting such training, the operations or procedures described may have occurred on a different computing system (e.g., using specialized development software). Thus, for example, neural networks of the segmentation module 110 , classification module 112 and/or tracking module 114 may have been trained on a different computer system before being implemented within any vehicle. Put differently, the components of the sensor control architecture 100 may be included in a “final” product within a particular vehicle, without that vehicle or its physical components (sensors 102 , etc.) necessarily having been used for any training processes.


1. A method of multi-object tracking, the method comprising:
receiving, by processing hardware, a sequence of images generated at respective times by one or more sensors configured to sense an environment through which objects are moving relative to the one or more sensors;
constructing, by the processing hardware, a message passing graph in which each of a multiplicity of layers corresponds to a respective one in the sequence of images, the constructing including:
generating, for each of the layers, a plurality of feature nodes to represent features detected in the corresponding image, and
generating edges that interconnect at least some of the feature nodes across adjacent layers of the graph neural network to represent associations between the features; and
tracking, by the processing hardware, multiple features through the sequence of images, including:
passing messages in a forward direction and a backward direction through the message passing graph to share information across time,
limiting the passing of the messages to only those layers that are currently within a rolling window of a finite size, and
advancing the rolling window in the forward direction in response to generating a new layer of the message passing graph, based on a new image.
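Reading claim 1 as pseudocode makes the structure plain: one layer of feature nodes per frame, edges only between adjacent layers, and message passing confined to a fixed-size rolling window that advances with each new frame. A skeletal sketch; the feature extraction, association rule and message function are deliberately stubbed out, and the window size is my arbitrary choice:

```python
from collections import deque

WINDOW = 5  # the claim's "rolling window of a finite size"

class MessagePassingTracker:
    def __init__(self):
        # deque(maxlen=...) drops the oldest layer automatically as the
        # window advances, matching the claim's rolling-window limitation.
        self.window = deque(maxlen=WINDOW)

    def on_new_image(self, features):
        layer = {"nodes": features, "edges": []}
        if self.window:
            prev = self.window[-1]
            # Toy association rule: connect every node pair across adjacent layers.
            layer["edges"] = [(i, j) for i in range(len(prev["nodes"]))
                              for j in range(len(features))]
        self.window.append(layer)
        self._pass_messages()

    def _pass_messages(self):
        layers = list(self.window)
        for src, dst in zip(layers, layers[1:]):  # forward direction
            pass  # share information src -> dst along the edges (stub)
        rev = layers[::-1]
        for src, dst in zip(rev, rev[1:]):        # backward direction
            pass  # share information src -> dst along the edges (stub)
```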

Notably, Luminar has been working with Mercedes for a couple of years, so there is a fair chance our paths would have crossed.

"Luminar’s Iris entered its first series production in October 2022 and the company’s Mercedes-Benz program has successfully completed the initial phase and the associated milestones.
After two years of close collaboration between the two companies, Mercedes-Benz now plans to integrate the next generation of Luminar’s Iris lidar and its associated software technology across a broad range of its next-generation production vehicle lines by mid-decade."


Given that Luminar only started working with Mercedes 2 years ago, this patent pre-dates that collaboration.

Given that Luminar thought NNs were software, they would have been very power hungry.

Given that Mercedes is very power conscious, would they have accepted a software NN?

Given that Luminar has only been working with Mercedes for 2 years, could Luminar have developed a SoC NN in that time?

Given that Mercedes has a plan to standardize on components, can we think of a suitable NN SoC to meet Mercedes' requirements?
 
Last edited:
Reactions: 43 users

skutza

Regular
But Luminar is the fly in Valeo's ointment:

https://www.luminartech.com/updates/mb23

[...]

Given that Mercedes has a plan to standardize on components, can we think of a suitable NN SoC to meet Mercedes' requirements?
What's with all the questions, just give me answers will you!!!! :)
 
Reactions: 23 users

Diogenese

Top 20
What's with all the questions, just give me answers will you!!!! :)
It's one of those tick-the-box quizzes, but there is only one box.
 
Reactions: 23 users

Steve10

Regular

Qualcomm Snapdragon 8cx Gen 4: New Apple M series competitor surfaces in leaked benchmark listing​


The Qualcomm Snapdragon 8cx Gen 4 has surfaced again, this time courtesy of a Geekbench listing. Not only does the leaked listing confirm the chipset's codename, but also its CPU core arrangement, among other things.

Alex Alderson, Published 03/29/2023
ARM Laptop Leaks / Rumors

Gustave Monce has discovered a Geekbench listing for the Snapdragon 8cx Gen 4, details of which Kuba Wojciechowski revealed in two leaks earlier this year. While the new Geekbench listing does not outline performance (the Snapdragon 8cx Gen 4 is rumoured to rival Apple's M series SoCs), it confirms a few aspects of Wojciechowski's earlier leaks. For reference, the Snapdragon 8cx Gen 4 is not expected to ship until next year at the earliest.

As the images below show, Geekbench reports that 'Snapdragon 8cx Next Gen' is 'HAMOA', which Wojciechowski claimed is the codename for the Snapdragon 8cx Gen 4. Additionally, the listing reveals that the SoC has two CPU clusters totalling 12 cores. Moreover, the cluster arrangement, 8+4, matches Wojciechowski's earliest report on the Snapdragon 8cx Gen 4. Furthermore, Geekbench reports that the second cluster operates at 2.38 GHz, with the first cluster capable of reaching 3.0 GHz. However, final CPU clock speeds are anticipated to be 2.5 GHz and 3.4 GHz, respectively, with the former clock speeds representative of an engineering prototype.

Unsurprisingly, the second cluster is thought to contain efficiency cores. In comparison, the 8 other cores are rumoured to be the Snapdragon 8cx Gen 4's performance cores. The Snapdragon 8cx Gen 4 should also feature the Adreno 740, the same GPU found in the Snapdragon 8 Gen 2. Currently, the Snapdragon 8cx Gen 4 is only expected to be available in future laptops, with the Snapdragon 8 Gen 3 and Snapdragon 8 Gen 4 serving flagship smartphones.

 
Reactions: 10 users

IloveLamp

Top 20
[attached LinkedIn screenshot]
 
Reactions: 13 users

Dhm

Regular
Reactions: 7 users

IloveLamp

Top 20
Is this a real lead or just speculation? I would really like people to give reason for their postings if there isn’t some immediate linkage.

Likewise can you please give a degree of legitimacy for this AI linkage? I’d hate to see someone taking a position in Brainchip based on this.

I’m now going to have a Bex and a good lie down.
It's real speculation
[attached GIF]
 
Reactions: 11 users
But Luminar is the fly in Valeo's ointment:

https://www.luminartech.com/updates/mb23

[...]

Given that Mercedes has a plan to standardize on components, can we think of a suitable NN SoC to meet Mercedes' requirements?
Hi @Diogenese

The following is an extract from Luminar's 28.2.23 Annual Report:

Adjacent Markets: Adjacent markets such as last mile delivery, aerospace and defense, robotics and security offer use cases for which our technology is well suited. Our goal is to scale our core markets and utilize our robust solutions to best serve these adjacent markets where it makes sense for us and our partners.

Our Products

Our Iris and other products are described in further detail below:

Hardware Iris: Iris lidar combines laser transmitter and receiver and provides long-range, 1550 nm sensing meeting OEM specs for advanced safety and autonomy. This technology provides efficient, automotive-grade and affordable solutions that are scalable, reliable, and optional for series production. Iris lidar sensors are dynamically configurable dual-axis scan sensors that detect objects up to 600 meters away over a horizontal field of view of 120° and a software-configurable vertical field of view of up to 30°, providing high point densities in excess of 200 points per square degree that enable long-range detection, tracking, and classification over the whole field of view. Iris is refined to meet the size, weight, cost, power, and reliability requirements of automotive-qualified series production sensors.
Iris features our vertically integrated receiver, detector, and laser solutions developed by our Advanced Technologies & Services segment companies - Freedom Photonics, Black Forest Engineering, and Optogration. The internal development of these key technologies gives us a significant advantage in the development of our product roadmap.

Software: Software presently under development includes the following:

Core Sensor Software:
Our lidar sensors are configurable and capture valuable information extracted from the raw point cloud to promote the development and performance of perception software. Our core sensor software features are being designed to help our commercial partners operate, integrate and control our lidar sensors, and enrich the sensor data stream before perception processing.

Perception Software: Our perception software is in design to transform lidar point-cloud data into actionable information about the environment surrounding the vehicle. This information includes classifying static objects such as lane markings, road surface, curbs, signs and buildings as well as other vehicles, pedestrians, cyclists and animals. Through internal development as well as the recent acquisition of certain assets of Solfice (aka Civil Maps), we expect to be able to utilize our point-cloud data to achieve precise vehicle localization and to create and provide continuous updates to a high definition map of a vehicle’s environment.

Sentinel: Sentinel is our full-stack software platform for safety and autonomy that will enable Proactive Safety and highway autonomy for cars and commercial trucks. Our software products are in the design and coding phase of development and had not yet achieved technological feasibility as at the end of 2022.

Competition
The market for lidar-enabled vehicle features, on and off road, is an emerging one with many potential applications in the development stage. As a result, we face competition for lidar hardware business from a range of companies seeking to have their products incorporated into these applications. We believe we hold a strong position based on both hardware product performance and maturity, and our growing ability to develop deeply integrated software capabilities needed to provide autonomous and safety solutions to our customers. Within the automotive autonomy software space, the competitive landscape is still nascent and primarily focused on developing robo-taxi technologies as opposed to autonomous software solutions for passenger vehicles. Other autonomous software providers include: in-house OEM software teams; automotive silicon providers; large technology companies and newer technology companies focused on autonomous software. We partner with several of these autonomous software providers to provide our lidar and other products. Beyond automotive, the adjacent markets, including delivery bots and mapping, among others, are highly competitive. There are entrenched incumbents and competitors, including from China, particularly around ultra-low cost products that are widely available."

We know as Facts:

1. Luminar partnered with Mercedes Benz in 2022 and does not expect its product to be in vehicles before 2025.

2. We know Mercedes Benz teamed with Valeo has obtained the first European and USA approvals for Level 3 Driving.

3. We know Level 3 Driving involves a maximum of 60 kph on freeways with hands off the wheel, but the driver must maintain sufficient attention to retake control when warned by the vehicle to do so.

4. We know Valeo certifies its Lidar to 200 metres.

5. We know that Luminar claims its Lidar is long range out to 600 metres on roads not exceeding certain undulations that could inhibit signals.

6. We know that Mercedes, Valeo and Bosch have proven systems for autonomous vehicle parking in parking stations.

7. We know that Valeo is claiming that Scala 3 will permit autonomous driving up to 130 kph and is coming out in 2025.

8. We know from the above SEC filing that Luminar is still not ready despite its advertising message that suggests it is a sure thing.

So my question is: as Luminar does not claim to support autonomous parking or certified Level 3 driving at 60 kph, but is simply promoting that it can provide long-range Lidar for high-speed driving, and from their website has been shipping one Lidar sensor unit to vehicle manufacturers for installation on/in car hoods above the windscreen, why does this exclude Valeo's system, which offers 145-degree visibility plus rear and side sensing, from continuing to do what it presently does, with Luminar increasing safety on high-speed autobahns in Germany and Europe?

My opinion only DYOR
FF

AKIDA BALLISTA
 
Last edited:
Reactions: 27 users

skutza

Regular
Is this a real lead or just speculation? I would really like people to give reason for their postings if there isn’t some immediate linkage.

Likewise can you please give a degree of legitimacy for this AI linkage? I’d hate to see someone taking a position in Brainchip based on this.

I’m now going to have a Bex and a good lie down.
This is from Chat Ai.
 
Reactions: 2 users

Tothemoon24

Top 20
MARCH 30, 2023

Luminar announces new lidar technology and startup acquisition​

A series of technology announcements were made during a recent investor day event, illustrating Luminar’s ambitious roadmap for bringing safety and autonomy solutions to the automotive industry.

Eric van Rees​


[image via Luminar]
During CES 2023, Luminar announced a special news event scheduled for the end of February. Dubbed “Luminar Day”, a series of technology releases and partnership announcements were made during a live stream, showing Luminar’s ambitious plans and roadmap for the future. In addition to developing lidar sensors, Luminar plans to integrate different hardware and software components for improved vehicle safety and autonomous driving capabilities through acquisitions and partnerships with multiple car manufacturers and technology providers, as well as scaling up production of these solutions, anticipating large-scale market adoption of autonomous vehicles in the next couple of years.

A new lidar sensor​

Luminar’s current sensor portfolio has been extended with the new Iris+ sensor (and associated software), which comes with a range of 300 meters. This is 50 meters more than the current maximum range of the Iris sensor. The sensor design is such that it can be integrated seamlessly into the roofline of production vehicle models. Mercedes-Benz announced it will integrate IRIS+ into its next-generation vehicle lineup. The sensor will enable greater performance and collision avoidance of small objects at up to autobahn-level speeds, enhancing vehicle safety and the autonomous capabilities of a vehicle. Luminar has plans for an additional manufacturing facility in Asia with a local partner to support the vast global scale of upcoming vehicle launches with Iris+, as well as a production plant in Mexico that will be operated by contract manufacturer Celestica.
[New Iris+ sensor, via Luminar]

Software development: Luminar AI Engine release​

The live stream featured the premiere of Luminar’s machine learning-based AI Engine for object detection in 3D data captured by lidar sensors. Since 2017, Luminar has been working on AI capabilities on 3D lidar data to improve the performance and functionality of next-generation safety and autonomy in automotive. The company plans to capture lidar data with more than a million vehicles that will provide input for its AI engine and build a 3D model of the drivable world. To accelerate Luminar’s AI Engine efforts, an exclusive partnership was announced with Scale.ai, a San Francisco-headquartered AI applications developer that will provide data labeling and AI tools. Luminar is not the first lidar tech company to work with Scale.ai: in the past, it has worked with Velodyne to find edge cases in 3D data and curate valuable data for annotation.

Seagate lidar division acquisition​

Just as with the Civil Maps acquisition announced at CES 2023 to accelerate its lidar production process, Luminar recently acquired the lidar division of data storage company Seagate. That company develops high-capacity, modular storage solutions (hard drives) for capturing massive amounts of data created by autonomous cars. Specifically, Luminar acquired lidar-related IP (intellectual property), assets and a technical team.

Additional announcements​

Apart from these three lidar-related announcements, multiple announcements were made that show the scale of Luminar's ambition to provide lidar-based solutions for automotive. Take for example the new commercial agreement with Pony.ai, an autonomous driving technology company. The partnership is meant to further improve the performance and accuracy of Luminar’s AI engine for Pony.ai’s next-generation commercial trucking and robotaxi platforms. Luminar also announced the combination of three semiconductor subsidiaries into a new entity named Luminar Semiconductor. The advanced receiver, laser and processing chip technologies provided by this entity are not limited to lidar-based applications but are also used in the aerospace, medical and communications sectors.
 
Reactions: 11 users

Hi @Tothemoon24, depends where you look, you get different answers: your article has 300 metres; these specs from the Luminar website have 600 metres. What is interesting is the power draw being 25 watts. Seems like that would reduce the EQXX range considerably.

Specifications

Please contact us for a full datasheet

Maximum range: 600 metres (250 metres at < 10% reflectivity)
Minimum range: 0.5 metres
Horizontal field of view: 120°
Vertical field of view: 0 – 26°
Horizontal resolution: 0.05°
Vertical resolution: 0.05°
Range precision: 1 cm
Scan rate: 1-30 fps
Reflectance resolution: 7 bits
Maximum range returns per point: 6

Environmental

Water and dust ingress: IP69K
Vibration: ISO 16750-3
Shock: IEC 60068-2-27
Operating temperature: -40°C – 85°C
Storage temperature: -40°C – 105°C

Classifications

Laser safety: Class 1
Export control: EAR99 (US DoC)

Electrical

Input voltage: 9-16 V or 18-32 V
Power consumption: ~25 W
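For context on that ~25 W figure, here is a rough calculation of what a constant 25 W draw costs in traction energy; the consumption number is a round figure for a typical EV, not an EQXX measurement, so plug in your own:

```python
# What does a constant 25 W sensor cost per 100 km at highway speed?
sensor_w = 25
speed_kmh = 100          # assumed cruising speed
ev_kwh_per_100km = 15.0  # assumed typical EV consumption, NOT an EQXX figure

hours_per_100km = 100 / speed_kmh
sensor_kwh_per_100km = sensor_w / 1000 * hours_per_100km  # 0.025 kWh
print(f"{sensor_kwh_per_100km / ev_kwh_per_100km:.2%} of traction energy")
```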
 
Reactions: 16 users

Yak52

Regular
We know as Facts:

1. Luminar partnered with Mercedes Benz in 2022 and does not expect its product to be in vehicles before 2025.

[...]

My opinion only DYOR
FF

AKIDA BALLISTA


SELF DRIVING CAR SENSORS BASED ON TRIPLE REDUNDANCY FOR SAFER MOBILITY​

The automotive industry uses the triple redundancy system to guarantee the safety of using autonomous cars. Every item of information received by a sensor must be confirmed by two other sensors of different types. Valeo offers the broadest range of automotive sensors on the market.

--------------------------------------------------------------------------------------------------------------------------------------------------

Considering the above info that "Triple" redundancy is used in autonomous cars, what is the possibility of Luminar having a single Lidar above the windscreen which reaches beyond 200 mtrs (600 mtrs??) and VALEO having Lidar sensors around the vehicle and front for short (close) distance use? Luminar announced new Lidar sensors recently, stating they are "streamlined" and suitable for use above the windscreen (hood).

Maybe "BOTH" Valeo and Luminar sensors combined in a vehicle all around for "Redundancy" purposes?

Yak52 :cool:
 
Reactions: 4 users