BRN Discussion Ongoing

buena suerte :-)

BOB Bank of Brainchip
  • Like
Reactions: 3 users

robsmark

Regular
I can tell you're upset.

Take a deep breath and repeat with me.

"WOOOOSAAAAAAH"


Nope, very chill. I just don't think Robs likes mean shit.
 
  • Like
  • Thinking
  • Love
Reactions: 10 users

IloveLamp

Top 20
Nope, very chill. I just don't think Robs likes mean shit.
eddie-murphy-shocked.gif
 
  • Haha
Reactions: 9 users

robsmark

Regular
  • Like
  • Haha
Reactions: 5 users

IloveLamp

Top 20
I feel like someone should let him know the real reason his report was actually so popular #brnfanatics 😂
Screenshot_20230713_160214_LinkedIn.jpg
 
Last edited:
  • Like
  • Haha
  • Love
Reactions: 23 users

buena suerte :-)

BOB Bank of Brainchip
A bit of after-market action


Time | Price | Volume | Value | Exchange | Codes
4:18:22 PM | 0.360 | 560,762 | 201,874.320 | ASX | SX XT
 
  • Like
  • Wow
  • Fire
Reactions: 19 users

Proga

Regular
Can you elaborate on "Mercedes seem to have pivoted"? They are still listed on the Brainchip website as "trusted by".
I get the feeling, nothing definitive, that MB is moving away from Valeo's Scala and towards Luminar's IRIS, which uses Qualcomm, and also moving away from "Hey Mercedes" keyword activation towards a system which recognises phrases or tones.

As you know, I was very bullish on MB using Akida in Valeo's Scala 3 and in the "Hey Mercedes" system featured in the EQXX, in the new 2024 EV MMA platform for small and medium cars. Those cars are due for release in the next couple of months.

Statements from BRN since the AGM have been very dour of late about the lack of progress. I would have thought BRN would be more upbeat if Akida IP was going to be in the soon-to-be-released 2024 MB small-to-medium electric line-up. I'm talking about body language and tone, not breaking NDAs. They're not exactly bubbling over with excitement about anything.

There is a big difference between "trusted by" and "used by".
 
Last edited:
  • Like
  • Haha
  • Thinking
Reactions: 16 users

buena suerte :-)

BOB Bank of Brainchip
I get the feeling, nothing definitive, that MB is moving away from Valeo's Scala and towards Luminar's IRIS, which uses Qualcomm, and also moving away from "Hey Mercedes" keyword activation towards a system which recognises phrases or tones.

As you know, I was very bullish on MB using Akida in Valeo's Scala 3 and in the "Hey Mercedes" system featured in the EQXX, in the new 2024 EV MMA platform for small and medium cars. Those cars are due for release in the next couple of months.

Statements from BRN since the AGM have been very dour of late about the lack of progress. I would have thought BRN would be more upbeat if Akida IP was going to be in the soon-to-be-released 2024 MB small-to-medium electric line-up. I'm talking about body language and tone, not breaking NDAs. They're not exactly bubbling over with excitement about anything.
I still feel very confident that we are still, and will be, servicing MB ... they also get a mention alongside us in the 'latest' Edge AI 2023 Technology Report!!

This led NASA to select BrainChip's first silicon platform in 2021 to demonstrate in-space autonomy and cognition in one of the most extreme power- and thermally-constrained applications. Similarly, Mercedes Benz demonstrated BrainChip in their EQXX concept vehicle that can go over 1000 km on a single charge.

📢 Something soon hopefully 🤞🤞
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 30 users

rgupta

Regular
I get the feeling, nothing definitive, that MB is moving away from Valeo's Scala and towards Luminar's IRIS, which uses Qualcomm, and also moving away from "Hey Mercedes" keyword activation towards a system which recognises phrases or tones.

As you know, I was very bullish on MB using Akida in Valeo's Scala 3 and in the "Hey Mercedes" system featured in the EQXX, in the new 2024 EV MMA platform for small and medium cars. Those cars are due for release in the next couple of months.

Statements from BRN since the AGM have been very dour of late about the lack of progress. I would have thought BRN would be more upbeat if Akida IP was going to be in the soon-to-be-released 2024 MB small-to-medium electric line-up. I'm talking about body language and tone, not breaking NDAs. They're not exactly bubbling over with excitement about anything.
Anything can be true in this world, especially if it is not provided in writing.
On the other hand, Qualcomm was always in the picture with MB, even when MB announced they were using the BrainChip Akida platform for their concept car, the EQXX.
I don't know why MB would say that and then back out. On the whole, the market is becoming too hot as far as edge and AI are concerned, and if we cannot get our dues it will be considered a very poor plan.
On the other end, there is a lot of hype created and everyone is expecting results to be out in no time. But to get a tasty and healthy meal we have to cook it for the required time.
I don't know why, but I mostly consider that Qualcomm may also be one of our customers.
Dyor
 
  • Like
  • Thinking
Reactions: 17 users

Kachoo

Regular
Anything can be true in this world, especially if it is not provided in writing.
On the other hand, Qualcomm was always in the picture with MB, even when MB announced they were using the BrainChip Akida platform for their concept car, the EQXX.
I don't know why MB would say that and then back out. On the whole, the market is becoming too hot as far as edge and AI are concerned, and if we cannot get our dues it will be considered a very poor plan.
On the other end, there is a lot of hype created and everyone is expecting results to be out in no time. But to get a tasty and healthy meal we have to cook it for the required time.
I don't know why, but I mostly consider that Qualcomm may also be one of our customers.
Dyor
I don't think Mercedes would change direction within 1.5 months like that; these products are out in 6 months, lol. They would likely be starting up production very soon, lol. I feel some may have been convinced by the shorters that 20 cents was on the cards, and now the price has moved north.
 
  • Like
  • Thinking
Reactions: 13 users

rgupta

Regular
I don't think Mercedes would change direction within 1.5 months like that; these products are out in 6 months, lol. They would likely be starting up production very soon, lol. I feel some may have been convinced by the shorters that 20 cents was on the cards, and now the price has moved north.
Shorts sometimes play to get a direct benefit/loss, but sometimes they fulfil the assignment of a bigger client with a bigger picture than the current SP.
On the other hand, I could not find a single tech stock which was not beaten down to hell before it came out flying. Is that a necessity? I assume yes, because in theory money and water have the same life cycle and it all ends in the bigger water bodies or the ocean.
 
  • Like
  • Love
Reactions: 5 users

cosors

👀

Can someone please give this guy a buzz, or we will all literally run out of water and have to start drinking our own pee-pee? ☎️



Elon Musk Launches X.AI To Fight ChatGPT

Martine Paris
Apr 16, 2023, 05:51pm EDT
Elon Musk and son X Æ A-12 (Photo by Theo Wargo/Getty Images for TIME)
X is for everything in the world of tech billionaire Elon Musk. It's the name of his child with pop star Grimes. It was the name of his startup X.com, which later became PayPal. It's the corporate name of Twitter, as disclosed in court documents last week. And it's the name of his new company X.AI, for which he has been recruiting AI engineers from competitors and possibly buying thousands of GPUs.

Here's what is known about X.AI so far:

  • The company was incorporated in Nevada on March 9, where Twitter's parent X Corp. is now registered, as first reported by the Wall Street Journal.
  • Elon Musk is listed as the sole director on the state filing.
  • Jared Birchall is listed as secretary. Birchall is the CEO of Musk's chip-implant company Neuralink. He serves on the board of Musk's tunneling entity The Boring Company and is managing director of Musk's family office Excession, which is also the name of a science fiction novel by Iain Banks about a society ruled by hyper-intelligent AI that Musk tweeted about in 2019. Birchall began his career as a financial analyst at Goldman Sachs and later became a wealth adviser at Morgan Stanley before joining Musk.
  • Igor Babuschkin, a Silicon Valley senior AI research engineer from Google's sister company Deepmind, has been hired to head the effort at X.AI, according to Insider. Babuschkin worked at OpenAI during the pandemic and prior to that was part of the Deepmind London team. He interned at the revered international research institute CERN in Geneva, Switzerland, according to his LinkedIn profile.
  • Deepmind engineer Manuel Kroiss is another key X.AI hire who co-authored the 2021 Deepmind paper, "Launchpad: A programming model for distributed machine learning research."
  • Musk might be planning on using Twitter data to train X.AI's large language model, according to sources reported by the Financial Times. In December, Musk tweeted that he had learned that OpenAI had access to Twitter's database for training and put that on pause.
  • Musk recently purchased roughly 10,000 GPUs typically used for large language models for Twitter, sources told Insider. In a Twitter Spaces with the BBC last week where participants said more than three million people tuned in, Musk neither confirmed nor denied the Insider report, rather he responded that Tesla and Twitter are buying GPUs like all other companies and gave a shoutout for Tesla's Dojo supercomputer platform which he said has a lot of potential.
  • One challenge Musk is going to face is the vast amount of electricity and water that large language models are consuming in the face of global shortages. "ChatGPT needs to 'drink' a 500ml bottle of water for a simple conversation of roughly 20-50 questions and answers," noted a paper by University of California and University of Texas professors (a rough per-question figure is worked out just after this list).
  • To fund X.AI, the company has authorized 100 million shares for sale and is courting interest from SpaceX and Tesla investors, reported the Financial Times. The more powerful the AI models, the more expensive it gets to operate. OpenAI estimated its 2022 costs to be around $544.5 million, according to documents seen by Fortune.
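
Taking that paper's figure at face value, the per-question water cost is easy to pin down. A quick sketch (my own back-of-the-envelope arithmetic, assuming the 500 ml is spread evenly over the stated 20-50 questions):

# Per-question water footprint implied by the quoted 500 ml per 20-50 questions.
def water_per_question_ml(session_ml: float = 500.0,
                          questions_low: int = 20,
                          questions_high: int = 50) -> tuple:
    # More questions per bottle means less water per question, so the high
    # question count gives the low end of the range and vice versa.
    return session_ml / questions_high, session_ml / questions_low

low_ml, high_ml = water_per_question_ml()
print(f"roughly {low_ml:.0f}-{high_ml:.0f} ml of water per question")  # roughly 10-25 ml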

"Announcing xAI
July 12th 2023

Today we announce the formation of xAI.
The goal of xAI is to understand the true nature of the universe.
You can meet the team and ask us questions during a Twitter Spaces chat on Friday, July 14th.

Team

Our team is led by Elon Musk, CEO of Tesla and SpaceX. We have previously worked at DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto. Collectively we contributed some of the most widely used methods in the field, in particular the Adam optimizer, Batch Normalization, Layer Normalization, and the discovery of adversarial examples. We further introduced innovative techniques and analyses such as Transformer-XL, Autoformalization, the Memorizing Transformer, Batch Size Scaling, and μTransfer. We have worked on and led the development of some of the largest breakthroughs in the field including AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4."

https://x.ai/
 
  • Like
  • Fire
Reactions: 10 users

Frangipani

Regular
Interesting last part of the link :ROFLMAO::ROFLMAO::ROFLMAO: https://lnkd.in/ gFckm-ZB :love:

As well as an interesting first part of that link @Sirod69 posted, but for other reasons:

00A117B7-9E66-41F5-89BA-B0FE7D928B1D.jpeg


So is our mysterious Todd Gack now seriously promoting an Edge AI technology report that is speaking highly of Brainchip?! 🧠🤔

Are we witnessing a modern-day secular Conversion of Saul? 🤣

Or is it just a Foolish attempt to get into the right position for making heaps more money around BRN by doing a 180 now in order to profit from the soon expected SP turnaround?

Who knows, maybe the person coding that link @buena suerte :-) joked about even hid an Easter egg for us with the letter m representing a certain name?! 😂
 
  • Like
  • Fire
  • Love
Reactions: 10 users

Proga

Regular
I don't think Mercedes would change direction with in 1.5 months like that these products are out soon in 6 months lol. They would likely be starting up production very soon lol. I feel some may have been convinced by the shorters that 20 cents is on the cards and now the price moved north.
Nobody is talking about changing direction within 1.5 months. This was done over 12 months ago, which is about the same period in which nothing new has been said or reported about Akida by MB, Valeo, auto journos, etc.
 
  • Like
  • Fire
Reactions: 3 users

TECH

Regular
Good afternoon/evening,

That was an interesting perspective on Brainchip given by Stocks Down Under.

It came across as more reactionary; they declared their hand, disclosing that they own shares and entered at a higher price. Well, join the party, a lot of shareholders would say.

Talking about the ASX and how they view technology stocks, or about self-marketing material, has got absolutely no bearing on what the company does. The company has clearly explained how and why they take the approach that they do: because of previous behaviour/performance a few years back, the warning was issued (maybe wrongly), and the Board took, and has continued to take, the approach that they do.

Sean politely responded to questions from the floor at the AGM and stated what he and the company would address, and from where I sit he/they have done so very quickly. (Sarcasm) Don't you like all the additional material posted on the home website? Twitter and LinkedIn etc. are providing lots of articles, and Nandan is earning his keep. The question then turns inward: do you wish to revert to a company that coughs up dribble or fluff?

We've been down the smoke-and-mirrors avenue before, and we've matured; well, I think we have. A CEO must come to Australia at least three times a year? Who are you kidding, just to appease you and the so-called mushroom brigade? Sean isn't coming down to Australia, wasting shareholders' funds, when he can be jetting around the world where the biggest markets are, selling our story and solidifying his network moving forward. That's what he's paid for, in my opinion, not junkets down under talking fluff at the bar in the Hilton.

I could go on, but I'm getting a little wound up, so I'll say cheers from the Far North.

Tech ;)
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 52 users

Diogenese

Top 20
Nobody is talking about changing direction within 1.5 months. This was done over 12 months ago, which is about the same period in which nothing new has been said or reported about Akida by MB, Valeo, auto journos, etc.
There are no published patent applications indicating Luminar has a NN in silicon.

Their latest published application:

US2022309685A1 NEURAL NETWORK FOR OBJECT DETECTION AND TRACKING

claims a method of tracking multiple objects using a processor, i.e., it is software running on a processor. Their processing system includes a segmentation module, a classification module, and a tracking module, each of which can include a NN:


1689236870952.png



[0076] As seen in FIG. 3, the vehicle includes N different sensors 102 , with N being any suitable integer (e.g., 1, 2, 3, 5, 10, 20, etc.). At least "Sensor 1" of the sensors 102 is configured to sense the environment of the autonomous vehicle by physically interacting with the environment in some way, such as transmitting and receiving lasers that reflect off of objects in the environment (e.g., if the sensor is a lidar device), transmitting and receiving acoustic signals that reflect off of objects in the environment (e.g., if the sensor is a radio detection and ranging (radar) device), simply receiving light waves generated or reflected from different areas of the environment (e.g., if the sensor is a camera), and so on. Depending on the embodiment, all of the sensors 102 may be configured to sense portions of the environment, or one or more of the sensors 102 may not physically interact with the external environment (e.g., if one of the sensors 102 is an inertial measurement unit (IMU)). The sensors 102 may all be of the same type, or may include a number of different sensor types (e.g., multiple lidar devices with different viewing perspectives, and/or a combination of lidar, camera, radar, and thermal imaging devices, etc.).

[0077] The data generated by the sensors 102 is input to a perception component 104 of the sensor control architecture 100 , and is processed by the perception component 104 to generate perception signals 106 descriptive of a current state of the vehicle's environment. It is understood that the term "current" may actually refer to a very short time prior to the generation of any given perception signals 106 , e.g., due to the short processing delay introduced by the perception component 104 and other factors. To generate the perception signals 106 , the perception component 104 may include a segmentation module 110 , a classification module 112 and a tracking module 114 .

[0078] The segmentation module 110 is generally configured to identify distinct objects within the environment, as represented by the sensor data (or a portion of the sensor data). Depending on the embodiment and/or scenario, the segmentation task may be performed separately for each of a number of different types of sensor data (e.g., the segmentation module 110 may include a number of modules operating in parallel), or may be performed jointly on a fusion of multiple types of sensor data. In some embodiments where lidar devices are used, the segmentation module 110 analyzes point cloud frames to identify subsets of points within each frame that correspond to probable physical objects in the environment. In other embodiments, the segmentation module 110 jointly analyzes lidar point cloud frames in conjunction with camera (and/or other) image frames to identify objects in the environment. Examples of lidar devices/systems and point clouds are discussed in further detail below. Other suitable techniques, and/or data from other suitable sensor types, may also be used to identify objects. As used herein, references to different or distinct "objects" may encompass physical things that are entirely disconnected (e.g., with two vehicles being two different "objects"), as well as physical things that are connected or partially connected (e.g., with a vehicle being a first "object" and the vehicle's hitched trailer being a second "object").

[0079] The segmentation module 110 may use predetermined rules or algorithms to identify objects. For example, the segmentation module 110 may identify as distinct objects, within a point cloud, any clusters of points that meet certain criteria (e.g., having no more than a certain maximum distance between all points in the cluster, etc.). Alternatively, the segmentation module 110 may utilize a neural network that has been trained to identify distinct objects within the environment (e.g., using supervised learning with manually generated labels for different objects within test data point clouds, etc.), or another suitable type of machine learning based model. Example operation of the segmentation module 110 is discussed in more detail below in FIG. 5B, for an embodiment in which the perception component 104 processes point cloud data.

[0080] The classification module 112 is generally configured to determine classes (labels, categories, etc.) for different objects that have been identified by the segmentation module 110 . Like the segmentation module 110 , the classification module 112 may perform classification separately for different sets of the sensor data (e.g., the classification module 112 may include a number of modules operating in parallel), or may classify objects based on a fusion of data from multiple sensors, etc. Moreover, and also similar to the segmentation module 110 , the classification module 112 may execute predetermined rules or algorithms to classify objects, use a neural network that has been trained to classify identified objects within the environment (e.g., using supervised learning with manually generated labels for different point cloud representations of distinct objects, etc.), or use another suitable machine learning based model to classify objects. Example operation of the classification module 112 is discussed in more detail below in FIG. 5B, for an embodiment in which the perception component 104 processes point cloud data.

[0081] The tracking module 114 is generally configured to track distinct objects over time (e.g., across multiple lidar point cloud or camera image frames). The tracked objects are generally objects that have been identified by the segmentation module 110 , but may or may not be objects that were classified by the classification module 112 , depending on the embodiment and/or scenario. The segmentation module 110 may assign identifiers to identified objects, and the tracking module 114 may associate existing identifiers with specific objects where appropriate (e.g., for lidar data, by associating the same identifier with different clusters of points, at different locations, in successive point cloud frames). Like the segmentation module 110 and the classification module 112 , the tracking module 114 may perform separate object tracking based on different sets of the sensor data (e.g., the tracking module 114 may include a number of modules operating in parallel), or may track objects based on a fusion of data from multiple sensors. Moreover, and also similar to the segmentation module 110 and the classification module 112 , the tracking module 114 may execute predetermined rules or algorithms to track objects, may use a neural network that has been trained to track identified (and possibly classified) objects within the environment (e.g., using supervised learning with manually generated labels for different pairs or sets of point cloud frames, etc.), or another suitable machine learning model to track objects.

[0082] Because the blocks of FIG. 3 (and various other figures described herein) depict a software architecture rather than physical components, it is understood that, when any reference is made herein to a particular neural network or other software architecture component being "trained," or to the role of any software architecture component (e.g., sensors 102 ) in conducting such training, the operations or procedures described may have occurred on a different computing system (e.g., using specialized development software). Thus, for example, neural networks of the segmentation module 110 , classification module 112 and/or tracking module 114 may have been trained on a different computer system before being implemented within any vehicle. Put differently, the components of the sensor control architecture 100 may be included in a "final" product within a particular vehicle, without that vehicle or its physical components (sensors 102 , etc.) necessarily having been used for any training processes.
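
To make the "software running on a processor" point concrete, here is a toy Python sketch of the module structure the patent describes (segmentation, classification and tracking, each of which may wrap its own neural network). It is purely my own illustration of the architecture, not Luminar's code, and the stand-in functions are hypothetical:

from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence, Tuple

Point = Tuple[float, float, float]  # x, y, z from a lidar point cloud

@dataclass
class TrackedObject:
    object_id: int
    points: List[Point]
    label: str = "unknown"

class PerceptionComponent:
    """Loosely mirrors the patent's perception component 104 (modules 110/112/114)."""

    def __init__(self,
                 segment_fn: Callable[[Sequence[Point]], List[List[Point]]],
                 classify_fn: Callable[[List[Point]], str]):
        # segment_fn / classify_fn stand in for trained NNs or rule-based code.
        self.segment_fn = segment_fn
        self.classify_fn = classify_fn
        self._next_id = 0
        self._tracks: Dict[int, TrackedObject] = {}

    def process_frame(self, frame: Sequence[Point]) -> List[TrackedObject]:
        clusters = self.segment_fn(frame)                     # segmentation module 110
        objects = []
        for cluster in clusters:
            label = self.classify_fn(cluster)                 # classification module 112
            objects.append(self._associate(cluster, label))   # tracking module 114
        return objects

    def _associate(self, cluster: List[Point], label: str) -> TrackedObject:
        # Naive nearest-centroid association across frames; a real tracker would
        # use a trained NN or a Kalman filter rather than this toy gating rule.
        cx = sum(p[0] for p in cluster) / len(cluster)
        cy = sum(p[1] for p in cluster) / len(cluster)
        best_id, best_d = None, 2.0                           # 2 m association gate
        for oid, track in self._tracks.items():
            tx = sum(p[0] for p in track.points) / len(track.points)
            ty = sum(p[1] for p in track.points) / len(track.points)
            d = ((cx - tx) ** 2 + (cy - ty) ** 2) ** 0.5
            if d < best_d:
                best_id, best_d = oid, d
        if best_id is None:
            best_id = self._next_id
            self._next_id += 1
        obj = TrackedObject(best_id, list(cluster), label)
        self._tracks[best_id] = obj
        return obj

# Example with trivially simple stand-ins for the trained networks:
pc = PerceptionComponent(segment_fn=lambda frame: [list(frame)],
                         classify_fn=lambda cluster: "vehicle")
print(pc.process_frame([(1.0, 2.0, 0.1), (1.1, 2.1, 0.1)]))

The point of the sketch is simply that everything above runs as ordinary code on a general-purpose processor, which is exactly the distinction being drawn against a NN in silicon.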

I think it is improbable that Mercedes will revert to a software NN. Luminar need Akida.

Now there are some interesting features of Luminar's laser projector for lidar, such as foveation which enables the laser pulses to be concentrated more densely on objects of interest, and this could well attract MB, but I doubt they would adopt software to process the reflected pulses.
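
For anyone curious what foveation amounts to in practice, a rough toy sketch (my own assumption of how a fixed pulse budget might be redistributed around a region of interest, not Luminar's actual method):

import math

def foveated_azimuths(n_pulses: int, fov_deg: float, focus_deg: float,
                      sigma_deg: float = 5.0, boost: float = 4.0) -> list:
    """Return n_pulses azimuth angles across the field of view, denser near focus_deg."""
    # Build an (unnormalised) density that is 1 far from the focus and up to
    # (1 + boost) at the focus, then sample pulse angles via its inverse CDF.
    steps = 1000
    xs = [(-fov_deg / 2) + fov_deg * i / (steps - 1) for i in range(steps)]
    dens = [1.0 + boost * math.exp(-((x - focus_deg) ** 2) / (2 * sigma_deg ** 2))
            for x in xs]
    cdf, total = [], 0.0
    for d in dens:
        total += d
        cdf.append(total)
    cdf = [c / total for c in cdf]
    angles = []
    for k in range(n_pulses):
        target = (k + 0.5) / n_pulses
        i = min(range(steps), key=lambda j: abs(cdf[j] - target))
        angles.append(xs[i])
    return angles

# Example: a 60-degree field of view with pulses packed around an object at +10 degrees.
print(foveated_azimuths(16, fov_deg=60.0, focus_deg=10.0))

Whether the pulses land densely or uniformly, the reflected returns still have to be processed somewhere, which is where the hardware-versus-software NN question above comes in.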
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 33 users

Tothemoon24

Top 20
emotion3D

emotion3D's First Half of 2023

Jul 13, 2023 | Blog
Certified-2-1080x675.jpg

As we reflect on the first half of 2023, the emotion3D team is thrilled to highlight a number of achievements and milestones. From winning new customers and forging new collaborations to securing certifications and participating in prestigious trade shows, our company has been making solid progress in advancing the field of driver and occupant analysis.
We kicked off the year with a trip to Las Vegas. CES, as every year, is our first trip after our Christmas holidays. Not only did we have countless highly productive meetings with customers, partners and other industry stakeholders, but we also announced our collaboration with SAT & Garmin. Together, we devised an innovative solution to drowsiness detection by integrating our Cabin Eye software stack, SAT's sleep onset prediction algorithm, and Garmin's smartwatch technology.
In February, we embarked on an exciting collaboration with Brainchip, a leading provider of neuromorphic processors. By combining our expertise in driver and occupant analysis with Brainchip's state-of-the-art processor, we aimed to revolutionize driving safety and enhance the overall driving experience. Our joint efforts focused on maximizing efficiency, precision, and minimizing energy consumption to deliver unparalleled results.
In March, we were awarded another large series production project with a new customer (stay tuned for exciting announcements).
While we dedicated considerable efforts to external collaborations, we also prioritized the enhancement of our internal processes and quality management. We are delighted to announce that our commitment has been recognized through our successful achievement of the TISAX Level 3 certification, which also encompasses the protection of prototype components. This certification proves our dedication to delivering top-tier solutions while upholding the highest industry standards.
Furthermore, we received invitations from prestigious international trade shows and events. These platforms provided us with a remarkable opportunity to showcase our technologies and expertise. Notable among these events were EAS IMS Tech 2023 and InCabin Brussels 2023, where our CTO, Michael Hödlmoser, delivered speeches outlining our expertise in deriving 3D occupant information.
Through our participation in trade shows such as the Automotive Testing Expo 2023 in Stuttgart and the highly specialized InCabin Brussels 2023 event, we had the opportunity of showcasing our newest solutions for driver and occupant monitoring to key stakeholders in the industry. Our technology was also present in our partners' booths, as BHTC, Varroc and SAT showcased our joint demos. Moreover, during InCabin Brussels 2023, our CEO, Florian Seitner, joined the press briefing to announce our latest partnership with SAT and Chuhang Tech. This collaboration aims to create a multi-sensor fusion solution that combines camera and radar technologies, complemented by SAT's sleep onset prediction algorithms. Together, we strive to deliver highly accurate drowsiness detection solutions, ensuring utmost safety for drivers and passengers.
With a successful first half of 2023 behind us, we look forward to the second half of the year, when many exciting events such as IAA Mobility 2023 and CES 2024 are already being planned. Stay tuned as we share the newest developments for emotion3D and the automotive in-cabin industry!
 
  • Like
  • Fire
  • Love
Reactions: 51 users

Slade

Top 20
There are no published patent applications indicating Luminar has a NN in silicon.

Their latest published application:

US2022309685A1 NEURAL NETWORK FOR OBJECT DETECTION AND TRACKING

claims a method of tracking multiple objects using a processor, i.e., it is software running on a processor. Their processing hardware includes a segmentation module, a classification module, and a tracking module, each of which can include a NN:


View attachment 39891


I think it is improbable that Mercedes will revert to a software NN.

Now there are some interesting features of Luminar's laser projector for lidar, such as foveation which enables the laser pulses to be concentrated more densely on objects of interest, and this could well attract MB, but I doubt they would adopt software to process the reflected pulses.

Does Luminar even have a chip?

On or around March 17, 2023, various media outlets reported that Lidwave, an Israeli start-up, had accused Luminar of attempting to pass off a Lidwave chip as Luminar's own technology after Luminar displayed an image of the processor at a recent investor conference, as well as in materials on its website. As a result, Lidwave threatened Luminar with legal action. Luminar subsequently removed the images in question from its investor presentation, website, and a YouTube video. On this news, Luminar's stock price fell $0.68 per share, or 8%, to close at $7.80 per share on March 20, 2023, the next trading day.

 
  • Like
  • Wow
  • Fire
Reactions: 12 users

chapman89

Founding Member
Nobody is talking about changing direction within 1.5 months. This was done over 12 months ago, which is about the same period in which nothing new has been said or reported about Akida by MB, Valeo, auto journos, etc.
Proga, it takes at the bare minimum 3 years, sometimes 4-5 years, for automotive.
Here is a video from the Renesas CEO from 2 months ago outlining how long it takes.
I encourage all here to listen to what he says about how long it takes for automotive.

Have a listen from 55 min mark-

 
  • Like
  • Fire
  • Love
Reactions: 44 users