Once again mate, great darts!!
Good Morning Chippers,
Getting that strange feeling again, though I have been wrong in the past.
Announcement ?????? Maybe..... Maybe not
?
Regards,
Esq.
one hundred and eighty
Once again mate, great darts!!
Per share is fine with me
one hundred and eighty
SparkFun has it as a soft cover, it seems. Now for the Brainchip Premium Gold Edition on vellum...
@Esq.111 do you already have the hardcover edition?
___
Does this exist as a hard cover as in the picture? I would love to have it for my library.
Nice one! Things actually are heating up.
emotion3D's First Half of 2023
Jul 13, 2023 | Blog
As we reflect on the first half of 2023, the emotion3D team is thrilled to highlight a number of achievements and milestones. From winning new customers and forging new collaborations to securing certifications and participating in prestigious trade shows, our company has been making solid progress in advancing the field of driver and occupant analysis.
We kicked off the year with a trip to Las Vegas. As every year, CES is our first trip after the Christmas holidays. Not only did we have countless highly productive meetings with customers, partners and other industry stakeholders, but we also announced our collaboration with SAT & Garmin. Together, we devised an innovative solution for drowsiness detection by integrating our Cabin Eye software stack, SAT's sleep onset prediction algorithm, and Garmin's smartwatch technology.
In February, we embarked on an exciting collaboration with Brainchip, a leading provider of neuromorphic processors. By combining our expertise in driver and occupant analysis with Brainchip's state-of-the-art processor, we aimed to revolutionize driving safety and enhance the overall driving experience. Our joint efforts focused on maximizing efficiency and precision while minimizing energy consumption to deliver unparalleled results.
In March, we were awarded another large series production project with a new customer (stay tuned for exciting announcements).
While we dedicated considerable efforts to external collaborations, we also prioritized the enhancement of our internal processes and quality management. We are delighted to announce that our commitment has been recognized through our successful achievement of the TISAX Level 3 certification, which also encompasses the protection of prototype components. This certification proves our dedication to delivering top-tier solutions while upholding the highest industry standards.
Furthermore, we received invitations from prestigious international trade shows and events. These platforms provided us with a remarkable opportunity to showcase our technologies and expertise. Notable among these events were EAS IMS Tech 2023 and InCabin Brussels 2023, where our CTO, Michael Hödlmoser, delivered speeches outlining our expertise in deriving 3D occupant information.
Through our participation in trade shows such as the Automotive Testing Expo 2023 in Stuttgart and the highly specialized InCabin Brussels 2023 event, we had the opportunity to showcase our newest solutions for driver and occupant monitoring to key stakeholders in the industry. Our technology was also present in our partners' booths, as BHTC, Varroc and SAT showcased our joint demos. Moreover, during InCabin Brussels 2023, our CEO, Florian Seitner, joined the press briefing to announce our latest partnership with SAT and Chuhang Tech. This collaboration aims to create a multi-sensor fusion solution that combines camera and radar technologies, complemented by SAT's sleep onset prediction algorithms. Together, we strive to deliver highly accurate drowsiness detection solutions, ensuring utmost safety for drivers and passengers.
With a successful first half of 2023 behind us, we look forward to the second half of the year, when many exciting events such as IAA Mobility 2023 and CES 2024 are already being planned. Stay tuned as we share the newest developments for emotion3D and the automotive in-cabin industry!
Bang on the money Hop!
Institutional Investors & Mutual Funds have increased their holdings.....
Hopefully buying up big before Akida 2.0 is released... soon hopefully.
Insto's and Mutual Funds have just bought up approx. another 5 million shares since Tuesday!
They now hold approx 316 million BRN shares.
G'day mate,
There are no published patent applications indicating Luminar has a NN in silicon.
Their latest published application:
US2022309685A1 NEURAL NETWORK FOR OBJECT DETECTION AND TRACKING
claims a method of tracking multiple objects using a processor, i.e., it is software running on a processor. Their processing hardware includes a segmentation module, a classification module, and a tracking module, each of which can include a NN:
[0076] As seen in FIG. 3, the vehicle includes N different sensors 102 , with N being any suitable integer (e.g., 1, 2, 3, 5, 10, 20, etc.). At least "Sensor 1" of the sensors 102 is configured to sense the environment of the autonomous vehicle by physically interacting with the environment in some way, such as transmitting and receiving lasers that reflect off of objects in the environment (e.g., if the sensor is a lidar device), transmitting and receiving acoustic signals that reflect off of objects in the environment (e.g., if the sensor is a radio detection and ranging (radar) device), simply receiving light waves generated or reflected from different areas of the environment (e.g., if the sensor is a camera), and so on. Depending on the embodiment, all of the sensors 102 may be configured to sense portions of the environment, or one or more of the sensors 102 may not physically interact with the external environment (e.g., if one of the sensors 102 is an inertial measurement unit (IMU)). The sensors 102 may all be of the same type, or may include a number of different sensor types (e.g., multiple lidar devices with different viewing perspectives, and/or a combination of lidar, camera, radar, and thermal imaging devices, etc.).
[0077] The data generated by the sensors 102 is input to a perception component 104 of the sensor control architecture 100 , and is processed by the perception component 104 to generate perception signals 106 descriptive of a current state of the vehicle's environment. It is understood that the term "current" may actually refer to a very short time prior to the generation of any given perception signals 106 , e.g., due to the short processing delay introduced by the perception component 104 and other factors. To generate the perception signals 106 , the perception component 104 may include a segmentation module 110 , a classification module 112 and a tracking module 114 .
[0078] The segmentation module 110 is generally configured to identify distinct objects within the environment, as represented by the sensor data (or a portion of the sensor data). Depending on the embodiment and/or scenario, the segmentation task may be performed separately for each of a number of different types of sensor data (e.g., the segmentation module 110 may include a number of modules operating in parallel), or may be performed jointly on a fusion of multiple types of sensor data. In some embodiments where lidar devices are used, the segmentation module 110 analyzes point cloud frames to identify subsets of points within each frame that correspond to probable physical objects in the environment. In other embodiments, the segmentation module 110 jointly analyzes lidar point cloud frames in conjunction with camera (and/or other) image frames to identify objects in the environment. Examples of lidar devices/systems and point clouds are discussed in further detail below. Other suitable techniques, and/or data from other suitable sensor types, may also be used to identify objects. As used herein, references to different or distinct "objects" may encompass physical things that are entirely disconnected (e.g., with two vehicles being two different "objects"), as well as physical things that are connected or partially connected (e.g., with a vehicle being a first "object" and the vehicle's hitched trailer being a second "object").
[0079] The segmentation module 110 may use predetermined rules or algorithms to identify objects. For example, the segmentation module 110 may identify as distinct objects, within a point cloud, any clusters of points that meet certain criteria (e.g., having no more than a certain maximum distance between all points in the cluster, etc.). Alternatively, the segmentation module 110 may utilize a neural network that has been trained to identify distinct objects within the environment (e.g., using supervised learning with manually generated labels for different objects within test data point clouds, etc.), or another suitable type of machine learning based model. Example operation of the segmentation module 110 is discussed in more detail below in FIG. 5B, for an embodiment in which the perception component 104 processes point cloud data.
[0080] The classification module 112 is generally configured to determine classes (labels, categories, etc.) for different objects that have been identified by the segmentation module 110 . Like the segmentation module 110 , the classification module 112 may perform classification separately for different sets of the sensor data (e.g., the classification module 112 may include a number of modules operating in parallel), or may classify objects based on a fusion of data from multiple sensors, etc. Moreover, and also similar to the segmentation module 110 , the classification module 112 may execute predetermined rules or algorithms to classify objects, use a neural network that has been trained to classify identified objects within the environment (e.g., using supervised learning with manually generated labels for different point cloud representations of distinct objects, etc.), or use another suitable machine learning based model to classify objects. Example operation of the classification module 112 is discussed in more detail below in FIG. 5B, for an embodiment in which the perception component 104 processes point cloud data.
[0081] The tracking module 114 is generally configured to track distinct objects over time (e.g., across multiple lidar point cloud or camera image frames). The tracked objects are generally objects that have been identified by the segmentation module 110 , but may or may not be objects that were classified by the classification module 112 , depending on the embodiment and/or scenario. The segmentation module 110 may assign identifiers to identified objects, and the tracking module 114 may associate existing identifiers with specific objects where appropriate (e.g., for lidar data, by associating the same identifier with different clusters of points, at different locations, in successive point cloud frames). Like the segmentation module 110 and the classification module 112 , the tracking module 114 may perform separate object tracking based on different sets of the sensor data (e.g., the tracking module 114 may include a number of modules operating in parallel), or may track objects based on a fusion of data from multiple sensors. Moreover, and also similar to the segmentation module 110 and the classification module 112 , the tracking module 114 may execute predetermined rules or algorithms to track objects, may use a neural network that has been trained to track identified (and possibly classified) objects within the environment (e.g., using supervised learning with manually generated labels for different pairs or sets of point cloud frames, etc.), or another suitable machine learning model to track objects.
[0082] Because the blocks of FIG. 3 (and various other figures described herein) depict a software architecture rather than physical components, it is understood that, when any reference is made herein to a particular neural network or other software architecture component being "trained," or to the role of any software architecture component (e.g., sensors 102 ) in conducting such training, the operations or procedures described may have occurred on a different computing system (e.g., using specialized development software). Thus, for example, neural networks of the segmentation module 110 , classification module 112 and/or tracking module 114 may have been trained on a different computer system before being implemented within any vehicle. Put differently, the components of the sensor control architecture 100 may be included in a "final" product within a particular vehicle, without that vehicle or its physical components (sensors 102 , etc.) necessarily having been used for any training processes.
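For anyone who wants to see what that claim structure amounts to in practice, here is a minimal sketch (in Python) of the software pipeline described in paragraphs [0077]-[0081]: a perception component that runs segmentation, classification and tracking in sequence, where each stage may be a trained neural network or a plain rule-based algorithm. All class names and the trivial rule-based stand-ins are my own illustrative assumptions, not Luminar's code.

```python
# Minimal sketch of the perception pipeline in paragraphs [0077]-[0081] of
# US2022309685A1: segmentation -> classification -> tracking, each stage of
# which "may" use a trained NN. The trivial rules below are stand-ins for
# those networks; none of this is Luminar's actual implementation.
from dataclasses import dataclass
from typing import Dict, List, Sequence, Tuple

Point = Tuple[float, float, float]  # one lidar return (x, y, z)


@dataclass
class PerceivedObject:
    object_id: int
    points: List[Point]
    label: str = "unknown"


class SegmentationModule:
    """Groups point-cloud returns into distinct objects ([0078]-[0079])."""

    def __init__(self, max_gap: float = 1.0) -> None:
        self.max_gap = max_gap
        self._next_id = 0

    def segment(self, frame: Sequence[Point]) -> List[PerceivedObject]:
        clusters: List[List[Point]] = []
        for p in sorted(frame):  # naive 1-D clustering on x as a stand-in for the NN
            if clusters and p[0] - clusters[-1][-1][0] <= self.max_gap:
                clusters[-1].append(p)
            else:
                clusters.append([p])
        objects = []
        for cluster in clusters:
            objects.append(PerceivedObject(self._next_id, cluster))
            self._next_id += 1
        return objects


class ClassificationModule:
    """Assigns a class label to each segmented object ([0080])."""

    def classify(self, obj: PerceivedObject) -> None:
        obj.label = "vehicle" if len(obj.points) >= 3 else "pedestrian"


class TrackingModule:
    """Re-associates objects across frames by nearest centroid ([0081])."""

    def __init__(self) -> None:
        self._known_centroids: Dict[int, float] = {}

    def track(self, objects: List[PerceivedObject]) -> List[PerceivedObject]:
        for obj in objects:
            cx = sum(p[0] for p in obj.points) / len(obj.points)
            for known_id, known_cx in self._known_centroids.items():
                if abs(cx - known_cx) < 0.5:  # close enough: treat as the same object
                    obj.object_id = known_id
                    break
            self._known_centroids[obj.object_id] = cx
        return objects


class PerceptionComponent:
    """Mirrors perception component 104: runs the three modules in order."""

    def __init__(self) -> None:
        self.segmentation = SegmentationModule()
        self.classification = ClassificationModule()
        self.tracking = TrackingModule()

    def process(self, frame: Sequence[Point]) -> List[PerceivedObject]:
        objects = self.segmentation.segment(frame)
        for obj in objects:
            self.classification.classify(obj)
        return self.tracking.track(objects)


if __name__ == "__main__":
    perception = PerceptionComponent()
    frame = [(0.1, 0.0, 0.0), (0.4, 0.1, 0.0), (0.7, 0.0, 0.1), (9.0, 2.0, 0.0)]
    for obj in perception.process(frame):
        print(obj.object_id, obj.label, len(obj.points))
```

The point is simply that all three stages, NN or not, are ordinary software executing on a general-purpose processor, which is what distinguishes this from a NN implemented in silicon.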
I think it is improbable that Mercedes will revert to a software NN.
Now there are some interesting features of Luminar's laser projector for lidar, such as foveation, which enables the laser pulses to be concentrated more densely on objects of interest, and this could well attract MB, but I doubt they would adopt software to process the reflected pulses.
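A toy sketch of that foveation idea, just to make it concrete: a fixed per-frame pulse budget is spread over the field of view, with a larger share steered into regions of interest so they get a denser point cloud. The 70/30 split, the region format and the function name are illustrative assumptions of mine, not Luminar's control scheme.

```python
# Toy illustration of foveated pulse allocation: a fixed per-frame pulse budget
# is spread over the azimuth field of view, with a larger share steered to
# regions of interest (ROIs). The 70/30 split is an assumed figure, not
# anything published by Luminar.
from typing import Dict, List, Tuple

Region = Tuple[float, float]  # (start_deg, end_deg) azimuth slice


def allocate_pulses(fov: Region,
                    regions_of_interest: List[Region],
                    budget: int,
                    roi_share: float = 0.7) -> Dict[Region, float]:
    """Return pulse density (pulses per degree) for the background and each ROI."""
    fov_width = fov[1] - fov[0]
    roi_width = sum(end - start for start, end in regions_of_interest)
    background_width = fov_width - roi_width

    roi_budget = budget * roi_share if regions_of_interest else 0.0
    background_budget = budget - roi_budget

    densities = {fov: background_budget / background_width}
    for roi in regions_of_interest:
        # Every ROI receives the same elevated density.
        densities[roi] = roi_budget / roi_width
    return densities


if __name__ == "__main__":
    # 120-degree field of view, 100k pulses per frame, one 10-degree ROI around
    # a detected vehicle: the ROI ends up roughly 26x denser than the background.
    for region, density in allocate_pulses((-60.0, 60.0), [(5.0, 15.0)], 100_000).items():
        print(region, f"{density:.0f} pulses/deg")
```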
They know all about it. Daimler AG's truck unit took a minority stake in Luminar in October 2020, investing in the company as part of its efforts to develop self-driving trucks.
Proga, it takes at the bare minimum 3 years, sometimes 4-5 years, for automotive.
Here is a video from the RENESAS CEO from 2 months ago outlining how long it takes.
I encourage all here to listen to what he says about how long it takes for automotive.
Have a listen from the 55-minute mark:
3rd futureDESIGN Roadshow under SemiconIndia Design Linked Incentive (DLI) Scheme
03rd Roadshow at IIT Delhi on 12th May 2023 with Shri Rajeev Chandrasekhar, Hon'ble Minister of State for Electronics & Information Technology and Skill Deve... www.youtube.com
Page 87 has something about BrainChip...
As I said, the Luminar laser pulse transmitter can be controlled to focus more pulses (foveation) on objects of interest to get a better point cloud picture. That aspect of their system would be of interest to auto makers. It probably contributes to a longer range.
G'day mate,
Their processing hardware includes a segmentation module, a classification module, and a tracking module, each of which can include a NN:
I thought they were using SnapDragon?
The question is: does Luminar's lidar work? They're in cahoots with Volvo, and in April 2022 Nissan announced they would use Luminar technology to integrate advanced autonomous functionality in all their cars by 2030.
Luminar's Technologies
Luminar's Iris and Iris+ lidar, built from the chip up, are high-performing, long-range sensors that unlock safety and autonomy for cars, commercial trucks and more. www.luminartech.com
Maybe a few extra kilowatts isn't a problem for a truck or ICE, but for EVs, every Watt counts. With a CPU or GPU, speed is traded for power. To speed up software processing, they run several processors in parallel to get a faster result, burning more power in the process (see the rough numbers sketched below).
They know all about it. Daimler AG's truck unit took a minority stake in Luminar in October 2020, investing in the company as part of its efforts to develop self-driving trucks.
As I said to Dio in my last post, I don't even know if Luminar's Lidar works. It's supposed to be plug-in according to their website.
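To put rough numbers on the speed-versus-power point above: if one processor runs the perception software at some baseline latency and power, splitting the work across N processors cuts latency by roughly N but multiplies the sustained draw by roughly N, so the energy spent per frame never improves. The 30 W / 100 ms baseline below is an assumed illustrative figure, not a measured Luminar, NVIDIA or Mercedes number.

```python
# Back-of-the-envelope illustration of the speed-vs-power trade-off: idealised
# linear scaling across N parallel processors. The baseline figures are
# assumptions for illustration only.
from typing import Tuple

BASELINE_POWER_W = 30.0      # assumed draw of one processor running the NN software
BASELINE_LATENCY_S = 0.100   # assumed time for that processor to handle one frame


def parallel_cost(n_processors: int) -> Tuple[float, float, float]:
    """Latency divides by N, sustained power multiplies by N, energy/frame is flat."""
    latency = BASELINE_LATENCY_S / n_processors
    power = BASELINE_POWER_W * n_processors
    energy_per_frame = power * latency  # joules per frame, unchanged by parallelism
    return latency, power, energy_per_frame


if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        latency, power, energy = parallel_cost(n)
        print(f"{n} processor(s): {latency * 1000:5.1f} ms/frame, "
              f"{power:5.0f} W sustained, {energy:.1f} J/frame")
```

Under that idealised scaling the sustained draw is what grows, which is exactly the "every Watt counts" concern for an EV running the stack continuously.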
Agree, it takes 2 years or more to test products in vehicles. No way MB will be changing now.
I don't think Mercedes would change direction within 1.5 months like that, when these products are out soon, in 6 months, lol. They would likely be starting up production very soon lol. I feel some may have been convinced by the shorters that 20 cents is on the cards, and now the price has moved north.
Agree, it takes 2 years or more to test products in vehicles. No way MB will be changing now.
I think this type of 'I feel' negative post is generated by shorters hoping to sow seeds of doubt.
Take no notice.
I was just replying to @chapman89. MB have known, researched and tested Luminar for 4 years, so it isn't new to them as he was trying to suggest. From memory, someone posted a fortnight ago that MB was always going to use Luminar in one of their vehicles but may have increased the number, which prompted me to take another look.
Maybe a few extra kilowatts isn't a problem for a truck or ICE, but for EVs, every Watt counts. With a CPU or GPU, speed is traded for power. To speed up software processing, they run several processors in parallel to get a faster result, burning more power in the process.
Their software patent for object tracking was only filed in mid-2020.
Thanks Proga,
I was just replying to @chapman89. MB have known, researched and tested Luminar for 4 years, so it isn't new to them as he was trying to suggest. From memory, someone posted a fortnight ago that MB was always going to use Luminar in one of their vehicles but may have increased the number, which prompted me to take another look.
Volvo are already using it in their SUV, the EX90, so this isn't new tech being developed or only for trucks. The fully electric successor to Volvo Cars' XC90, to be revealed in 2022, will come with state-of-the-art sensors, including LiDAR technology developed by Luminar and an autonomous driving computer powered by the NVIDIA DRIVE Orin™ system-on-a-chip, as standard.
Do they ever have a chip, @Slade, but what a mistake using the wrong pictures on your presentations and website. @Diogenese, you looked into the NVIDIA DRIVE Orin™ system-on-a-chip last year.