Boab
I wish I could paint like Vincent
Interesting little video on why ChatGPT needs a neuromorphic computer.
View attachment 33358
View attachment 33359
Learning 🏖
Dr Brad works at Sandia National Laboratories. He's very bullish on the whole neuromorphic scene.
'pologies - internet dropped out

Hi @Diogenese
The following is an extract from Luminar's 28.2.23 Annual Report:
Adjacent Markets
Adjacent markets such as last mile delivery, aerospace and defense, robotics and security offer use cases for which our technology is well suited. Our goal is to scale our core markets and utilize our robust solutions to best serve these adjacent markets where it makes sense for us and our partners.
Our Products
Our Iris and other products are described in further detail below:
Hardware
Iris: Iris lidar combines a laser transmitter and receiver and provides long-range, 1550 nm sensing meeting OEM specs for advanced safety and autonomy. This technology provides efficient, automotive-grade, and affordable solutions that are scalable, reliable, and optimal for series production. Iris lidar sensors are dynamically configurable dual-axis scan sensors that detect objects up to 600 meters away over a horizontal field of view of 120° and a software-configurable vertical field of view of up to 30°, providing high point densities in excess of 200 points per square degree that enable long-range detection, tracking, and classification over the whole field of view. Iris is refined to meet the size, weight, cost, power, and reliability requirements of automotive-qualified series production sensors.
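As a back-of-the-envelope check on those quoted figures (my arithmetic, not Luminar's - it treats the field of view as a flat grid of square degrees):

```python
# Point budget implied by the quoted Iris spec:
# 120 deg horizontal x 30 deg vertical field of view
# at 200 points per square degree.
h_fov_deg = 120   # horizontal field of view
v_fov_deg = 30    # max software-configurable vertical field of view
density = 200     # points per square degree (spec figure)

fov_sq_deg = h_fov_deg * v_fov_deg       # 3600 square degrees
points_per_frame = fov_sq_deg * density  # implied points per full scan

print(points_per_frame)  # 720000
```

So the spec implies on the order of 720,000 points across a full scan of the field of view, before any frame-rate considerations.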
Iris features our vertically integrated receiver, detector, and laser solutions developed by our Advanced Technologies & Services segment companies - Freedom Photonics, Black Forest Engineering, and Optogration. The internal development of these key technologies gives us a significant advantage in the development of our product roadmap.
Software
Software presently under development includes the following:
Core Sensor Software: Our lidar sensors are configurable and capture valuable information extracted from the raw point cloud to promote the development and performance of perception software. Our core sensor software features are being designed to help our commercial partners operate, integrate and control our lidar sensors, and enrich the sensor data stream before perception processing.
Perception Software: Our perception software is in design to transform lidar point-cloud data into actionable information about the environment surrounding the vehicle. This information includes classifying static objects such as lane markings, road surface, curbs, signs and buildings as well as other vehicles, pedestrians, cyclists and animals. Through internal development as well as the recent acquisition of certain assets of Solfice (aka Civil Maps), we expect to be able to utilize our point-cloud data to achieve precise vehicle localization and to create and provide continuous updates to a high definition map of a vehicle’s environment.
Sentinel: Sentinel is our full-stack software platform for safety and autonomy that will enable Proactive Safety and highway autonomy for cars and commercial trucks. Our software products are in the design and coding phase of development and had not yet achieved technological feasibility as at the end of 2022.
Competition
The market for lidar-enabled vehicle features, on and off road, is an emerging one with many potential applications in the development stage. As a result, we face competition for lidar hardware business from a range of companies seeking to have their products incorporated into these applications. We believe we hold a strong position based on both hardware product performance and maturity, and our growing ability to develop deeply integrated software capabilities needed to provide autonomous and safety solutions to our customers. Within the automotive autonomy software space, the competitive landscape is still nascent and primarily focused on developing robo-taxi technologies as opposed to autonomous software solutions for passenger vehicles. Other autonomous software providers include: in-house OEM software teams; automotive silicon providers; large technology companies and newer technology companies focused on autonomous software. We partner with several of these autonomous software providers to provide our lidar and other products. Beyond automotive, the adjacent markets, including delivery bots and mapping, among others, are highly competitive. There are entrenched incumbents and competitors, including from China, particularly around ultra-low cost products that are widely available.
We know as Facts:
1. Luminar partnered with Mercedes Benz in 2022 and does not expect its product to be in vehicles before 2025.
2. We know Mercedes-Benz, teamed with Valeo, has obtained the first European and USA approvals for Level 3 driving.
Mercedes, Valeo on Drive Pilot L3 - DVN
With Drive Pilot in the Vision EQXX, Mercedes-Benz last December received the world’s first approval for an L3 AD system under the UN Regulations recognised by most of the world’s…
www.drivingvisionnews.com
3. We know Level 3 driving involves a maximum of 60 kph on freeways with hands off the wheel, but the driver must maintain sufficient attention to retake control when warned by the vehicle to do so.
4. We know Valeo certifies its lidar to 200 metres.
5. We know that Luminar claims its Lidar is long range out to 600 metres on roads not exceeding certain undulations that could inhibit signals.
6. We know that Mercedes, Valeo and Bosch have proven systems for autonomous vehicle parking in parking stations.
7. We know that Valeo is claiming that Scala 3 will permit autonomous driving up to 130 kph and is coming out in 2025.
8. We know from the above SEC filing that Luminar is still not ready despite its advertising message that suggests it is a sure thing.
So my question is this: Luminar does not claim to support autonomous parking or certified Level 3 driving at 60 kph. It is simply promoting that it can provide long-range lidar for high-speed driving and, per their website, has been shipping one lidar sensor unit to vehicle manufacturers for installation on/in car hoods above the windscreen. Why would this exclude Valeo's system, which offers 145-degree visibility plus rear and side sensing, from continuing to do what it presently does, with Luminar adding safety on high-speed autobahns in Germany and Europe?
My opinion only DYOR
FF
AKIDA BALLISTA
Shows you the reality of the markets.

All this good news from Luminar hasn't helped their share price. Back below where they started.
View attachment 33360
www.therobotreport.com
Suffice to say, I'M SO EXCITED!!!

Holy cow!
1 billion euros in orders for Scala 3 ...
... and BrainChip is in a Joint Development with Valeo!
https://smallcaps.com.au/brainchip-joint-development-agreement-akida-neuromorphic-chip-valeo/
BrainChip signs joint development agreement for Akida neuromorphic chip with Valeo
By George Tchetvertakov - June 9, 2020
In a JD, it is likely that there would be no licence, just a share of receipts based on relative contribution.
LdN said we had a sweet spot for LiDAR, sort of like Prophesee, nViso, SiFive ...
It's time one filled one's boots ...
Specifically, when you don't understand, don't explain:
MARCH 30, 2023
Luminar announces new lidar technology and startup acquisition
A series of technology announcements were made during a recent investor day event, illustrating Luminar’s ambitious roadmap for bringing safety and autonomy solutions to the automotive industry.
Eric van Rees
via Luminar
During CES 2023, Luminar announced a special news event scheduled for the end of February. Dubbed “Luminar Day”, a series of technology releases and partnership announcements were made during a live stream, showing Luminar’s ambitious plans and roadmap for the future. In addition to developing lidar sensors, Luminar plans to integrate different hardware and software components for improved vehicle safety and autonomous driving capabilities through acquisitions and partnerships with multiple car manufacturers and technology providers. It also plans to scale up production of these solutions, anticipating large-scale market adoption of autonomous vehicles in the next couple of years.
A new lidar sensor
Luminar’s current sensor portfolio has been extended with the new Iris+ sensor (and associated software), which comes with a range of 300 meters. This is 50 meters more than the current maximum range of the Iris sensor. The sensor design is such that it can be integrated seamlessly into the roofline of production vehicle models. Mercedes-Benz announced it will integrate Iris+ into its next-generation vehicle lineup. The sensor will enable greater performance and collision avoidance of small objects at up to autobahn-level speeds, enhancing vehicle safety and the autonomous capabilities of a vehicle. Luminar has plans for an additional manufacturing facility in Asia with a local partner to support the vast global scale of upcoming vehicle launches with Iris+, as well as a production plant in Mexico that will be operated by contract manufacturer Celestica.
New Iris+ sensor, via Luminar
Software development: Luminar AI Engine release
The live stream featured the premiere of Luminar’s machine learning-based AI Engine for object detection in 3D data captured by lidar sensors. Since 2017, Luminar has been working on AI capabilities on 3D lidar data to improve the performance and functionality of next-generation safety and autonomy in automotive. The company plans to capture lidar data with more than a million vehicles that will provide input for its AI engine and build a 3D model of the drivable world. To accelerate Luminar’s AI Engine efforts, an exclusive partnership was announced with Scale.ai, a San Francisco-headquartered AI applications developer that will provide data labeling and AI tools. Luminar is not the first lidar tech company to work with Scale.ai: in the past, it has worked with Velodyne to find edge cases in 3D data and curate valuable data for annotation.
Seagate lidar division acquisition
Just as with the Civil Maps acquisition announced at CES 2023 to accelerate its lidar production process, Luminar recently acquired the lidar division of data storage company Seagate. That company develops high-capacity, modular storage solutions (hard drives) for capturing massive amounts of data created by autonomous cars. Specifically, Luminar acquired lidar-related IP (intellectual property), assets and a technical team.
Additional announcements
Apart from these three lidar-related announcements, multiple announcements were made that show the scale of Luminar's ambition to provide lidar-based solutions for automotive. Take for example the new commercial agreement with Pony.ai, an autonomous driving technology company. The partnership is meant to further improve the performance and accuracy of Luminar’s AI engine for Pony.ai’s next-generation commercial trucking and robotaxi platforms. Luminar also announced the combination of three semiconductor subsidiaries into a new entity named Luminar Semiconductor. The advanced receiver, laser and processing chip technologies provided by this entity are not limited to lidar-based applications but are also used in the aerospace, medical and communications sectors.
Those of you on twitter can you please report this piece of shit also. Yes off topic, feel free to report my post, but you're just as bad as this filth if you do.
He's referring to a picture of a child
Thank you mate. I saw it and I will not ignore stuff like that

Reported
Same here.

Thank you mate. I saw it and I will not ignore stuff like that
They might find Jimmy Hoffa!

Just on this subject, did anyone else read about all the bodies they are finding as the Colorado River dries up due to the drought? Giving all the impression of having been an organised crime disposal site.
It then got me thinking about how autonomous AKIDA powered submersibles could be used to search dams and waterways.
My opinion only DYOR
FF
AKIDA BALLISTA
So basically things are set to pop now. And BRN SP is ready and loaded. Glad I bought a bunch more in the last few weeks.

This weekly chart set-up has been around since 'manual pen and quill only' days. No system is infallible, but on this occasion it has support from possible bullish long- and short-term divergences on the retail-popular short-term MACD (26,12,19) and the long-term MACD (100,50,30).
Interestingly this buy set up has also been completed on the XJO 200 this week. Has been completed a week or so ago on the US 500.
Price action suggests that a close equal to or above last week's high is bullish. Also last week's low cannot be breached.
Weekend scans by traders will throw up BRN as a buy candidate for consideration. Shorters will note.
It's wait and see now.
There's those 2 words again.

So....we all read Akida Gen 2 adds support for ViT...that's cool.
I now just read the team at Valeo AI had a Jan 23 paper accepted for CVPR'23 titled...
View attachment 33378
Hmmmm
Not that I really understand it haha
Paper here:
RangeViT: Towards Vision Transformers for 3D Semantic Segmentation in Autonomous Driving
Casting semantic segmentation of outdoor LiDAR point clouds as a 2D problem, e.g., via range projection, is an effective and popular approach. These projection-based methods usually benefit from fast computations and, when combined with techniques which use other point cloud representations...
arxiv.org
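For anyone wondering what "range projection" means in that abstract, here's a minimal sketch of the general idea (my own illustration, not code from the RangeViT paper; the function name, image size and field-of-view values are all assumptions): each 3D point is mapped to a pixel by its azimuth and elevation angles, and the pixel stores the point's range, turning the cloud into a 2D image a vision model can eat.

```python
import numpy as np

def range_projection(points, h=32, w=1024, fov_up_deg=10.0, fov_down_deg=-20.0):
    """Project an (N, 3) point cloud onto an h x w range image.

    Each pixel stores the range (metres) of the closest point that maps
    to it; empty pixels stay at -1. The assumed vertical field of view
    (fov_down_deg..fov_up_deg) sets the row mapping.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)   # per-point range
    yaw = np.arctan2(y, x)                 # azimuth in [-pi, pi]
    pitch = np.arcsin(z / rng)             # elevation angle
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    # Map azimuth to columns and elevation to rows.
    u = np.floor(0.5 * (1.0 - yaw / np.pi) * w).astype(int)
    v = np.floor((1.0 - (pitch - fov_down) / fov) * h).astype(int)
    u = np.clip(u, 0, w - 1)
    v = np.clip(v, 0, h - 1)

    image = np.full((h, w), -1.0, dtype=np.float32)
    order = np.argsort(-rng)   # write far points first so nearer ones win
    image[v[order], u[order]] = rng[order]
    return image
```

The resulting 2D range image is what projection-based methods like the one above feed to a standard image backbone, which is presumably where the ViT comes in.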
Bit about the team.
valeo.ai
ptrckprz.github.io
Main research themes
Multi-sensor scene understanding and forecasting — Driving monitoring and automation rely first on a variety of sensors (cameras, radars, laser scanners) that deliver complementary information on the surroundings of a vehicle and on its cabin. Making the best use of the outputs of each of these sensors at any instant is fundamental to understanding the complex environment of the vehicle and to anticipating its evolution in the next seconds. To this end, we explore various machine learning approaches where sensors are considered either in isolation or collectively.
Data/annotation-efficient learning — Collecting diverse enough real data, and annotating it precisely, is complex, costly, time-consuming and doomed to be insufficient for complex open-world applications. To dramatically reduce these needs, we explore various alternatives to full supervision, in particular for perception tasks: self-supervised representation learning for images and point clouds, visual object detection with no or weak supervision only, and unsupervised domain adaptation for semantic segmentation of images and point clouds, for instance. We also investigate training with fully-synthetic or generatively-augmented data.
Dependable models — When the unexpected happens, when the weather badly degrades, when a sensor gets blocked, embedded safety-critical models should continue working or, at least, diagnose the situation to react accordingly, e.g., by calling an alternative system or human oversight. With this in mind, we investigate ways to assess and improve the robustness of neural nets to perturbations, corner cases and various distribution shifts. Making their inner workings more interpretable, by design or in a post-hoc way, is also an important and challenging avenue that we explore towards more trustworthy models.