BRN Discussion Ongoing

Diogenese

Top 20
Hi @Diogenese

The following is an extract from Luminar's 28.2.23 Annual Report:

Adjacent Markets
Adjacent markets such as last mile delivery, aerospace and defense, robotics and security offer use cases for which our technology is well suited. Our goal is to scale our core markets and utilize our robust solutions to best serve these adjacent markets where it makes sense for us and our partners.

Our Products

Our Iris and other products are described in further detail below:

Hardware
Iris: Iris lidar combines a laser transmitter and receiver and provides long-range, 1550 nm sensing meeting OEM specifications for advanced safety and autonomy. This technology provides efficient, automotive-grade and affordable solutions that are scalable, reliable, and optimal for series production. Iris lidar sensors are dynamically configurable dual-axis scan sensors that detect objects up to 600 meters away over a horizontal field of view of 120° and a software-configurable vertical field of view of up to 30°, providing high point densities in excess of 200 points per square degree that enable long-range detection, tracking, and classification over the whole field of view. Iris is refined to meet the size, weight, cost, power, and reliability requirements of automotive-qualified series production sensors.
Iris features our vertically integrated receiver, detector, and laser solutions developed by our Advanced Technologies & Services segment companies - Freedom Photonics, Black Forest Engineering, and Optogration. The internal development of these key technologies gives us a significant advantage in the development of our product roadmap.
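As a back-of-envelope check (my arithmetic, using only the figures quoted above), the stated field of view and point density imply a hefty per-frame point budget:

```python
# Back-of-envelope point budget from the figures quoted in the filing.
h_fov_deg = 120          # horizontal field of view
v_fov_deg = 30           # maximum software-configurable vertical field of view
density = 200            # points per square degree (quoted minimum)

solid_angle_sq_deg = h_fov_deg * v_fov_deg        # 3600 square degrees
points_per_frame = solid_angle_sq_deg * density   # 720,000 points
print(points_per_frame)
```

At an assumed 10 frames per second (my assumption, not Luminar's figure), that would be roughly 7.2 million points per second for downstream perception software to chew through.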

Software
Software presently under development includes the following:

Core Sensor Software:
Our lidar sensors are configurable and capture valuable information extracted from the raw point cloud to promote the development and performance of perception software. Our core sensor software features are being designed to help our commercial partners operate and integrate our lidar sensors, and to control and enrich the sensor data stream before perception processing.

Perception Software: Our perception software is being designed to transform lidar point-cloud data into actionable information about the environment surrounding the vehicle. This information includes classifying static objects such as lane markings, road surface, curbs, signs and buildings, as well as other vehicles, pedestrians, cyclists and animals. Through internal development as well as the recent acquisition of certain assets of Solfice (aka Civil Maps), we expect to be able to utilize our point-cloud data to achieve precise vehicle localization and to create and provide continuous updates to a high definition map of a vehicle's environment.

Sentinel: Sentinel is our full-stack software platform for safety and autonomy that will enable Proactive Safety and highway autonomy for cars and commercial trucks. Our software products are in the design and coding phase of development and had not yet achieved technological feasibility as at the end of 2022.

Competition
The market for lidar-enabled vehicle features, on and off road, is an emerging one with many potential applications in the development stage. As a result, we face competition for lidar hardware business from a range of companies seeking to have their products incorporated into these applications. We believe we hold a strong position based on both hardware product performance and maturity, and our growing ability to develop deeply integrated software capabilities needed to provide autonomous and safety solutions to our customers. Within the automotive autonomy software space, the competitive landscape is still nascent and primarily focused on developing robo-taxi technologies as opposed to autonomous software solutions for passenger vehicles. Other autonomous software providers include: in-house OEM software teams; automotive silicon providers; large technology companies and newer technology companies focused on autonomous software. We partner with several of these autonomous software providers to provide our lidar and other products. Beyond automotive, the adjacent markets, including delivery bots and mapping, among others, are highly competitive. There are entrenched incumbents and competitors, including from China, particularly around ultra-low cost products that are widely available."

We know as Facts:

1. Luminar partnered with Mercedes Benz in 2022 and does not expect its product to be in vehicles before 2025.

2. We know Mercedes-Benz, teamed with Valeo, has obtained the first European and USA approvals for Level 3 driving.

3. We know Level 3 driving involves a maximum of 60 kph on freeways with hands off the wheel, but the driver must maintain sufficient attention to retake control when warned by the vehicle to do so.

4. We know Valeo certifies its Lidar to 2OO metres.

5. We know that Luminar claims its Lidar is long range out to 600 metres on roads whose undulations do not inhibit signals.

6. We know that Mercedes, Valeo and Bosch have proven systems for autonomous vehicle parking in parking stations.

7. We know that Valeo is claiming that Scala 3 will permit autonomous driving up to 130 kph and is coming out in 2025.

8. We know from the above SEC filing that Luminar is still not ready despite its advertising message that suggests it is a sure thing.

So my question is: as Luminar does not claim to support autonomous parking or certified Level 3 driving at 60 kph, but is simply promoting that it can provide long-range Lidar for high-speed driving, and (from their website) has been shipping one Lidar sensor unit to vehicle manufacturers for installation on/in car hoods above the windscreen, why would this exclude Valeo's system, which offers 145 degrees of visibility plus rear and side sensing, from continuing to do what it presently does, with Luminar increasing safety on high-speed autobahns in Germany and Europe?

My opinion only DYOR
FF

AKIDA BALLISTA
'pologies - internet dropped out

Certainly the 2024 Mercedes lidar will be Valeo Scala 3. However, with its extra-long-range foveated lidar, Luminar may have an edge for Level 5, which must be able to handle a 400 kph closing speed and which, if I recall correctly, has been set at 400 m range.
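For a rough sanity check of those numbers (the 400 kph and 400 m figures are from the post above; the arithmetic is mine):

```python
# How much warning does a 400 m detection range buy at a 400 kph closing speed?
closing_speed_kph = 400
closing_speed_ms = closing_speed_kph / 3.6   # ~111 m/s
detection_range_m = 400

# Time between first detection and a head-on meeting at full closing speed.
reaction_window_s = detection_range_m / closing_speed_ms
print(round(reaction_window_s, 2))           # 3.6 seconds
```

So roughly 3.6 seconds from first detection to impact, which is why range matters so much at autobahn speeds; a 200 m sensor would halve that window.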

My point was that whether it's Valeo, which we know about, or Luminar, about which we have no confirmed association, I think we have a good shot at being in both, because Luminar cannot meet Mercedes' SWaP (size, weight and power) requirements with software-only classification. Luminar have been working with Mercedes for 2 years, and, during that time, Mercedes would have been fully aware of what Akida can do.

So even if the two projects were behind Chinese walls, Mercedes would have been within their rights to mention the public information on Akida to Luminar. Any on-going relationship between BrainChip and Luminar would have necessitated a Chinese wall within BrainChip to ensure there were no leaks of Valeo information to Luminar or vice-versa.
 
  • Like
  • Fire
  • Love
Reactions: 41 users

Kachoo

Regular
All this good news from Luminar hasn't helped their share price. Back below where they started.
View attachment 33360
Shows you the reality of the markets.

The thing is, BRN is different: they have many avenues to apply their tech, where Luminar is focused on one specific business. Not to take away from them, but Akida is way more diverse. It's the heart, well, the brain of the tech. Ha ha
 
  • Like
  • Fire
  • Love
Reactions: 14 users

Learning

Learning to the Top 🕵‍♂️
On the subject of Lidar.
Maybe Brainchip sales team could contact these guys.

As Akida has a sweet spot for Lidar.
SOSLAB are in joint development with HYUNDAI for Lidar.

Tony, if you are reading this: it's a sales lead. 😀 No commission required 😎
Screenshot_20230331_203929_Chrome.jpg
Screenshot_20230331_203948_Chrome.jpg

The VP Heesun Yoon has a few patents under his belt.
Screenshot_20230331_204104_Chrome.jpg



Learning 🏖
 
  • Like
  • Fire
  • Haha
Reactions: 14 users

Learning

Learning to the Top 🕵‍♂️
Not sure if this has been shared. Brainchip get a mention in here.

"Brain-inspired computing: Neuromorphic computing refers to chips that use one of several brain-inspired techniques to produce ultra-low–power devices for specific types of AI workloads. While “neuromorphic” may be applied to any chip that mixes memory and compute at a fine level of granularity and utilizes many-to-many connectivity, it is more frequently applied to chips that are designed to process and accelerate spiking neural networks (SNNs). SNNs, which are distinct from mainstream AI (deep learning), copy the brain’s method of processing data and communicating between neurons. These networks are extremely sparse and can enable extremely low-power chips. Our current understanding of neuroscience suggests that voltage spikes travel from one neuron to the next, whereby the neuron performs some form of integration of the data (roughly analogous to applying neural network weights) before firing a spike to the next neuron in the circuit. Approaches to replicate this may encode data in spike amplitudes and use digital electronics (BrainChip) or encode data in timing of spikes and use asynchronous digital (Intel Loihi) or analog electronics (Innatera). As these technologies (and our understanding of neuroscience) continue to mature, we will see more brain-inspired chip companies, as well as further integration between neuromorphic computing and neuromorphic sensing, where there are certainly synergies to be exploited. SynSense, for example, is already working with Inivation and Prophesee on combining its neuromorphic chip with event-based image sensors."
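As a toy illustration of the spiking behaviour that quote describes (this is a generic leaky integrate-and-fire sketch of my own, not BrainChip's, Intel's or Innatera's actual implementation):

```python
# Toy leaky integrate-and-fire neuron: each input spike is weighted and
# integrated into a membrane potential that leaks over time; when the
# potential crosses a threshold, the neuron fires and resets.
def lif_run(input_spikes, weight=0.3, leak=0.9, threshold=1.0):
    v = 0.0
    output = []
    for s in input_spikes:
        v = v * leak + weight * s   # leak, then integrate the weighted input
        if v >= threshold:
            output.append(1)        # fire a spike downstream
            v = 0.0                 # reset the membrane potential
        else:
            output.append(0)
    return output

out = lif_run([1, 1, 1, 1, 0, 0, 1, 1, 1, 1])
print(out)  # [0, 0, 0, 1, 0, 0, 0, 0, 0, 1]
```

Note how the output stream is much sparser than the input: only sustained activity crosses the threshold, which is one intuition for why spiking networks can run at such low power.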


Learning 🏖
 
  • Like
  • Fire
Reactions: 23 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Holy cow!

1 billion euros in orders for Scala 3 ...

... and BrainChip is in a Joint Development with Valeo!

https://smallcaps.com.au/brainchip-joint-development-agreement-akida-neuromorphic-chip-valeo/

BrainChip signs joint development agreement for Akida neuromorphic chip with Valeo​

By George Tchetvertakov - June 9, 2020

In a JD, it is likely that there would be no licence, just a share of receipts based on relative contribution.

LdN said we had a sweet spot for LiDaR, sort of like Prophesee, nViso, SiFive ...

It's time one filled one's boots ...
Suffice to say, I'M SO EXCITED!!! 🥳🥳🥳

 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 19 users
Those of you on twitter, can you please report this piece of shit also. Yes, off topic; feel free to report my post, but you're just as bad as this filth if you do.

He's referring to a picture of a child
Apologies but I saw it and I cannot ignore it.
 
Last edited:
  • Like
  • Wow
  • Fire
Reactions: 8 users

Colorado23

Regular
I struggle to digest the amazing volume of leads that many provide on this forum. Has anything been discussed regarding Onsemi? Interesting ecosystem partner list: https://www.onsemi.com/company/about-onsemi/ecosystem-partners

Interesting product range with strong focus on low energy. Staggering revenue annually and would be lovely to be one of the non disclosed EAP partners.
 
  • Like
  • Wow
Reactions: 10 users

Diogenese

Top 20
MARCH 30, 2023

Luminar announces new lidar technology and startup acquisition​

A series of technology announcements were made during a recent investor day event, illustrating Luminar’s ambitious roadmap for bringing safety and autonomy solutions to the automotive industry.

Eric van Rees

via Luminar
During CES 2023, Luminar announced a special news event scheduled for the end of February. Dubbed “Luminar Day”, a series of technology releases and partnership announcements were made during a live stream, showing Luminar’s ambitious plans and roadmap for the future. In addition to developing lidar sensors, Luminar plans to integrate different hardware and software components for improved vehicle safety and autonomous driving capabilities through acquisitions and partnerships with multiple car manufacturers and technology providers, as well as scaling up production of these solutions, anticipating large-scale market adoption of autonomous vehicles in the next couple of years.

A new lidar sensor​

Luminar’s current sensor portfolio has been extended with the new Iris+ sensor (and associated software), which comes with a range of 300 meters. This is 50 meters more than the current maximum range of the Iris sensor. The sensor design is such that it can be integrated seamlessly into the roofline of production vehicle models. Mercedes-Benz announced it will integrate IRIS+ into its next-generation vehicle lineup. The sensor will enable greater performance and collision avoidance of small objects at up to autobahn-level speeds, enhancing vehicle safety and the autonomous capabilities of a vehicle. Luminar has plans for an additional manufacturing facility in Asia with a local partner to support the vast global scale of upcoming vehicle launches with Iris+, as well as a production plant in Mexico that will be operated by contract manufacturer Celestica.
New Iris+ sensor, via Luminar

Software development: Luminar AI Engine release​

The live stream featured the premiere of Luminar’s machine learning-based AI Engine for object detection in 3D data captured by lidar sensors. Since 2017, Luminar has been working on AI capabilities on 3D lidar data to improve the performance and functionality of next-generation safety and autonomy in automotive. The company plans to capture lidar data with more than a million vehicles that will provide input for its AI engine and build a 3D model of the drivable world. To accelerate Luminar’s AI Engine efforts, an exclusive partnership was announced with Scale.ai, a San Francisco-headquartered AI applications developer that will provide data labeling and AI tools. Luminar is not the first lidar tech company to work with Scale.ai: in the past, it has worked with Velodyne to find edge cases in 3D data and curate valuable data for annotation.

Seagate lidar division acquisition​

Just as with the Civil Maps acquisition announced at CES 2023 to accelerate its lidar production process, Luminar recently acquired the lidar division of data storage company Seagate. That company develops high-capacity, modular storage solutions (hard drives) for capturing massive amounts of data created by autonomous cars. Specifically, Luminar acquired lidar-related IP (internet protocols), assets and a technical team.

Additional announcements​

Apart from these three lidar-related announcements, multiple announcements were made that show the scale of Luminar's ambition to provide lidar-based solutions for automotive. Take for example the new commercial agreement with Pony.ai, an autonomous driving technology company. The partnership is meant to further improve the performance and accuracy of Luminar’s AI engine for Pony.ai’s next-generation commercial trucking and robotaxi platforms. Luminar also announced the combination of three semiconductor subsidiaries into a new entity named Luminar Semiconductor. The advanced receiver, laser and processing chip technologies provided by this entity are not limited to lidar-based applications but are also used in the aerospace, medical and communications sectors.
Specifically, when you don't understand, don't explain:

"Specifically, Luminar acquired lidar-related IP (internet protocols), assets and a technical team."
 
  • Like
  • Haha
Reactions: 9 users

VictorG

Member
Those of you on twitter, can you please report this piece of shit also. Yes, off topic; feel free to report my post, but you're just as bad as this filth if you do.

He's referring to a picture of a child

Reported
 
  • Like
  • Love
  • Fire
Reactions: 19 users

VictorG

Member
  • Like
  • Fire
  • Love
Reactions: 13 users
Just on this subject, did anyone else read about all the bodies they are finding as the Colorado River dries up due to the drought? It gives the impression of having been an organised crime disposal site.

It then got me thinking about how autonomous AKIDA powered submersibles could be used to search dams and waterways.

My opinion only DYOR
FF

AKIDA BALLISTA
They might find Jimmy Hoffa!

SC
 
  • Like
  • Wow
  • Haha
Reactions: 4 users
This weekly chart set-up has been around since 'manual pen and quill only' days. No system is infallible, but on this occasion it has support from possible bullish long- and short-term divergences on the retail-popular short-term MACD (26,12,19) and the long-term MACD (100,50,30).
Interestingly, this buy set-up has also been completed on the XJO 200 this week, and was completed a week or so ago on the US 500.
Price action suggests that a close equal to or above last week's high is bullish. Also, last week's low cannot be breached.
Weekend scans by traders will throw up BRN as a buy candidate for consideration. Shorters will note.
It's wait and see now.
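For anyone wanting to replicate those divergence scans, MACD itself is simple to compute. This sketch uses the conventional (12, 26, 9) parameters rather than the variants quoted above:

```python
# Minimal MACD: difference of two exponential moving averages, plus a
# signal line (EMA of the MACD line) and histogram (their difference).
def ema(series, span):
    alpha = 2 / (span + 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def macd(close, fast=12, slow=26, signal=9):
    macd_line = [f - s for f, s in zip(ema(close, fast), ema(close, slow))]
    signal_line = ema(macd_line, signal)
    histogram = [m - s for m, s in zip(macd_line, signal_line)]
    return macd_line, signal_line, histogram

# A steadily rising price series keeps the MACD line above zero.
m, s, h = macd([float(x) for x in range(1, 61)])
```

A bullish divergence is then simply price making a lower low while the MACD line (or histogram) makes a higher low over the same bars.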
So basically things are set to pop now. And BRN SP is ready and loaded. Glad I bought a bunch more in the last few weeks.

Some serious BRN shares were wanted at the close auction today...look at the action at .495!

26CD3D9B-FE31-4D9C-822F-38E4A68AA475.png
 
  • Like
  • Fire
  • Love
Reactions: 17 users
So....we all read Akida Gen 2 adds support for ViT...that's cool.

I now just read the team at Valeo AI had a Jan 23 paper accepted for CVPR'23 titled...

IMG_20230331_194538.jpg


Hmmmm :unsure:;):LOL:

Not that I really understand it haha

Paper here:


Bit about the team.


Main research themes​

Multi-sensor scene understanding and forecasting — Driving monitoring and automatization relies first on a variety of sensors (cameras, radars, laser scanners) that deliver complementary information on the surroundings of a vehicle and on its cabin. Exploiting at best the outputs of each of these sensors at any instant is fundamental to understand the complex environment of the vehicle and to anticipate its evolution in the next seconds. To this end, we explore various machine learning approaches where sensors are considered either in isolation or collectively.

Data/annotation-efficient learning — Collecting diverse enough real data, and annotating it precisely, is complex, costly, time-consuming and doomed insufficient for complex open-world applications. To reduce dramatically these needs, we explore various alternatives to full supervision, in particular for perception tasks: self-supervised representation learning for images and point clouds, visual object detection with no or weak supervision only, unsupervised domain adaptation for semantic segmentation of images and point clouds, for instance. We also investigate training with fully-synthetic or generatively-augmented data.

Dependable models — When the unexpected happens, when the weather badly degrades, when a sensor gets blocked, embedded safety-critical models should continue working or, at least, diagnose the situation to react accordingly, e.g., by calling an alternative system or human oversight. With this in mind, we investigate ways to assess and improve the robustness of neural nets to perturbations, corner cases and various distribution shifts. Making their inner workings more interpretable, by design or in a post-hoc way, is also an important and challenging venue that we explore towards more trust-worthy models.
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Boab

I wish I could paint like Vincent
So....we all read Akida Gen 2 adds support for ViT...that's cool.

I now just read the team at Valeo AI had a Jan 23 paper accepted for CVPR'23 titled...

View attachment 33378

Hmmmm :unsure:;):LOL:
There's those 2 words again.
Semantic segmentation is a deep learning algorithm that associates a label or category with every pixel in an image. It is used to recognize a collection of pixels that form distinct categories. For example, an autonomous vehicle needs to identify vehicles, pedestrians, traffic signs, pavement, and other road features.

Segmentation.jpg
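To make the per-pixel idea concrete, here is a minimal sketch (the class names and scores are invented for illustration; a real segmentation network would produce the scores):

```python
# Toy semantic segmentation: the network produces one score per class per
# pixel; the predicted label map is the argmax over classes at each pixel.
CLASSES = ["road", "vehicle", "pedestrian", "sign"]

# Fake per-pixel class scores for a 2x3 "image": shape [rows][cols][classes].
scores = [
    [[0.1, 0.7, 0.1, 0.1], [0.6, 0.2, 0.1, 0.1], [0.2, 0.1, 0.6, 0.1]],
    [[0.8, 0.1, 0.05, 0.05], [0.1, 0.1, 0.1, 0.7], [0.5, 0.3, 0.1, 0.1]],
]

label_map = [
    [CLASSES[max(range(len(CLASSES)), key=lambda c: px[c])] for px in row]
    for row in scores
]
print(label_map)  # [['vehicle', 'road', 'pedestrian'], ['road', 'sign', 'road']]
```

Every pixel gets exactly one label, which is what produces the coloured-region images like the one above.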
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 20 users
Hello from Uruguay, I snatched up another 90,000 bits of Australia yesterday and you're not going to get them back!
 
  • Like
  • Love
  • Haha
Reactions: 44 users

charles2

Regular
  • Like
Reactions: 6 users
I think the first people who are going to revolt against AI are the ones that the AI gets smarter than first.
 
  • Haha
  • Like
Reactions: 4 users

Andy38

The hope of potential generational wealth is real
Article on Edge computer vision posted by our mates at Edge Impulse. Two things I took out of this:
Firstly, there are a lot more companies in this space than I'd realised. And secondly, although we are not mentioned in the 34 companies having a positive impact on society, I feel our impact will not only be positive, it'll be F…… ginormous!!!
Have a great weekend Chippers!!
https://omdena.com/blog/edge-computer-vision-companies/
 
  • Like
  • Fire
Reactions: 22 users
Top Bottom