BRN Discussion Ongoing

Kachoo

Regular
All this good news from Luminar hasn't helped their share price. Back below where they started.
View attachment 33360
Shows you the reality of the markets.

The thing is, BRN is different: they have many avenues to apply their tech, where Luminar is focused on one specific business. Not to take away from them, but Akida is way more diverse. It's the heart, well, the brain of the tech. Ha ha
 
  • Like
  • Fire
  • Love
Reactions: 14 users

Learning

Learning to the Top 🕵‍♂️
On the subject of Lidar.
Maybe Brainchip sales team could contact these guys.

As Akida has a sweet spot for Lidar.
SOSLAB are in joint development with HYUNDAI for Lidar.

Tony, if you are reading this: it's a sales lead. 😀 No commission required 😎
Screenshot_20230331_203929_Chrome.jpg
Screenshot_20230331_203948_Chrome.jpg

The VP Heesun Yoon has a few patents under his belt.
Screenshot_20230331_204104_Chrome.jpg



Learning 🏖
 
  • Like
  • Fire
  • Haha
Reactions: 14 users

Learning

Learning to the Top 🕵‍♂️
Not sure if this has been shared. Brainchip gets a mention in here.

"Brain-inspired computing: Neuromorphic computing refers to chips that use one of several brain-inspired techniques to produce ultra-low–power devices for specific types of AI workloads. While “neuromorphic” may be applied to any chip that mixes memory and compute at a fine level of granularity and utilizes many-to-many connectivity, it is more frequently applied to chips that are designed to process and accelerate spiking neural networks (SNNs). SNNs, which are distinct from mainstream AI (deep learning), copy the brain’s method of processing data and communicating between neurons. These networks are extremely sparse and can enable extremely low-power chips. Our current understanding of neuroscience suggests that voltage spikes travel from one neuron to the next, whereby the neuron performs some form of integration of the data (roughly analogous to applying neural network weights) before firing a spike to the next neuron in the circuit. Approaches to replicate this may encode data in spike amplitudes and use digital electronics (BrainChip) or encode data in timing of spikes and use asynchronous digital (Intel Loihi) or analog electronics (Innatera). As these technologies (and our understanding of neuroscience) continue to mature, we will see more brain-inspired chip companies, as well as further integration between neuromorphic computing and neuromorphic sensing, where there are certainly synergies to be exploited. SynSense, for example, is already working with Inivation and Prophesee on combining its neuromorphic chip with event-based image sensors."


Learning 🏖
 
  • Like
  • Fire
Reactions: 23 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Holy cow!

1 billion euros in orders for Scala 3 ...

... and BrainChip is in a Joint Development with Valeo!

https://smallcaps.com.au/brainchip-joint-development-agreement-akida-neuromorphic-chip-valeo/

BrainChip signs joint development agreement for Akida neuromorphic chip with Valeo​

By George Tchetvertakov - June 9, 2020

In a JD, it is likely that there would be no licence, just a share of receipts based on relative contribution.

LdN said we had a sweet spot for LiDAR, sort of like Prophesee, nViso, SiFive ...

It's time one filled one's boots ...
Suffice to say, I'M SO EXCITED!!! 🥳🥳🥳

 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 19 users
Those of you on Twitter, can you please report this piece of shit also. Yes, off topic; feel free to report my post, but you're just as bad as this filth if you do.

He's referring to a picture of a child
Apologies but I saw it and I cannot ignore it.
 
Last edited:
  • Like
  • Wow
  • Fire
Reactions: 8 users

Colorado23

Regular
I struggle to digest the amazing volume of leads that many provide on this forum. Has anything been discussed regarding Onsemi? Interesting ecosystem partner list: https://www.onsemi.com/company/about-onsemi/ecosystem-partners

Interesting product range with a strong focus on low energy. Staggering annual revenue; it would be lovely to be one of the non-disclosed EAP partners.
 
  • Like
  • Wow
Reactions: 10 users

Diogenese

Top 20
MARCH 30, 2023

Luminar announces new lidar technology and startup acquisition​

A series of technology announcements were made during a recent investor day event, illustrating Luminar’s ambitious roadmap for bringing safety and autonomy solutions to the automotive industry.

Eric van Rees

via Luminar
During CES 2023, Luminar announced a special news event scheduled for the end of February. Dubbed “Luminar Day”, a series of technology releases and partnership announcements were made during a live stream, showing Luminar’s ambitious plans and roadmap for the future. In addition to developing lidar sensors, Luminar plans to integrate different hardware and software components for improved vehicle safety and autonomous driving capabilities through acquisitions and partnerships with multiple car manufacturers and technology providers, as well as scaling up production of these solutions, anticipating large-scale market adoption of autonomous vehicles in the next couple of years.

A new lidar sensor​

Luminar’s current sensor portfolio has been extended with the new Iris+ sensor (and associated software), which comes with a range of 300 meters. This is 50 meters more than the current maximum range of the Iris sensor. The sensor design is such that it can be integrated seamlessly into the roofline of production vehicle models. Mercedes-Benz announced it will integrate Iris+ into its next-generation vehicle lineup. The sensor will enable greater performance and collision avoidance of small objects at up to autobahn-level speeds, enhancing vehicle safety and the autonomous capabilities of a vehicle. Luminar has plans for an additional manufacturing facility in Asia with a local partner to support the vast global scale of upcoming vehicle launches with Iris+, as well as a production plant in Mexico that will be operated by contract manufacturer Celestica.
New Iris+ sensor, via Luminar

Software development: Luminar AI Engine release​

The live stream featured the premiere of Luminar’s machine learning-based AI Engine for object detection in 3D data captured by lidar sensors. Since 2017, Luminar has been working on AI capabilities on 3D lidar data to improve the performance and functionality of next-generation safety and autonomy in automotive. The company plans to capture lidar data with more than a million vehicles that will provide input for its AI engine and build a 3D model of the drivable world. To accelerate Luminar’s AI Engine efforts, an exclusive partnership was announced with Scale.ai, a San Francisco-headquartered AI applications developer that will provide data labeling and AI tools. Luminar is not the first lidar tech company to work with Scale.ai: in the past, it has worked with Velodyne to find edge cases in 3D data and curate valuable data for annotation.

Seagate lidar division acquisition​

Just as with the Civil Maps acquisition announced at CES 2023 to accelerate its lidar production process, Luminar recently acquired the lidar division of data storage company Seagate. That company develops high-capacity, modular storage solutions (hard drives) for capturing massive amounts of data created by autonomous cars. Specifically, Luminar acquired lidar-related IP (internet protocols), assets and a technical team.

Additional announcements​

Apart from these three lidar-related announcements, multiple announcements were made that show the scale of Luminar's ambition to provide lidar-based solutions for automotive. Take for example the new commercial agreement with Pony.ai, an autonomous driving technology company. The partnership is meant to further improve the performance and accuracy of Luminar’s AI engine for Pony.ai’s next-generation commercial trucking and robotaxi platforms. Luminar also announced the combination of three semiconductor subsidiaries into a new entity named Luminar Semiconductor. The advanced receiver, laser and processing chip technologies provided by this entity are not limited to lidar-based applications but are also used in the aerospace, medical and communications sectors.
Specifically, when you don't understand, don't explain:

"Specifically, Luminar acquired lidar-related IP (internet protocols), assets and a technical team."
 
  • Like
  • Haha
Reactions: 9 users

VictorG

Member
Those of you on Twitter, can you please report this piece of shit also. Yes, off topic; feel free to report my post, but you're just as bad as this filth if you do.

He's referring to a picture of a child

Reported
 
  • Like
  • Love
  • Fire
Reactions: 19 users
Thank you mate. I saw it and I will not ignore stuff like that.
 
  • Like
  • Love
  • Fire
Reactions: 15 users

VictorG

Member
  • Like
  • Fire
  • Love
Reactions: 13 users
Just on this subject, did anyone else read about all the bodies they are finding as the Colorado River dries up due to the drought? It gives the impression of having been an organised crime disposal site.

It then got me thinking about how autonomous AKIDA powered submersibles could be used to search dams and waterways.

My opinion only DYOR
FF

AKIDA BALLISTA
They might find Jimmy Hoffa!

SC
 
  • Like
  • Wow
  • Haha
Reactions: 4 users
This weekly chart set-up has been around since 'manual pen and quill only' days. No system is infallible, but on this occasion it has support from possible bullish long- and short-term divergences on the retail-popular short-term MACD (26,12,19) and the long-term MACD (100,50,30).
Interestingly, this buy set-up has also been completed on the XJO 200 this week. It was completed a week or so ago on the US 500.
Price action suggests that a close equal to or above last week's high is bullish. Also, last week's low cannot be breached.
Weekend scans by traders will throw up BRN as a buy candidate for consideration. Shorters will note.
It's wait and see now.
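For anyone wanting to check these signals themselves: MACD is just two exponential moving averages of the close, their difference, and a signal line. A minimal sketch using the common (12, 26, 9) settings; the (26,12,19) and (100,50,30) triples above are the poster's own variants, and the price series here is a toy example:

```python
import numpy as np

def ema(series, span):
    """Exponential moving average with smoothing factor 2 / (span + 1)."""
    alpha = 2 / (span + 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return np.array(out)

def macd(close, fast=12, slow=26, signal=9):
    """MACD line = fast EMA minus slow EMA; signal = EMA of the MACD line."""
    macd_line = ema(close, fast) - ema(close, slow)
    signal_line = ema(macd_line, signal)
    histogram = macd_line - signal_line
    return macd_line, signal_line, histogram

# Toy example: a steadily rising price series.
close = list(range(1, 61))
macd_line, signal_line, histogram = macd(close)
# In a steady uptrend the MACD line sits above zero.
```

A bullish divergence of the kind mentioned above is price making a lower low while the MACD line makes a higher low.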
So basically things are set to pop now. And BRN SP is ready and loaded. Glad I bought a bunch more in the last few weeks.

Some serious BRN shares were wanted at the close auction today...look at the action at .495!

26CD3D9B-FE31-4D9C-822F-38E4A68AA475.png
 
  • Like
  • Fire
  • Love
Reactions: 17 users
So....we all read Akida Gen 2 adds support for ViT...that's cool.

I now just read the team at Valeo AI had a Jan 23 paper accepted for CVPR'23 titled...

IMG_20230331_194538.jpg


Hmmmm :unsure:;):LOL:

Not that I really understand it haha

Paper here:


Bit about the team.


Main research themes​

Multi-sensor scene understanding and forecasting — Driving monitoring and automatization relies first on a variety of sensors (cameras, radars, laser scanners) that deliver complementary information on the surroundings of a vehicle and on its cabin. Exploiting at best the outputs of each of these sensors at any instant is fundamental to understand the complex environment of the vehicle and to anticipate its evolution in the next seconds. To this end, we explore various machine learning approaches where sensors are considered either in isolation or collectively.

Data/annotation-efficient learning — Collecting diverse enough real data, and annotating it precisely, is complex, costly, time-consuming and doomed insufficient for complex open-world applications. To reduce dramatically these needs, we explore various alternatives to full supervision, in particular for perception tasks: self-supervised representation learning for images and point clouds, visual object detection with no or weak supervision only, unsupervised domain adaptation for semantic segmentation of images and point clouds, for instance. We also investigate training with fully-synthetic or generatively-augmented data.

Dependable models — When the unexpected happens, when the weather badly degrades, when a sensor gets blocked, embedded safety-critical models should continue working or, at least, diagnose the situation to react accordingly, e.g., by calling an alternative system or human oversight. With this in mind, we investigate ways to assess and improve the robustness of neural nets to perturbations, corner cases and various distribution shifts. Making their inner workings more interpretable, by design or in a post-hoc way, is also an important and challenging venue that we explore towards more trust-worthy models.
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Boab

I wish I could paint like Vincent
So....we all read Akida Gen 2 adds support for ViT...that's cool.

I now just read the team at Valeo AI had a Jan 23 paper accepted for CVPR'23 titled...

View attachment 33378

Data/annotation-efficient learning — Collecting diverse enough real data, and annotating it precisely, is complex, costly, time-consuming and doomed insufficient for complex open-world applications. To reduce dramatically these needs, we explore various alternatives to full supervision, in particular for perception tasks: self-supervised representation learning for images and point clouds, visual object detection with no or weak supervision only, unsupervised domain adaptation for semantic segmentation of images and point clouds, for instance. We also investigate training with fully-synthetic or generatively-augmented data.
There's those 2 words again.
Semantic segmentation is a deep learning task that assigns a label or category to every pixel in an image. It is used to recognise the collections of pixels that form distinct categories; for example, an autonomous vehicle needs to identify vehicles, pedestrians, traffic signs, pavement and other road features.

Segmentation.jpg
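The per-pixel labelling can be sketched in a few lines: a segmentation network ends in a score per class per pixel, and the label map is just the argmax over classes. The 3-class setup and the scores below are invented purely for illustration:

```python
import numpy as np

# Hypothetical 3-class setup, invented for this example.
CLASSES = ["road", "vehicle", "pedestrian"]

def logits_to_label_map(logits):
    """Collapse a (num_classes, H, W) score volume into an (H, W) map
    holding the winning class index for every pixel."""
    return np.argmax(logits, axis=0)

# Toy 2x2 "image": made-up per-pixel class scores from a segmentation head.
logits = np.array([
    [[2.0, 0.1], [0.3, 0.2]],   # road scores
    [[0.5, 3.0], [0.1, 0.4]],   # vehicle scores
    [[0.1, 0.2], [4.0, 0.1]],   # pedestrian scores
])
label_map = logits_to_label_map(logits)
print(label_map)  # [[0 1], [2 1]]: pixel (1,0) is labelled "pedestrian"
```

Real networks produce the same kind of score volume, just at full image resolution and with many more classes.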
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 20 users
Hello from Uruguay! I snatched up another 90,000 bits of Australia yesterday and you're not going to get them back!
 
  • Like
  • Love
  • Haha
Reactions: 44 users
I think the first people who are going to revolt against AI are the ones that the AI gets smarter than first.
 
  • Haha
  • Like
Reactions: 4 users

Andy38

The hope of potential generational wealth is real
Article on edge computer vision posted by our mates at Edge Impulse. Two things I took out of this: firstly, there are a lot more companies in this space than I'd realised; and secondly, although we are not mentioned in the 34 companies having a positive impact on society, I feel our impact will not only be positive, it'll be F…… ginormous!!!
Have a great weekend, Chippers!!
https://omdena.com/blog/edge-computer-vision-companies/
 
  • Like
  • Fire
Reactions: 22 users
D

Deleted member 118

Guest
  • Like
  • Fire
Reactions: 6 users