BRN Discussion Ongoing

MDhere

Regular
Probably worthless info, and likely mentioned already at some stage by longer-term holders than myself.

Her contribution at BrainChip sounds significant, but not sure this adds to a potential Qualcomm-BrainChip relationship. Timing might not be quite right either 🤷🏼‍♂️

[Attachments 17291–17294]
Not sure about a Qualcomm collaboration apart from her having worked there, but it appears she may be connected to Renesas via Renesas' acquisition of Dialog Semiconductor, which AONDevices seems to have had a previous partnership with. AONDevices would need to use our IP, as BrainChip holds all the necessary patents, and back in February the Company wrote to an investor classifying AONDevices as a non-risk to the Company. So she will need to get on board if she wants to be on top 🤣
 
  • Like
  • Fire
Reactions: 6 users

Xhosa12345

Regular
Anybody have some intel on what's going on with President Xi?
 
  • Haha
Reactions: 1 users

Diogenese

Top 20
When I read all the speculation regarding links to customers, or concerns that BrainChip may have been replaced by another player on a known automotive project, I remember the following:

1. AKIDA TM & IP are processor agnostic - meaning they can work with Nvidia, ARM or Qualcomm processors and any RISC-V;

2. AKIDA TM & IP are sensor agnostic - meaning every type of sensor from Cameras, DVS, LiDAR, Radar & Ultrasonics used in automotive applications;

3. The CEO Sean Hehir said the following at the AGM:

“When I started to evaluate joining BrainChip I started with the technology. Being a Silicon Valley based executive I had easy access to some of the world’s best technical minds who I engaged to evaluate the core technology. The overwhelming feedback was the technology is visionary in its design, unparalleled in flexibility, and transformative in performance.

As the engagement progressed, I met with many of the core team members and concluded that I had never met a more talented, dedicated, and focused group of individuals in all my years in the technology business.”

4. Mercedes-Benz is still listed on BrainChip's website as trusting BrainChip, and is on record describing BrainChip as the artificial intelligence experts and AKIDA as offering up to 10 times better performance than competitor solutions;

5. NVISO's presentation shows AKIDA offering at least ten times the frame rate of a Jetson Nano (1,000 fps versus 100 fps);

6. Both ARM and SiFive have partnered with Brainchip since the Mercedes Benz reveal and confirm the Brainchip claim that AKIDA is processor agnostic;

7. The CEO Sean Hehir stated in a presentation that they continue to work with Mercedes-Benz, and in the most recent presentations this month Mercedes-Benz still features;

8. BrainChip has partnered with Prophesee, and AKIDA TM & IP have been confirmed as sensor agnostic;

9. BrainChip announced Valeo as an EAP customer in 2020, continues to state on its website that it is trusted by Valeo, and Valeo also continues to appear in slides at BrainChip presentations;

10. Sensor fusion is integral to the Mercedes-Benz multi-sensor redundancy approach to safe autonomy at highway speeds, and AKIDA TM & IP offer this capability as they can process cameras, DVS, radar, LiDAR and ultrasonics;

So, just as I said to @MC🐠 a couple of years ago, it continues to be the case that I see dots joining BrainChip and its AKIDA TM & IP under every rock, behind every door and under every bed in the automotive industry, and why would I not, when it is “visionary in its design, unparalleled in flexibility, and transformative in performance.”

My opinion only DYOR
FF


AKIDA BALLISTA

"The overwhelming feedback was the technology is visionary in its design, unparalleled in flexibility, and transformative in performance."

... but apart from that, what has Akida ever done for neuromorphic computing?
 
  • Haha
  • Like
  • Wow
Reactions: 25 users

Diogenese

Top 20
4. Mercedes-Benz is still listed on BrainChip's website as trusting BrainChip, and is on record describing BrainChip as the artificial intelligence experts and AKIDA as offering up to 10 times better performance than competitor solutions;

What is the probability that, after having worked with BrainChip for X years and boosted Akida's performance to the moon, and remembering that "Hey Mercedes!" was just an example of their use of Akida, MB will switch to another tech within 9 months?
 
  • Like
  • Love
  • Fire
Reactions: 22 users

Diogenese

Top 20
"10. Sensor Fusion is integral to the Mercedes’ Benz multi sensor redundancy approach to safe autonomy at highway speeds and AKIDA TM & IP offers this opportunity as it can process cameras, DVS, Radar, LiDAR and Ultrasonics;"

MB also indicated their intention of standardizing on chips to reduce inventory.
 
  • Like
  • Love
  • Fire
Reactions: 18 users

Boab

I wish I could paint like Vincent
  • Like
Reactions: 4 users
"The overwhelming feedback was the technology is visionary in its design, unparalleled in flexibility, and transformative in performance."

... but apart from that, what has Akida ever done for neuromorphic computing?
Not much, unless you want to explore deep space or distant planets with rovers that can achieve autonomous speeds of 20 km/h, or bring handheld diagnostics to remote and/or disadvantaged peoples around the globe; otherwise all they could do is monitor the structural integrity of buildings and infrastructure to warn of catastrophic failure early enough to prevent loss of life.

Pretty much a write-off otherwise, unless you count home security, fall monitoring, and vehicle safety systems from braking to driver-fatigue detection.

Then there is the nonsense about cochlear improvement, artificial sight and intelligent prostheses.

And the rubbish around monitoring air and water quality, maintaining correct electricity allocation in cities during peak demand, and reducing compute power used in the cloud by 97%, but this is all just fluff.

If it was any good someone would be using it to intelligently ensure my toast is browned the same on both sides.

Now that would raise up humanity and make us all feel we are heading in the right direction, evolutionarily speaking.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Fire
Reactions: 25 users
Just apropos nothing in particular, could Qualcomm be described as a telecom company?
I think you just did, if I am reading you correctly, so yes. 😎
 
  • Like
  • Haha
Reactions: 9 users

Diogenese

Top 20
This is from the fine print in the Sharp and Makita agreement and shows just how closely related the semiconductor industry is behind the corporate veil:

“2. Purpose of and Reason for the Offering for Third-Party Allotment” and “3.(2) Intended Use of Proceeds” above, the proceeds from the Third-Party Allotment Capital Increase will be allocated to the development of products for Makita using Sharp’s electronics technologies (including its sensor technologies). Additionally, by developing the Robotics business, which is one of the new businesses conceptualized by Sharp, this business alliance will increase Sharp’s corporate value and help sustainable growth. Therefore, Sharp has determined that the number of shares to be issued and the degree of dilution of shares by the Third-Party Capital Allotment Increase, and the total number of shares to be issued and the degree of dilution of shares by the Third-Party Capital Allotment Increase, the DENSO Third-Party Allotment Capital Increase, the LIXIL Third-Party Allotment Capital Increase, the QUALCOMM Second Third-Party Allotment Capital Increase and the Samsung Electronics Japan Third-Party Allotment Capital Increase, are reasonable”

GREASE might have been the word in a musical but ECOSYSTEM is clearly the word in semiconductors.

My opinion only DYOR
FF

AKIDA BALLISTA
... but does it fit the scansion?
 
  • Like
Reactions: 2 users
... but does it fit the scansion?
I am sure Leonard Cohen could have made it fit with a minor fall and a major lift…

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Love
  • Like
  • Haha
Reactions: 6 users
Bit of Prophesee LinkedIn action from a couple of hours ago, on the back of recent news. Kai-Fu Lee attended Carnegie Mellon University and lists them as one of five companies he is interested in.


 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 24 users
4. Mercedes-Benz is still listed on BrainChip's website as trusting BrainChip, and is on record describing BrainChip as the artificial intelligence experts and AKIDA as offering up to 10 times better performance than competitor solutions;

What is the probability that, after having worked with BrainChip for X years and boosted Akida's performance to the moon, and remembering that "Hey Mercedes!" was just an example of their use of Akida, MB will switch to another tech within 9 months?
After deciding to make its own unsolicited announcement about the fact that they are working with BrainChip.

I took a straw poll of one, and it's Blind Freddie's opinion that the odds are zero to minus one.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
Reactions: 12 users
Looks like a case for on chip learning to me....need an Akida :)

Hadn't thought of this, but the last paragraph I highlighted would appear pretty non-negotiable to me: how many dollars and how much time would you spend "training" for each and every country, with its own unique animals, terrain, etc., for each ADAS system in each model of vehicle?

Think it was @Stable Genius who might have posted not long ago about the problem this animal could cause.
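
For what "on-chip learning" means in practice here, below is a minimal, generic Python sketch (my own illustration, not BrainChip's actual MetaTF API): class prototypes are stored as embedding vectors, so a brand-new class, say a kangaroo, can be learned from a single example in the field without retraining the underlying network. The `embed` helper and the 128-value feature size are assumptions purely for illustration.

```python
import numpy as np

# Generic illustration of on-device one-shot learning: class prototypes are
# stored as embedding vectors, so a brand-new class can be registered from a
# single example in the field, with no gradient-based retraining.

class PrototypeClassifier:
    def __init__(self):
        self.prototypes = {}  # label -> prototype embedding

    def learn(self, label, embedding):
        """One-shot: store (or running-average) the embedding as the class prototype."""
        if label in self.prototypes:
            self.prototypes[label] = 0.5 * (self.prototypes[label] + embedding)
        else:
            self.prototypes[label] = np.asarray(embedding, dtype=np.float32)

    def predict(self, embedding):
        """Classify by nearest prototype (Euclidean distance)."""
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(self.prototypes[lbl] - embedding))


def embed(frame):
    """Hypothetical stand-in for a fixed on-device feature extractor."""
    return np.asarray(frame, dtype=np.float32).reshape(-1)[:128]


clf = PrototypeClassifier()
clf.learn("deer", embed(np.random.rand(32, 32)))
clf.learn("kangaroo", embed(np.random.rand(32, 32)))  # one new sample, no retraining
print(clf.predict(embed(np.random.rand(32, 32))))
```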


Is ‘fake data’ the real deal when training algorithms?

The use of synthetic data is a cost‑effective way to teach AI about human responses. But can it help eliminate bias and make self‑driving cars safer?

Laurie Clarke
Sat 18 Jun 2022 23.00 AEST


You're at the wheel of your car but you’re exhausted. Your shoulders start to sag, your neck begins to droop, your eyelids slide down. As your head pitches forward, you swerve off the road and speed through a field, crashing into a tree.

But what if your car’s monitoring system recognised the tell-tale signs of drowsiness and prompted you to pull off the road and park instead? The European Commission has legislated that from this year, new vehicles be fitted with systems to catch distracted and sleepy drivers to help avert accidents. Now a number of startups are training artificial intelligence systems to recognise the giveaways in our facial expressions and body language.

These companies are taking a novel approach for the field of AI. Instead of filming thousands of real-life drivers falling asleep and feeding that information into a deep-learning model to “learn” the signs of drowsiness, they’re creating millions of fake human avatars to re-enact the sleepy signals.

“Big data” defines the field of AI for a reason. To train deep learning algorithms accurately, the models need to have a multitude of data points. That creates problems for a task such as recognising a person falling asleep at the wheel, which would be difficult and time-consuming to film happening in thousands of cars. Instead, companies have begun building virtual datasets.

Synthesis AI and Datagen are two companies using full-body 3D scans, including detailed face scans, and motion data captured by sensors placed all over the body, to gather raw data from real people. This data is fed through algorithms that tweak various dimensions many times over to create millions of 3D representations of humans, resembling characters in a video game, engaging in different behaviours across a variety of simulations.

In the case of someone falling asleep at the wheel, they might film a human performer falling asleep and combine it with motion capture, 3D animations and other techniques used to create video games and animated movies, to build the desired simulation. “You can map [the target behaviour] across thousands of different body types, different angles, different lighting, and add variability into the movement as well,” says Yashar Behzadi, CEO of Synthesis AI.
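
To make the "map the target behaviour across thousands of body types, angles and lighting" idea concrete, here is a rough Python sketch of domain randomisation, with made-up parameter names and ranges rather than any vendor's actual pipeline: each sample's rendering parameters are drawn at random, and because the scene is generated, its label comes for free.

```python
import random
from dataclasses import dataclass

# Rough sketch of domain randomisation for a driver-monitoring dataset:
# rendering parameters are sampled at random and the label ("drowsy" vs
# "alert") is known by construction, so no manual annotation is needed.

@dataclass
class SyntheticSample:
    body_type: str
    camera_yaw_deg: float      # camera angle relative to the driver
    lux: float                 # scene illumination
    eyelid_closure: float      # 0.0 = eyes fully open, 1.0 = fully closed
    label: str

def generate_sample():
    drowsy = random.random() < 0.5
    return SyntheticSample(
        body_type=random.choice(["slim", "average", "broad"]),
        camera_yaw_deg=random.uniform(-30.0, 30.0),
        lux=random.uniform(5.0, 10_000.0),
        eyelid_closure=random.uniform(0.6, 1.0) if drowsy else random.uniform(0.0, 0.3),
        label="drowsy" if drowsy else "alert",
    )

dataset = [generate_sample() for _ in range(10_000)]  # scale up as far as needed
print(dataset[0])
```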

Using synthetic data cuts out a lot of the messiness of the more traditional way to train deep learning algorithms. Typically, companies would have to amass a vast collection of real-life footage and low-paid workers would painstakingly label each of the clips. These would be fed into the model, which would learn how to recognise the behaviours.

The big sell for the synthetic data approach is that it’s quicker and cheaper by a wide margin. But these companies also claim it can help tackle the bias that creates a huge headache for AI developers. It’s well documented that some AI facial recognition software is poor at recognising and correctly identifying particular demographic groups. This tends to be because these groups are underrepresented in the training data, meaning the software is more likely to misidentify these people.

Niharika Jain, a software engineer and expert in gender and racial bias in generative machine learning, highlights the notorious example of Nikon Coolpix’s “blink detection” feature, which, because the training data included a majority of white faces, disproportionately judged Asian faces to be blinking. “A good driver-monitoring system must avoid misidentifying members of a certain demographic as asleep more often than others,” she says.

The typical response to this problem is to gather more data from the underrepresented groups in real-life settings. But companies such as Datagen say this is no longer necessary. The company can simply create more faces from the underrepresented groups, meaning they’ll make up a bigger proportion of the final dataset. Real 3D face scan data from thousands of people is whipped up into millions of AI composites. “There’s no bias baked into the data; you have full control of the age, gender and ethnicity of the people that you’re generating,” says Gil Elbaz, co-founder of Datagen. The creepy faces that emerge don’t look like real people, but the company claims that they’re similar enough to teach AI systems how to respond to real people in similar scenarios.
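
A back-of-the-envelope sketch of the rebalancing claim above, with invented counts: each underrepresented group is topped up with synthetic faces until it matches the largest group, so every group ends up with an equal share of the final dataset.

```python
# Back-of-the-envelope rebalancing: top every demographic group up to the
# largest group's count with synthetic faces, so each ends up with an equal
# share of the final training set. Counts are invented for illustration.

real_counts = {"group_a": 50_000, "group_b": 12_000, "group_c": 3_000}

target = max(real_counts.values())
synthetic_needed = {group: target - n for group, n in real_counts.items()}
print(synthetic_needed)  # {'group_a': 0, 'group_b': 38000, 'group_c': 47000}

total = target * len(real_counts)
final_share = {group: (n + synthetic_needed[group]) / total for group, n in real_counts.items()}
print(final_share)       # every group now holds an equal 1/3 of the dataset
```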

There is, however, some debate over whether synthetic data can really eliminate bias. Bernease Herman, a data scientist at the University of Washington eScience Institute, says that although synthetic data can improve the robustness of facial recognition models on underrepresented groups, she does not believe that synthetic data alone can close the gap between the performance on those groups and others. Although the companies sometimes publish academic papers showcasing how their algorithms work, the algorithms themselves are proprietary, so researchers cannot independently evaluate them.

In areas such as virtual reality, as well as robotics, where 3D mapping is important, synthetic data companies argue it could actually be preferable to train AI on simulations, especially as 3D modelling, visual effects and gaming technologies improve. “It’s only a matter of time until… you can create these virtual worlds and train your systems completely in a simulation,” says Behzadi.

This kind of thinking is gaining ground in the autonomous vehicle industry, where synthetic data is becoming instrumental in teaching self-driving vehicles’ AI how to navigate the road. The traditional approach – filming hours of driving footage and feeding this into a deep learning model – was enough to get cars relatively good at navigating roads. But the issue vexing the industry is how to get cars to reliably handle what are known as “edge cases” – events that are rare enough that they don’t appear much in millions of hours of training data. For example, a child or dog running into the road, complicated roadworks or even some traffic cones placed in an unexpected position, which was enough to stump a driverless Waymo vehicle in Arizona in 2021.

With synthetic data, companies can create endless variations of scenarios in virtual worlds that rarely happen in the real world. “Instead of waiting millions more miles to accumulate more examples, they can artificially generate as many examples as they need of the edge case for training and testing,” says Phil Koopman, associate professor in electrical and computer engineering at Carnegie Mellon University.

AV companies such as Waymo, Cruise and Wayve are increasingly relying on real-life data combined with simulated driving in virtual worlds. Waymo has created a simulated world using AI and sensor data collected from its self-driving vehicles, complete with artificial raindrops and solar glare. It uses this to train vehicles on normal driving situations, as well as the trickier edge cases. In 2021, Waymo told the Verge that it had simulated 15bn miles of driving, versus a mere 20m miles of real driving.
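
As a trivial check of the Waymo figures quoted above, the simulated-to-real ratio comes out at roughly 750 to 1:

```python
# Trivial check of the Waymo figures quoted above: simulated vs real miles.
simulated_miles = 15_000_000_000   # 15bn simulated miles (2021)
real_miles = 20_000_000            # 20m real-world miles
print(simulated_miles / real_miles)  # 750.0 -> roughly 750 simulated miles per real mile
```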

An added benefit to testing autonomous vehicles out in virtual worlds first is minimising the chance of very real accidents. “A large reason self-driving is at the forefront of a lot of the synthetic data stuff is fault tolerance,” says Herman. “A self-driving car making a mistake 1% of the time, or even 0.01% of the time, is probably too much.”

In 2017, Volvo’s self-driving technology, which had been taught how to respond to large North American animals such as deer, was baffled when encountering kangaroos for the first time in Australia. “If a simulator doesn’t know about kangaroos, no amount of simulation will create one until it is seen in testing and designers figure out how to add it,” says Koopman. For Aaron Roth, professor of computer and cognitive science at the University of Pennsylvania, the challenge will be to create synthetic data that is indistinguishable from real data. He thinks it is plausible that we’re at that point for face data, as computers can now generate photorealistic images of faces. “But for a lot of other things,” – which may or may not include kangaroos – “I don’t think that we’re there yet.”
 
  • Like
  • Fire
  • Love
Reactions: 19 users
Looks like a case for on chip learning to me....need an Akida :)
I personally believe we need cars that "think like you", not cars that do not think but that we hope have been trained with enough samples, real or synthetic, to get it right. Has anyone here had a builder's wheelbarrow fly off the back of a ute, land on its tyre and bounce over their car? It happened to me, and I doubt that Phil Koopman will have factored that into his training data, synthetic or otherwise.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Wow
  • Haha
Reactions: 18 users
Looks like a case for on chip learning to me....need an Akida :)

Yep. @Fullmoonfever I’ve hit 3 Roos and sold my motorbike (Suzuki GSX750F) to prevent meeting the grim reaper early!

They just jump straight at you, or into you. Brains the size of a pea!

Our local crash repairer is building a new house: mostly paid for by collisions with kangaroos!

The sooner they have something similar to Valeo’s Scala 3 in operation, the safer we’ll be! “See the invisible”

A blast from the past! I never get tired of watching this media release, and I remember watching the presenter’s reaction during the Q&A, which surprisingly isn’t included!



Enjoy

Edit: if you want to skip the sales pitch, it starts at 1:40.
 
  • Love
  • Like
  • Fire
Reactions: 13 users

Dang Son

Regular
Just apropos nothing in particular, could Qualcomm be described as a telecom company?
Qualcomm Incorporated operates as a multinational semiconductor and telecommunications equipment company. The Company develops and delivers digital wireless communications products and services based on CDMA digital technology.
 
Last edited:
  • Like
  • Fire
Reactions: 18 users