BRN Discussion Ongoing

stockduck

Regular
@SERA2g This may be of interest.......I too have my suspicions about NXP and this is another Dot.........I think this "like" by Rob Telson means something, not nothing.......


View attachment 1406
Wonderful dot research.......

leads to...


and...


"
smartmicro, the Braunschweig, Germany-based specialist in automotive radar technology with over 20 years of experience in this field, pioneered automotive corner radar technology as early as 2003, when we developed the first long-range corner radar for the lane change assist function, going to mass production in 2005 with Audi and VW. Today more than 150 employees are developing the latest front, corner and imaging radar technology. smartmicro delivers engineering services for automotive 1st-tier suppliers and runs its own production lines for high-performance automotive products as well as for traffic management (ITS) solutions.

"
 
  • Like
Reactions: 18 users

butcherano

Regular
Sebastian Schmitt isn't a moron:


Yeah...fair call. Maybe that was a tad harsh. Brainchip must have taken note of his comment though, because they're following him on Twitter now.
 
  • Like
Reactions: 14 users

Jasonk

Regular
Intel realising they can't be the master of everything?

An Intel-TSMC CPU? Intel's Tile Architecture to Mix and Match Chip Tech

Intel's new CPU roadmap mentions a Tile-based architecture that taps chip technologies from Intel and external foundries.






On a side note, I have come across a lot of material that sounds like Brainchip without Brainchip being mentioned.

Out of interest, if a company uses Brainchip IP in its chip design, are they required to say so in the specifications and technical documentation? I've spent the last few days looking at up-and-coming hardware/chip designs.
 
  • Like
  • Thinking
  • Fire
Reactions: 27 users
D

Deleted member 118

Guest
[quoting Jasonk's Intel post above]

 
  • Like
Reactions: 4 users

uiux

Regular
Yeah...fair call. Maybe that was a tad harsh. Brainchip must have taken note of his comment though, because they're following him on Twitter now.

I think he is friends with a few of the PhDs on the team
 
  • Like
  • Thinking
Reactions: 8 users

JK200SX

Regular
Hi 1000 eyes
Slightly unusual request. Over on the Brainchip Nanose thread, Bacon Lover has found a 2022 two-page magazine article relating to Nanose and the SniffPhone. The only problem is it appears to be written in Hebrew. The Google Translate function is not performing for those of us over there, so if you are a wizard with translation, or better still read Hebrew, your assistance would be greatly appreciated.
FF.
Hello FF,

Do you have a link to the article?
 
  • Like
Reactions: 2 users

Labsy

Regular
It may already be known to most fellow shareholders, but I would like to share my train of thought with you without being annoying.

The article I came across is from 2017.

https://www.all-electronics.de/automotive-transportation/keynotes-automobil-elektronik.html

If you read the cache text you get the following link:

https://webcache.googleusercontent....mobil-elektronik.html+&cd=3&hl=de&ct=clnk&gl= en

and this translated into English here:
https://www-all--electronics-de.tra..._sl=de&_x_tr_tl=en&_x_tr_hl=de&_x_tr_pto=wapp

The statement by the CEO of Nvidia in today's context is particularly interesting:


...

Artificial intelligence

For many visitors, the special atmosphere and the familiar character of this industry get-together in Ludwigsburg are an important reason for their participation. However, one guest was a bit irritated by the free exchange of ideas: "We wouldn't be able to have a conference like this, where competitors sit together and talk about new trends," emphasized Nvidia's CEO Jensen Huang right at the beginning of his keynote speech. "We prefer to keep the good ideas to ourselves." But Huang was happy to talk about the current projects at Nvidia. The starting point for his remarks was the development of performance in different computing architectures. While the microprocessor with its sequential data processing is showing signs of saturation in the increase in performance, this does not apply to parallel data processing with graphics processors (GPUs). Here, more transistors continued to mean more power.

Jensen Huang, Nvidia: "Parallel data processing with graphics processors ensures further increases in computing power." (Photo: Matthias Baumgartner)
With regard to specific applications in the automotive sector, Huang put forward a thesis that was somewhat surprising in view of the concentrated expertise in the audience: "At the moment there are very few people in the world who really understand the enormous computing power required for autonomous driving." He added that, in addition to performance, particularly high energy efficiency is just as important. While conventional high-performance computers draw a few thousand watts for this, GPU-based systems such as Nvidia's Drive PX solution make do with considerably less.

Nvidia primarily relies on deep learning technology, in which artificial neural networks are trained with corresponding data sets in order to recognize traffic signs, for example. Such an approach requires massively parallel data processing, for which GPU-based hardware is particularly suited. The big difference between deep learning and conventional programming methods is that the source code is not written manually but resides in the data with which a deep learning system is trained. According to Huang: "The data is the source code." With deep learning, the computer writes the software itself, so to speak, with the learned experience stored in the neural network.

Jensen Huang, Nvidia: "We wouldn't be able to hold a conference like this." (Photo: Matthias Baumgartner)
Both the methods of deep learning and the hardware platform developed by Nvidia can be used not only for the development of highly automated driving functions up to autonomous driving, but also for completely different applications. As an example, Huang cited collaborative robotics, in which industrial robots are trained to interact with workers in the production process without endangering them. Nvidia has developed a virtual parallel world for training robots, in which the virtual robots are trained. Analogous to this "Holodeck", the same is to take place in the automotive world.

...
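Huang's "the data is the source code" line is worth unpacking. Here's a minimal Python sketch of what it means: the training loop below is fixed code, yet the "program" it produces (the weights) is determined entirely by the toy data it is shown. My own illustration with made-up numbers, obviously nothing to do with Nvidia's stack.

```python
import numpy as np

# Fixed "compiler": a plain gradient-descent training loop.
# The resulting behaviour (the weights) comes only from the data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # 200 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # labels implicit in the data

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on weights
    b -= lr * float(np.mean(p - y))         # gradient step on bias

print("learned 'program':", w, b)  # change the data, get a different program
```

Feed it different labels and the same loop writes you a different "program" - which is Huang's point.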


In connection with the announcement by Mercedes-Benz that they are using Brainchip technology in their latest concept vehicle,

and another announcement by Mercedes-Benz that Nvidia is granted a high share of the sales of future vehicle technologies,

I suspect that Nvidia has been linked to Brainchip for some time (since 2017?). In my opinion, after such reports it's about time for Nvidia to put on their pants and open the curtain; too much fear of the competition also conveys a lack of self-confidence in their own products... what do you think?

Hope the links work, sorry if not. :unsure:

Thank you all for your great contribution and support.
God I hope you are right... it would mean our wildest dreams have come true. Then imagine a NASDAQ listing with Nvidia, Renesas and MegaChips, amongst other IP customers, on board... wow.....
Investors would be like fans crying and fainting at a Michael Jackson concert.....
 
  • Like
  • Haha
Reactions: 29 users
Hey @Diogenese

Very happy to be proved wrong on this one, but GrAI Matter Labs (GML) look to also have a chip/similar tech available for trial?

This May 2020 article mentions that both Brainchip and GrAI Matter Labs were working to commercialise chips. Have they also found success?

I must be wrong

@uiux in recent correspondence on HC, you have used an 8-point comparison to Brainchip

I unfortunately do not have your technical abilities but was keen to understand how the GML technology compares

Cheers
TLS
The following is what GrAI Matter say about themselves. On the basis of this, they are only working on vision, and in a way which probably limits their future potential. They certainly do not have on-chip learning or one-shot learning, and before running they require training on the whole image so that they can then start to detect changes via spikes.
My opinion only so DYOR
FF

AKIDA BALLISTA:

"GrAI Matter, meanwhile, is treading a line between neuromorphic SNNs and the more common artificial neural network (ANN). “We’ve taken a hybrid approach. We do event-based processing,” said Mahesh Makhijani, vice president of business development at GrAI Matter. While SNNs work well for such an approach, they’re also harder to train – and suitable data sets for training are hard to come by.

GrAI is using a more traditional architecture, but rather than fully processing each video frame, it processes only those pixels that have changed from the prior frame. So the first frame is processed as a whole, while subsequent frames require less processing. “We’ve inherited the event-based architecture from spiking networks, but we have parallel processing like Hailo and Mythic and others,” continued Makhijani. “We don’t do in-memory compute as analog, but we do digital near-memory compute.”

The company doesn’t allocate one processor per node, so frames that need more processing may have higher latency if there is a need to time-share processors. For sparsely changing frames, latency and power are reduced.


Fig. 2: In the top image, every node executes for every frame. In the bottom image, only those pixels that have changed are processed, keeping many nodes inactive for frames with few changes. Source: GrAI Matter"
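That "process only the changed pixels" idea is easy to picture in code. A quick Python sketch of event-based frame processing in the spirit of the GrAI description above - toy arrays and a made-up threshold, not their implementation:

```python
import numpy as np

def process_stream(frames, threshold=0.05):
    """First frame is processed in full; later frames only where pixels changed."""
    prev = None
    for i, frame in enumerate(frames):
        if prev is None:
            active = np.ones(frame.shape, dtype=bool)  # whole first frame
        else:
            active = np.abs(frame - prev) > threshold  # changed pixels only
        # Real hardware would skip downstream compute for inactive pixels;
        # here we just report how much work each frame needs.
        print(f"frame {i}: processing {active.mean():.1%} of pixels")
        prev = frame

rng = np.random.default_rng(1)
base = rng.random((64, 64))
still = base.copy()                                 # nothing moved
moved = base + (rng.random((64, 64)) < 0.02) * 0.5  # ~2% of pixels changed
process_stream([base, still, moved])
```

Static frames cost almost nothing, which is exactly where the latency and power savings come from.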
 
  • Like
Reactions: 13 users
[quoting FF's GrAI Matter post above]

Thanks for the clarification FF

In that case it sounds like they have been forced to change tack and focus their efforts elsewhere

Likely due to Brainchip beating them to the punch and blocking the path with patents

Another “competitor” left in Brainchip’s wake

Onwards and upwards ⬆️

TLS
 
  • Like
Reactions: 20 users

Dhm

Regular
Hoping you guys can answer me this: last night I was out with friends and I was telling them about Brainchip and its multi-sector advantages, notably, in last night's conversation, with EVs. I was asked about Teslas, for example. What percentage - roughly - of Tesla computing is currently Edge-based, and what is Cloud-based?
 
  • Like
Reactions: 3 users

JK200SX

Regular
Sorry, missed your message. This should be the link:

Heading

The "electronic nose" will save lives:
The nano-lottery dials will alert you
Diseases and infections through sniffing only

Sub heading/para:

The system is being developed by Nanoz Medical and includes high-caliber nanoparticle sensing systems that will be able to detect infectious diseases and warn of precancerous tumors by "smelling" the breath alone. "This is nothing less than a revolution in medicine; it will save lives and lead to efficiency in the health system," say the company's executives.

First paragraph:

And plans future applications in additional areas

Our cantor Oh Yod revolutionized the burden of medicine.
Using as a test, innovative diagnostics, not beating.
Neh. We develop a test using a blown mind.
Another will be able to perform a suture and will help with a quick diagnosis
Of Bufen-identical diseases that will support sharpening
Tilt to the ghost of the ghost treatment that the nabra winery is following.
Ne is toxic to patients with diseases such as hard dementia
And he was an early Nahal of his crabs and moth-seren.
A winepress that does not require optimal tools to diagnose.
Says Odit Marom to Beck, a joint message from Zor
Laying Nan Medical 16:10) the one I will classify; Spitta
The "electron nose" that stands in front of the technols.
Of the field of medicine


That's all I've got so far. The rest is coming up as gobbledygook - may need someone fluent in Hebrew?
 
  • Like
Reactions: 11 users

Jasonk

Regular
Hoping you guys can answer me this: last night I was out with friends and I was telling them about Brainchip and its multi-sector advantages, notably, in last night's conversation, with EVs. I was asked about Teslas, for example. What percentage - roughly - of Tesla computing is currently Edge-based, and what is Cloud-based?
Without going back over documents I've previously seen, I couldn't tell you the split, but the below will give you an idea of why on-board AI is a must. I went large on Telstra when it dipped into the $2.90 range a while back, based on the below.

Autonomous vehicles will generate as much as 40 terabytes of data an hour from cameras, radar, and other sensors—equivalent to an iPhone's use over 3,000 years—and suck in massive amounts more to navigate roads, according to Morgan Stanley (17 Sept 2021).
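Worth sanity-checking that number. A back-of-envelope in Python, using the 40 TB/hour figure quoted above (my arithmetic, and an illustrative link speed):

```python
# 40 TB of sensor data per hour, converted to a sustained rate.
bytes_per_sec = 40e12 / 3600
print(f"{bytes_per_sec / 1e9:.1f} GB/s")   # ~11.1 GB/s of raw sensor data

# A 1 Gb/s uplink moves 0.125 GB/s, so streaming it all to the cloud
# would need roughly 90x more bandwidth than that.
print(f"{bytes_per_sec / 0.125e9:.0f}x a 1 Gb/s link")
```

You simply can't ship that off the car in real time, which is the whole Edge argument.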

Bloomberg: "Driverless Cars' Need for Data Is Sparking a New Space Race"
https://www.bloomberg.com › articles



I would be surprised if SpaceX were ever given a license to operate in major population centres in Australia; when I last checked, SpaceX could only take the regional scraps. IMO
 
  • Like
Reactions: 11 users

M_C

Founding Member

Silicon Labs’ latest families of wireless-enabled SoCs for IoT applications for the first time include a hardware AI/ML accelerator. The upgrade is indicative of the growing popularity of AI/ML techniques for a variety of IoT markets, including smart home, medical and industrial. Dedicated AI/ML hardware on-chip improves power consumption, critical to many IoT applications, even bringing AI/ML within reach for more power-sensitive IoT applications.

“You’ve always been able to run machine learning algorithms on an M-class processor; the trick is, can you do it in an energy-efficient way?” said Ross Sabolcik, general manager for IoT industrial and commercial products at Silicon Labs. “If you burn so much energy making the calculations, you might as well push it to the cloud, if you have the bandwidth. Our focus was not only to be able to run AI and ML, but to be able to do it in a really efficient way.”

The BG24 and MG24 families, with Bluetooth capability and multi-protocol capability, respectively, will be the first devices in Silicon Labs’ portfolio to feature a new, in-house developed AI/ML accelerator. The accelerator offloads AI/ML workloads from adjacent Arm Cortex-M33 microcontroller cores in applications such as smart home, medical and industrial IoT.

Sabolcik said the company’s hardware accelerator can speed up IoT AI/ML workloads up to four times with a resulting six-fold power savings (compared to using the Cortex-M33). Such power savings are suitable for battery-powered IoT devices. Latency is also improved, compared to sending data back and forth to the cloud for processing.
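A hedged reading of those numbers, since "power savings" is ambiguous (my arithmetic, not Silicon Labs' spec sheet): if the 6x refers to instantaneous watts and the job also finishes 4x sooner, energy per inference drops by both factors; if the 6x already means energy, it's just 6x.

```python
# Energy = power x time. Two readings of "4x faster, 6x power savings".
speedup, power_saving = 4.0, 6.0
print(f"if 6x is watts:  {power_saving * speedup:.0f}x less energy per inference")
print(f"if 6x is energy: {power_saving:.0f}x less energy per inference")
```

Either way it's a big deal for battery-powered IoT devices.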

 
  • Like
  • Fire
Reactions: 23 users

Diogenese

Top 20
[quoting TLS's GrAI Matter Labs question above]
Hi TLS,

I don't have any hard performance data for GraiMatter.

GraiMatter have 13 patent applications. Their chip is time-multiplexed whereas Akida is asynchronous (free running). Time multiplexing is slower than asynchronous operation, as all time multiplexed actions are delayed waiting for the clock to tick over, whereas asynchronous actions occur in real time.
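That clock-slot point can be shown with a toy model (my own sketch, nothing from either vendor's design): in a time-multiplexed core each neuron is only serviced at its slot in the sweep, so a spike that just missed its slot waits nearly a full sweep, while an idealised event-driven fabric handles it on arrival.

```python
def time_multiplexed_wait(spike_time, neuron_id, n_neurons, tick=1e-6):
    """Wait (seconds) until the next clock slot assigned to this neuron."""
    slot = neuron_id * tick          # neuron's slot within one sweep
    sweep = n_neurons * tick         # time to visit all neurons once
    sweeps_to_wait = -(-(spike_time - slot) // sweep)  # ceiling division
    return max(slot + sweeps_to_wait * sweep - spike_time, 0.0)

def asynchronous_wait(spike_time):
    return 0.0                       # idealised: handled as it arrives

# A spike for neuron 0 arriving just after its slot in a 1024-neuron sweep
# waits almost the whole ~1 ms sweep before it is serviced:
print(time_multiplexed_wait(spike_time=1e-9, neuron_id=0, n_neurons=1024))
print(asynchronous_wait(spike_time=1e-9))
```

Per spike that's queuing delay the asynchronous design simply doesn't have.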

https://worldwide.espacenet.com/pat...blication/WO2020043761A1?q=pa = "Grai matter"

Their NPU seems much more complex than the Akida NPU with additional memories interspersed in the decision process. More components = more transistors = more power usage:

Akida NPU:



GraiMatter NPU:



WO2020025680A1 DATA PROCESSING MODULE, DATA PROCESSING SYSTEM AND DATA PROCESSING METHOD

One approach is to mimic such a complex system with a time-multiplexed design wherein a plurality of neural units share a processing facility. Since digital hardware can run orders of magnitude faster than the speed at which biological neurons work, the shared processing facility can realistically emulate neuron behavior, while this approach saves space to implement a higher density of virtual neurons and their synapses.

...

The data processing module operates as a spiking neural network, wherein neural unit states are updated in a time-multiplexed manner. The improved data processing module comprises a combination of independently addressable memory units that determine the network topology. The first of these memory units is an input synapse memory unit, which may be indexed with an input synapse identification number and provides, for each identified input synapse, input synapse properties including a neural unit identification number having the identified input synapse as an input to receive firing event messages, and a weight to be assigned to such messages received at that input. The second of these memory units is an output synapse memory unit, which may be indexed with an output synapse identification number and provides, for each identified output synapse, output synapse properties including an input synapse identification number which is the destination for firing event messages and a delay (if any) with which such messages are to be delivered to that destination. The third of these memory units is an output synapse slice memory unit, which may be indexed with a neural unit identification number and specifies, for each identified neural unit, a range of indexes in the output synapse memory unit.
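Those three memory units are easier to follow as plain lookup tables. A minimal Python sketch of the routing the patent text describes - all IDs, weights and delays below are invented for illustration:

```python
# 1) input synapse id -> (destination neural unit, weight)
input_synapse_mem = {
    0: {"neural_unit": 7, "weight": 0.5},
    1: {"neural_unit": 9, "weight": -0.2},
}
# 2) output synapse id -> (destination input synapse, delivery delay)
output_synapse_mem = {
    0: {"dest_input_synapse": 0, "delay": 2},
    1: {"dest_input_synapse": 1, "delay": 0},
}
# 3) neural unit id -> its range of output synapses
output_slice_mem = {3: range(0, 2)}

def fire(neural_unit_id, t):
    """Route one firing event from a neural unit through the three tables."""
    for syn_id in output_slice_mem[neural_unit_id]:
        out = output_synapse_mem[syn_id]
        dest = input_synapse_mem[out["dest_input_synapse"]]
        print(f"t={t + out['delay']}: unit {dest['neural_unit']} "
              f"receives weight {dest['weight']}")

fire(3, t=0)  # unit 3 fires; its two output synapses deliver the event
```

Every spike takes three dependent memory lookups before it lands, which is the "additional memories interspersed in the decision process" point above.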
 
  • Like
  • Fire
Reactions: 19 users
Hoping you guys can answer me this: last night I was out with friends and I was telling them about Brainchip and its multi-sector advantages, notably, in last night's conversation, with EVs. I was asked about Teslas, for example. What percentage - roughly - of Tesla computing is currently Edge-based, and what is Cloud-based?
Hi Dhm

I have not heard this question asked before, and as a result have not heard any answers; however, I would think we need answers to the following as well:

The first question is: what is the Edge? Brainchip defines the Edge, where EVs are concerned, as the whole vehicle; anything that is not done on the vehicle is not being done at the Edge.

The second question is: what is the cost of the computing being done on a Tesla? This cost has components such as the actual cost of the architecture being used and the power consumed by it to perform these functions; and, on that side, to what extent does the physical weight of the architecture reduce the amount of power available for the driving wheels?

Certainly, for most functions Tesla runs a connected network and is constantly exchanging data with the cloud; however, some compute must be occurring on the vehicle, which, using Brainchip's definition, is at the Edge.

The following, though, begs the question of just how Edge Tesla's Edge really is:
(from September, 2021)
"TESLA MAKES CARS. Now, it’s also the latest company to seek an edge in artificial intelligence by making its own silicon chips.

At a promotional event last month, Tesla revealed details of a custom AI chip called D1 for training the machine-learning algorithm behind its Autopilot self-driving system. The event focused on Tesla’s AI work and featured a dancing human posing as a humanoid robot the company intends to build.

Tesla is the latest nontraditional chipmaker to design its own silicon. As AI becomes more important and costly to deploy, other companies that are heavily invested in the technology—including Google, Amazon, and Microsoft—also now design their own chips.

At the event, Tesla CEO Elon Musk said squeezing more performance out of the computer system used to train the company’s neural network will be key to progress in autonomous driving. “If it takes a couple of days for a model to train versus a couple of hours, it’s a big deal,” he said.

Tesla already designs chips that interpret sensor input in its cars, after switching from using Nvidia hardware in 2019. But creating a powerful and complex kind of chip needed to train AI algorithms is a lot more expensive and challenging.

“If you believe that the solution to autonomous driving is training a large neural network, then what followed was exactly the kind of vertically integrated strategy you’d need,” says Chris Gerdes, director of the Center for Automotive Research at Stanford, who attended the Tesla event.

Many car companies use neural networks to identify objects on the road, but Tesla is relying more heavily on the technology, with a single giant neural network known as a “transformer” receiving input from eight cameras at once.

“We are effectively building a synthetic animal from the ground up,” Tesla’s AI chief, Andrej Karpathy, said during the August event. “The car can be thought of as an animal. It moves around autonomously, senses the environment and acts autonomously.”

Transformer models have provided big advances in areas such as language understanding in recent years; the gains have come from making the models larger and more data-hungry. Training the largest AI programs requires several million dollars worth of cloud computer power.

David Kanter, a chip analyst with Real World Technologies, says Musk is betting that by speeding the training, “then I can make this whole machine—the self-driving program—accelerate ahead of the Cruises and the Waymos of the world,” referring to two of Tesla’s rivals in autonomous driving.

Gerdes, of Stanford, says Tesla’s strategy is built around its neural network. Unlike many self-driving car companies, Tesla does not use lidar, a more expensive kind of sensor that can see the world in 3D. It relies instead on interpreting scenes by using the neural network algorithm to parse input from its cameras and radar. This is more computationally demanding because the algorithm has to reconstruct a map of its surroundings from the camera feeds rather than relying on sensors that can capture that picture directly.

But Tesla also gathers more training data than other car companies. Each of the more than 1 million Teslas on the road sends back to the company the video feeds from its eight cameras. Tesla says it employs 1,000 people to label those images—noting cars, trucks, traffic signs, lane markings, and other features—to help train the large transformer. At the August event, Tesla also said it can automatically select which images to prioritize in labeling to make the process more efficient.

Gerdes says one risk of Tesla’s approach is that, at a certain point, adding more data may not make the system better. “Is it just a matter of more data?” he says. “Or do neural networks’ capabilities plateau at a lower level than you hope?”

Answering that question is likely to be expensive either way."

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Wow
Reactions: 23 users

MrEdge

Member
A new accelerator chip called “Hiddenite” that can achieve state-of-the-art accuracy!

How well will this compete with AKD1000 if commercialised?

 
  • Like
  • Thinking
Reactions: 2 users

JK200SX

Regular

[quoting M_C's Silicon Labs post above]

Interesting, Todd Vierra liked this also.....

 
  • Like
  • Thinking
Reactions: 17 users

M_C

Founding Member

[quoting M_C's Silicon Labs post above]

Silicon Labs' Most Capable Family of SoCs

The single-die BG24 and MG24 SoCs combine a 78 MHz ARM Cortex-M33 processor, high-performance 2.4 GHz radio, industry-leading 20-bit ADC, an optimized combination of Flash (up to 1536 kB) and RAM (up to 256 kB), and an AI/ML hardware accelerator for processing machine learning algorithms while offloading the ARM Cortex-M33, so applications have more cycles to do other work. Supporting a broad range of 2.4 GHz wireless IoT protocols, these SoCs incorporate the highest security with the best RF performance/energy-efficiency ratio in the market.

Availability

EFR32BG24 and EFR32MG24 SoCs in 5 mm x 5 mm QFN40 and 6 mm x 6 mm QFN48 packages are shipping today to Alpha customers and will be available for mass deployment in April 2022. Multiple evaluation boards are available to designers developing applications. Modules based on the BG24 and MG24 SoCs will be available in the second half of 2022.

To learn more about the new BG24 family, go to: http://silabs.com/bg24.
To learn more about the new MG24 family, go to: http://silabs.com/mg24.
To learn more about how Silicon Labs supports AI and ML, go to: http://silabs.com/ai-ml.
 
  • Like
  • Fire
Reactions: 9 users

Dhm

Regular
[quoting FF's Tesla at the Edge reply above in full]
Thank you, I found that source as well. I like the animal analogy; however, I see little evidence that the 'transformer' is more than just a 'gatherer' that passes the data it receives on to the cloud. I plan to phone Tesla here in Sydney tomorrow about this. My bet is the response will be 'umm, err....'
 
  • Like
  • Haha
Reactions: 7 users