BRN Discussion Ongoing

AARONASX

Holding onto what I've got
  • Like
  • Fire
  • Love
Reactions: 7 users

Sirod69

bavarian girl ;-)
Wevolver

BrainChip Holdings Ltd is a company that specializes in neuromorphic computing. They're known for their innovative approach and strategic vision in the field of artificial intelligence (AI). Their technology aims to redefine how AI is implemented at the edge.

In this article, we share highlights from their latest investor podcast to provide a holistic view of the company's achievements, strategies, and future direction.

Listen to the full podcast here: https://lnkd.in/euwf_Bfg

--------------------------------

How to get your company on Wevolver?

Wevolver is a platform used by millions of engineers to stay up to date about the latest technologies.

Learn how your company can connect with the community and reach a global audience of engineers: https://lnkd.in/gtbsMuU2

 
  • Like
  • Fire
  • Love
Reactions: 24 users

Frangipani

Top 20
No reason to discuss this with you: there was no previous link - neither to any Merc page directly nor to this countdown.

No matter how many lines you write trying to claim the opposite, it doesn't become true just because you want it to.

This is also not meant to address you personally, as we all know how this ends... greets to @Fact Finder

What a pathetic reply.

I was simply asking you a genuine question in case I had misunderstood you, because I had assumed the link to the Mercedes media webpage had been there all this while. That is why I used “IMO” in my original reply to you.
Or are you saying there was previously no link at all on the Brainchip website when clicking on the image or the orange title of the Jan 3, 2022 press release, respectively only a direct link to the press release itself? 🤔

No need for you to get personal.
 
  • Like
  • Thinking
  • Love
Reactions: 10 users

Sirod69

bavarian girl ;-)
Please stop arguing, it's not worth it. Everyone should read what they want and think what they want about it. Arguing like this just hurts this great forum.
I don't like fights.
Work Together We Did It GIF by Holler Studios
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 71 users

IloveLamp

Top 20
 
  • Like
  • Fire
  • Thinking
Reactions: 4 users

Gies

Regular
Close connections?
 
  • Like
  • Fire
Reactions: 7 users
The ideal amount is directly related to your age and appetite for risk.

"Those who are younger can tolerate more risk, but they often have less income to invest. Those who are nearing retirement may have more money to invest, but less time to recover from any losses. Asset allocation by age plays an important role in building a sound retirement investing strategy."

I "reckon" if you have 100000 shares and you're in your 20's you're going to do quite well.

If you're in your 30's or 40's, 2 to 300 thousand.

50's to 60's, 400 thousand and above.

But it all depends on how successful this Company becomes and how many shares you still own..

There will be many, who would be wise, to never let their kids know, that they "used" to be shareholders in this Company, if it lives up to its potential (which is a dirty word).

Best keep that sort a thing a deep dark secret 🤣...

Hey, just my opinion..
I like this analogy, so what share prices would you consider reasonable for each of the following, just for shits and giggles?
2024
2025
2026
 
  • Thinking
  • Like
Reactions: 2 users

skutza

Regular
BLOG POST

Flawless Defect Detection with Edge Impulse and BrainChip

EDGE AI
By Nick Bild, Mar 15, 2024

In manufacturing environments, automated defect detection has become a key component in ensuring product quality, efficiency, and customer satisfaction. Detecting defects is crucial for several reasons. Firstly, it safeguards the reputation of the manufacturer by ensuring that only products meeting the highest standards reach the market, minimizing the risk of recalls or customer dissatisfaction. Secondly, it enhances operational efficiency by reducing waste, rework, and the overall cost of production. Finally, in industries where safety is a key consideration, such as in the automotive or aerospace sectors, defect detection can literally be a matter of life and death.

Automation of the process is increasingly preferred over manual inspections because it eliminates human error and subjectivity, resulting in more consistent and reliable inspections. Moreover, it significantly increases inspection speed and throughput, allowing for higher volumes of products to be inspected in shorter time frames. Automation also enables the integration of advanced technologies such as machine learning and computer vision, which can detect defects with higher accuracy and even anticipate potential issues before they occur.

Recent technological advances have enabled the development of many highly accurate and efficient inspection systems; however, these existing systems can be very expensive and complicated. Equipment costs, installations, and training can stretch the budgets of even large organizations, and necessary calibration procedures and other maintenance can lead to significant downtime. At best, these factors will cause manufacturers to take a hit to the bottom line. And in the worst case, smaller organizations may find themselves priced out of the automated inspection market completely, having to forego the myriad benefits.

[Image: BrainChip Akida Development Kit]
But the march of technological progress continues on, and as it does, technologies become available to wider audiences. Engineer Peter Ing recognized that recent advances in machine learning algorithms and edge computing platforms, in particular, could be leveraged to perform automated defect detection in a simple and cost-effective manner. To prove this point, Ing built a prototype detection system with a hand from Edge Impulse and BrainChip.

Smart detection

In a manufacturing environment, products are generally inspected as they zip by on a production line, commonly on a conveyor belt. As they do, each individual item must first be located, and once found, it must be determined if it looks as it should, or if there is something abnormal about it. This is a challenging thing to do in real-time, especially on a budget, because the algorithms required for these tasks can be very computationally expensive.

Ing took a two-pronged approach to deal with these challenges. On the software side, he used Edge Impulse to design and optimize an object detection algorithm for locating each individual item, and to act as a classification algorithm for spotting any defects. This took most of the complexity out of the development process, and also made it possible to gear these algorithms towards running on edge hardware platforms. On the hardware end of the equation, Ing chose to use a development kit with a BrainChip Akida neuromorphic processor. The Akida processors are hard to beat when it comes to the balance between performance and energy efficiency.

[Image: Automatically labeled training data]
Specifically, Ing used an Akida Development Kit centered around a Raspberry Pi Compute Module 4 in this project. This gives the versatility of an Arm-based Linux system for general-purpose development, with an Akida AKD1000 neuromorphic processor to make short work of machine learning workloads. He paired this with a USB webcam to capture images of products as they pass by during production.

After setting up the hardware, Ing’s next step involved capturing images to train the pair of machine learning models needed by the inspection device. To prove the concept, he collected images of gears — some normal and others defective in one way or another — and uploaded them to two separate Edge Impulse projects.

[Image: A pre-trained model from the Akida Model Zoo can be fine-tuned with Edge Impulse]

Objects, defects, and deployment

The first project focuses on object detection, leveraging Edge Impulse’s own innovative FOMO algorithm. FOMO is ideal for use on resource-constrained edge devices, as it has been demonstrated to consume just 1/30 the computing power and memory of competing object-detection models like MobileNet SSD and YOLOv5. To prepare the training data for this pipeline, Ing utilized the Auto Labeler tool. With just a few clicks, this utility will identify all objects of interest in the uploaded images and draw bounding boxes around them. Without this AI-powered boost, labeling can be a very time-consuming and tedious process.

Ing’s second project leverages a neural network classifier to identify defects in detected objects. Edge Impulse’s extensive support for BrainChip devices enabled Ing to select a pre-trained model from the Akida Model Zoo. These pre-trained models already contain a lot of knowledge about the world, which means smaller datasets can be used to fine-tune them for a particular use case. In addition to saving time, this results in the generation of more accurate models.
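
To make that fine-tuning step concrete, here is a minimal sketch in generic Keras terms. Note the assumptions: MobileNetV2 stands in for the actual Akida Model Zoo network, and the dataset folder name is hypothetical; the real workflow runs inside Edge Impulse Studio rather than a hand-written script.

```python
# Minimal fine-tuning sketch. MobileNetV2 is a stand-in for the actual
# Akida Model Zoo network; "gears_dataset/" is a hypothetical folder of
# normal/defective gear images.
import tensorflow as tf

# Load a small labeled dataset (normal vs. defective gears)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "gears_dataset/", image_size=(224, 224), batch_size=16)

# Start from a pre-trained backbone and add a fresh 2-class head
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained knowledge
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # normal / defective
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)  # small dataset, few epochs
model.save("gear_classifier.h5")
```

Because the backbone already carries general visual knowledge, only the small classification head needs to learn from scratch, which is why a modest defect dataset is enough.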

With the impulses built and trained, Ing utilized the Deployment tool to export the models in the BrainChip MetaTF Model format. This tool handles quantizing the model weights and converting the classifier to a spiking neural network for use with the Akida processor, which could otherwise be a daunting process.
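
Under the hood, that export is the kind of thing MetaTF's cnn2snn package does. Below is an assumption-laden sketch of what the quantize-and-convert step looks like; the exact function signatures vary between MetaTF releases, so treat it as an illustration rather than the Deployment tool's actual code.

```python
# Rough sketch of MetaTF conversion (cnn2snn API; exact signatures
# differ between MetaTF releases -- check the current docs).
import tensorflow as tf
from cnn2snn import quantize, convert

# Load the trained Keras classifier (hypothetical file name)
model = tf.keras.models.load_model("gear_classifier.h5")

# Quantize weights and activations to the low bit-widths Akida expects
model_quantized = quantize(model,
                           weight_quantization=4,
                           activ_quantization=4,
                           input_weight_quantization=8)

# Convert the quantized network to a spiking Akida model
model_akida = convert(model_quantized)

# Save in the .fbz format loaded by the AKD1000 runtime
model_akida.save("gear_classifier.fbz")
```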

[Image: Ing’s web application performs real-time defect detection]
Ing also developed a custom Python script to handle model inferences, and designed a simple web-based interface that shows images of gears as they roll by on a conveyor system in real-time. Annotations show if the gears look good as they pass by, or if the system finds a defect. It also gives options to swap out the object detection and classification models on the fly or adjust model settings — no need for downtime on the production line!
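
An inference script along those lines is straightforward to sketch. To be clear, this is not Ing's actual code: the akida package calls are paraphrased from memory, and the model file and label names are hypothetical.

```python
# Hedged sketch of a webcam inference loop on the Akida dev kit.
# Not Ing's code; akida API paraphrased, names hypothetical.
import cv2
import numpy as np
from akida import Model

model = Model("gear_classifier.fbz")   # converted MetaTF model
labels = ["good", "defective"]          # hypothetical class order

cap = cv2.VideoCapture(0)               # USB webcam on the dev kit
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Akida expects uint8 inputs with a leading batch dimension
    img = cv2.resize(frame, (224, 224))
    batch = np.expand_dims(img, axis=0).astype(np.uint8)
    potentials = model.predict(batch)   # forward pass on the AKD1000
    pred = labels[int(np.argmax(potentials))]
    cv2.putText(frame, pred, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("defect detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```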

In addition to his detailed project write-up, Ing has also made the Edge Impulse object detection and classification projects public. Whatever items you need an automated inspection system for, this information will give you a running start. You could have a better quality control process roughed out after a day’s work. And feel free to clone Ing’s projects — we call it sharing, not cheating.

Hi all, is it just me or does this type of post/marketing actually put us backwards? I mean, as nice as the information is, it seems like a very basic and... I don't know, unprofessional finish/polish for the company? Cheap? I can totally say it's just my impression, but maybe others feel similar? Ho hum.
 
  • Thinking
Reactions: 1 users

7für7

Top 20
Hi all, is it just me or does this type of post/marketing actually put us backwards? I mean, as nice as the information is, it seems like a very basic and... I don't know, unprofessional finish/polish for the company? Cheap? I can totally say it's just my impression, but maybe others feel similar? Ho hum.
It's an AI tech presentation via LinkedIn… what do you expect? An event with 5,000 guests, in the format Apple uses for a new iPhone or Benz for a new model? People in this tech sector don't need a show that costs hundreds of thousands of dollars just to see the progress 🤷🏻‍♂️ IMO
 
  • Like
  • Fire
  • Love
Reactions: 12 users

IloveLamp

Top 20
I like this analogy so what share prices would you consider to be reasonable for each of the following, just for shits and giggles ?.
2024
2025
2026
Stab in the dark

End of 2024 ......$3

End of 2025.......$8

End of 2026.......$25 (edit maybe $15 - $20)
 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 27 users

skutza

Regular
It's an AI tech presentation via LinkedIn… what do you expect? An event with 5,000 guests, in the format Apple uses for a new iPhone or Benz for a new model? People in this tech sector don't need a show that costs hundreds of thousands of dollars just to see the progress 🤷🏻‍♂️ IMO
If it is connected to the company and promoted by the company, then yes. But you are right: as a small start-up, we should feel and look like one.
 

Diogenese

Top 20
  • Haha
  • Like
  • Fire
Reactions: 19 users

Evermont

Stealth Mode
1000 times compute in 8 years - that should give global warming a kick along.

I couldn't find the tempest emoji so fire will have to do.

They do mention some gains in efficiency; 25x doesn't seem commensurate, though.

"...They would also deliver 25 times the energy efficiency, NVIDIA said, a key claim when the creation of AI is criticised for its ravenous needs for energy and natural resources when compared to more conventional computing."

 
  • Like
  • Fire
  • Haha
Reactions: 9 users

7für7

Top 20
If it is connected to the company and promoted by the company, then yes. But you are right: as a small start-up, we should feel and look like one.

Just to make sure I'm not misunderstanding, you typically expect a huge event from a company that costs hundreds of thousands of dollars to showcase progress in their technology that they're still refining? But because we're a startup, you'll let it slide? (Besides, it's Edge Impulse with our Akida, but okay...)

And by the way, when the next quarterly reports come out, you're probably going to be the first one to ask why we have -1.5 million in expenses. 😂🫵
 
  • Like
Reactions: 1 users

Bloodsy

Regular
  • Like
  • Thinking
  • Fire
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Edge Impulse Brings Nvidia’s Tao Toolkit To TinyML Hardware

By Sally Ward-Foxton, 03.18.2024


Edge Impulse and Nvidia have collaborated to bring Nvidia’s Tao training toolkit to tiny hardware from other silicon vendors, including microcontrollers from NXP, STMicro, Alif and Renesas, with more hardware to follow. Embedded design teams can now easily train and optimize models on Nvidia GPUs in the cloud or on-premises using Tao, then deploy on embedded hardware using Edge Impulse.
“We realized AI is expanding into edge opportunities like IoT, where Nvidia doesn’t have silicon. So we said, why not?” Deepu Talla, VP and GM for robotics and edge computing at Nvidia, told EE Times. “We’re a platform, we have no problem enabling all of the ecosystem. [What we are able] to deploy on Nvidia Jetson, we can now deploy on a CPU, on an FPGA, on an accelerator—whatever custom accelerator you have—you can even deploy on a microcontroller.”

As part of the collaboration, Edge Impulse optimized almost 88 models from Nvidia’s model zoo for resource-constrained hardware at the edge. These models are available from Nvidia free of charge. The company has also added an extension to Nvidia Omniverse Replicator that allows users to create additional synthetic training data from existing datasets.

Tao toolkit

Tao is Nvidia’s toolkit for training and optimizing AI models for edge devices. In the latest release of Tao, model export in ONNX format is now supported, which makes it possible to deploy a Tao-trained model on any computing platform.
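
To illustrate that portability, here is a minimal sketch of running an ONNX-exported Tao model with ONNX Runtime in Python; the model file name and input shape are hypothetical stand-ins, not something published by Nvidia or Edge Impulse.

```python
# Minimal sketch: deploy an ONNX-exported Tao model anywhere ONNX
# Runtime runs. "tao_model.onnx" and the input shape are hypothetical.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("tao_model.onnx")
input_name = session.get_inputs()[0].name

# Dummy NCHW image batch; a real app would feed preprocessed frames
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```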


Integration with Edge Impulse’s platform means Edge Impulse users get access to the latest research from Nvidia, including new types of models like vision transformers. Edge Impulse’s integrated development environment can handle data collection, training on your own dataset, evaluation and comparison of models for different devices, and deployment of Tao models to any hardware. Training is run on Nvidia GPUs in Edge Impulse’s cloud via API.

Nvidia Tao toolkit now has ONNX support so that models can be deployed on any hardware. (Source: Nvidia)
Why would Nvidia make tools and models it has invested heavily in available to other types of hardware?
“Nvidia doesn’t participate in all of the AI inference market,” Talla said, noting that Nvidia’s edge AI offerings, including Jetson, are built for autonomous machines and industrial robotics where heavy duty inference is required.
Beyond that, in smartphones and IoT devices: “We will not participate in that market,” he said. “Our strategy is to play in autonomous machines, where there’s multiple sensors and sensor fusion, and that’s a strategic choice we made. The tens or hundreds of companies developing products from mobile to IoT, you could say they are competitors, but it’s overall speeding up the adoption of AI, which is good.”
Making Tao available for smaller AI chips than Jetson isn’t an altruistic move, Talla said.
“A rising tide lifts all boats,” he said. “There’s a gain for Nvidia because…IoT devices will go into billions if not tens of billions of units annually. Jetson is not targeting that market. As AI adoption grows at the edge, we want to monetize it on the data center side. If somebody’s going to use our GPUs in the cloud to train their AI, we have monetized that.”
Users will save money, he said, because Tao will make it easier to train on GPUs in the cloud, improving time to market for products.
“It’s beneficial to everyone in the ecosystem,” he said. “I think this is a win-win for all of our partners in the middle, and end customers.”
Nvidia went through a lot of the same challenges facing embedded developers today in creating and optimizing models for Jetson hardware seven to eight years ago. For example, Talla said, gathering data is very difficult as you can’t cover all the corner cases, there are many open-source models to choose from and they change frequently, and AI frameworks are also continuously changing.
“Even if you master all of that, how can you create a performance model that is going to be the right size, meaning the memory footprint, especially when it comes to running at the edge?”
Tao was developed for this purpose five to eight years ago and most of it was open sourced last year.
“We want to give full control for anybody to take as many pieces as they want, to control their destiny, that’s why it’s not a closed piece of software,” Talla said.
The technical collaboration between Nvidia and Edge Impulse had several facets, Talla said. First, the teams needed to make sure models being trained in Tao were in the right format for silicon vendors’ runtime tools (edge hardware platforms typically have their own runtime compilers to optimize further). Second, Nvidia regularly updates its model zoo with state of the art models, but backporting those models to older frameworks is extremely challenging—the challenge, he said, is figuring out “whether we can keep the old models with the old frameworks despite adding newer models, something we’re still trying to figure out together.”

Model zoo

As part of the collaboration, Edge Impulse has optimized almost 88 models for the edge from Nvidia’s model zoo, Daniel Situnayake, director of ML at Edge Impulse, told EE Times.
“We’ve selected specific computer vision models from Nvidia’s Tao library that are appropriate for embedded constraints based on their trade-offs between latency, memory use and task performance,” he said.
Models like RetinaNet, YOLOv3, YOLOv4 and SSD were ideal options with slightly different strengths, he said. Because these models previously required Nvidia hardware to run, a certain amount of adaptation was required.
“To make them universal, we’ve performed model surgery to create custom versions of the models that will run on any C++ target, and we’ve created target-optimized implementations of any custom operations that are required,” Situnayake said. “For example, we’ve written fast versions of the decoding and non-maximum suppression algorithms used to create bounding boxes for object detection models.”
Further optimizations include quantization, scaling models down to run on mid-range microcontrollers like those based on Arm Cortex-M4 cores, and pre-training them to support input resolutions that are appropriate for embedded vision sensors.
“This results in seriously tiny models, for example, a YOLOv3 object detection model that uses 500 kB RAM and 1.2 MB ROM,” he said.
Models can be deployed via Edge Impulse’s EON compiler or using the silicon vendor’s toolchain. Edge Impulse’s EON Tuner hyperparameter optimization system can help users choose the optimum combination of model and hyperparameters for the user’s data set and target device.
Nvidia Omniverse Replicator integration with Edge Impulse allows users to generate synthetic data to address any gaps in their datasets. (Source: Nvidia)
Edge Impulse has also been working with Nvidia on integration with Omniverse Replicator, Nvidia’s tool for synthetic data generation. Edge Impulse users can now use Omniverse Replicator to generate synthetic image data based on their existing data for training—perhaps to address certain gaps in the dataset to ensure accurate and versatile trained models.
Edge Impulse’s integration with Nvidia Tao is currently available for hardware targets including NXP, STMicro, Alif and Renesas, with Nordic devices next in line for onboarding, the company said.

 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 47 users

My favorite quotes:
Integration with Edge Impulse’s platform means Edge Impulse users get access to the latest research from Nvidia, including new types of models like vision transformers.


Making Tao available for smaller AI chips than Jetson isn’t an altruistic move, Talla said.

“A rising tide lifts all boats,” he said. “There’s a gain for Nvidia because…IoT devices will go into billions if not tens of billions of units annually.
Jetson is not targeting that market. As AI adoption grows at the edge, we want to monetize it on the data center side. If somebody’s going to use our GPUs in the cloud to train their AI, we have monetized that.”
 
  • Like
  • Fire
  • Love
Reactions: 22 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
NVIDIA have announced DRIVE Thor SoC will be based on Blackwell GPU architecture.



[Screenshot: 2024-03-19 at 11.26.34 am]



And here's something I think is VERY INTERESTING! Thor will be paired with a "YET TO BE NAMED GPU"!!! 👀





EXTRACT

[Screenshot: 2024-03-19 at 11.35.14 am]



 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 25 users

Newk R

Regular
Can somebody please get us linked to NVIDIA......now!!!!
 
  • Haha
  • Like
  • Fire
Reactions: 23 users
I am just trying to get up to date on the Nvidia GTC news.
Nvidia's Blackwell looks really impressive, but power consumption for these ML systems is becoming mind-boggling if you add up all the instances/data centers etc.

A German tech journal had a nice quote (translated via deepl.com):
Jensen Huang jokingly mentioned that you could heat your jacuzzi with the waste heat from an NVL-72 system and gave specific data on this. At a flow rate of 2 liters per second, a system should heat the water from 25 degrees Celsius to 45 degrees. This would result in a heating capacity of around 167 kilowatts.
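
That 167 kW figure checks out against the standard Q = ṁ·c·ΔT relation for heating water; a quick sanity check:

```python
# Sanity check of the quoted figure: Q = m_dot * c_p * delta_T
m_dot = 2.0            # kg/s (2 litres of water per second)
c_p = 4186.0           # J/(kg*K), specific heat of water
delta_T = 45.0 - 25.0  # K
Q = m_dot * c_p * delta_T
print(f"{Q / 1000:.0f} kW")  # -> 167 kW, matching the article
```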


NVIDIA have announced DRIVE Thor SoC will be using Blackwell GPU architecture.



View attachment 59364


And here's something I think is VERY INTERESTING! Thor will be paired with a "YET TO BE NAMED GPU"!!!



EXTRACT

View attachment 59365


 
  • Wow
  • Like
  • Fire
Reactions: 9 users