Fullmoonfever
Good to see some game developers starting to recognise the future re neuromorphic.
Blog excerpt below. Not us specifically, but the tech we know we're at the forefront of.
How AI Models Learn and Train Themselves | Filament Games
Curious about how AI models actually learn and train themselves? Here's an in-depth look at the fascinating processes behind AI learning and training!
www.filamentgames.com
March 27, 2024 // Alex Stone

In our last blog on AI, we covered different types of AI models. But how do these digital brains actually learn and train themselves? In this blog, we're going to take a closer look at the processes behind AI learning and training, as well as cover some future considerations and caveats when it comes to using AI.
Neural Networks and Pre-Trained Models
All modern AI models rely on Neural Networks, mirroring the structure of the human brain. These networks consist of simulated neurons connected in various configurations, with each connection assigned a "weight" dictating its influence. These weights, crucial for learning, are determined through rigorous training against labeled datasets. Models like ChatGPT and DALL-E are pre-trained, with companies investing heavily in refining these weights and datasets, making them the linchpin of AI value.

This also means that the economic value, the trade secret or the recipe of AI, if you will, lies in the weights and the training datasets. The network of connections is not nearly as valuable from a business standpoint. This is why Meta and other companies have started sharing untrained models publicly. That doesn't mean there aren't plenty of open-source weights as well, but they are usually of inferior quality to what big companies with heavy investment in training time can produce. This is also why publications and authors are upset that their articles were scraped into training datasets without compensation.
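To make that concrete, here is a minimal toy sketch (my own code, not the blog's, assuming only numpy): a tiny two-layer network learning XOR from four labeled examples. "Training" is nothing more than repeatedly nudging the weights until the labeled outputs come out right, and the trained weights W1 and W2 are exactly the valuable artifact the blog is talking about.

```python
# A toy sketch: a tiny two-layer network learning XOR from labeled data.
import numpy as np

rng = np.random.default_rng(0)

# Labeled dataset: inputs X with target outputs y (the "labels").
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The "untrained model": randomly initialised weights for two layers.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer multiplies its inputs by its weights.
    h = sigmoid(X @ W1)    # hidden-layer activations
    out = sigmoid(h @ W2)  # the network's current predictions

    # Backward pass: nudge every weight to shrink the prediction error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ grad_out
    W1 -= X.T @ grad_h

# After training, the weights encode XOR; predictions should come out
# close to [[0], [1], [1], [0]].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```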
When a trained model generates a response from a stimulus, it’s called inference, akin to human thought processes. While inference is generally swift, Generative Models face computational hurdles due to their expansive output layers, driving the race for AI-capable chips.
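Continuing the toy example above, inference is just the forward pass with the weights frozen: no labels, no weight updates, only multiply-and-squash.

```python
# Inference sketch, reusing sigmoid, W1 and W2 from the training toy above.
def infer(x, W1, W2):
    # One forward pass through the frozen, pre-trained weights.
    return sigmoid(sigmoid(x @ W1) @ W2)

# Should print a value near 1, since XOR(1, 0) = 1.
print(infer(np.array([[1.0, 0.0]]), W1, W2))
```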
There are two kinds of AI-helpful chips. The kind everyone has been using to accelerate training and inference is the GPU, the exact same chip found in the graphics card of your PC, PS5, or Xbox. If you think that's strange, you are not alone. While GPUs coincidentally accelerate certain AI tasks, the future is in Neuromorphic Processors, which implement neurons directly in silicon. This means pre-trained AI models will be able to run inference entirely in hardware, though training will likely still rely on classical techniques for a while longer, so don't dump your NVIDIA stock just yet!
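For a flavour of what "neurons in silicon" means, here is a toy leaky integrate-and-fire neuron in Python, the kind of spiking unit neuromorphic chips build directly into hardware rather than simulating with matrix maths. The threshold and leak constants are illustrative, not taken from any real chip.

```python
# A toy leaky integrate-and-fire (LIF) neuron. Constants are illustrative.
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Yield 1 on timesteps where the neuron fires, else 0."""
    potential = 0.0
    for current in input_current:
        potential = potential * leak + current  # integrate input, leak charge
        if potential >= threshold:
            yield 1          # spike...
            potential = 0.0  # ...and reset
        else:
            yield 0

# Prints [0, 0, 1, 0, 0, 1, 0]: the neuron fires only once enough
# input has accumulated, and stays silent otherwise.
print(list(lif_neuron([0.3, 0.3, 0.6, 0.0, 0.2, 0.9, 0.1])))
```

The appeal for inference, roughly, is that an event-driven neuron like this does nothing (and draws little power) until input arrives, whereas a GPU grinds through dense matrix multiplies on every step regardless.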