Nice find
@Rocket577
It definitely shows that Nintendo are playing in our "sandbox". Still talking back propagation though.
I highlighted a paragraph I found interesting.
patents.google.com
Systems and methods of neural network training
Abstract
A computer system is provided for training a neural network that converts images. Input images are applied to the neural network and a difference in image values is determined between predicted image data and target image data. A Fast Fourier Transform is taken of the difference. The neural network is trained based on the L1 Norm of the resulting frequency data.
- TECHNICAL OVERVIEW
- [0002]
The technology described herein relates to machine learning and training machine learned models or systems. More particularly, the technology described includes subject matter that relates to training neural networks to convert or upscale images by using, for example, Fourier Transforms (FTs).
- INTRODUCTION
- [0003]
Machine learning can give computers the ability to “learn” a specific task without expressly programming the computer for that task. Examples of machine learning systems include deep learning neural networks. Such networks (and other forms of machine learning) can be used to, for example, help with automatically recognizing whether a cat is in a photograph. The learning takes place by using thousands or millions of photos to “train” the network (also called a model) to recognize when a cat is in a photograph. The training process can include, for example, determining weights for the model that achieve the indicated goal (e.g., identifying cats within a photo). The training process may include using a loss function in a way (e.g., via backpropagation) that seeks to train the model or neural network to minimize the loss represented by the function. Different loss functions include L1 (Least Absolute Deviations) and L2 (Least Square Errors) loss functions.
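As a quick illustration (mine, not from the patent), here is how the two loss functions mentioned above differ on a toy residual, using NumPy:

```python
import numpy as np

predicted = np.array([1.0, 2.0, 3.0])
target = np.array([1.5, 2.0, 1.0])
residual = predicted - target  # [-0.5, 0.0, 2.0]

l1_loss = np.sum(np.abs(residual))  # Least Absolute Deviations: 0.5 + 0.0 + 2.0 = 2.5
l2_loss = np.sum(residual ** 2)     # Least Square Errors: 0.25 + 0.0 + 4.0 = 4.25
```

Note how the L2 loss punishes the single large error (2.0) much more heavily than L1 does, which is one reason the two losses train models toward different behavior.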
- [0004]
It will be appreciated that new and improved techniques, systems, and processes are continually sought after in these areas of technology, such as technology that is used to train machine learned models or neural networks.
- SUMMARY
- [0005]
In some examples, a computer system for training a neural network that processes images is provided. In some examples, the system is used to train neural networks to upscale images from one resolution to another. The system may include computer storage that stores image data for a plurality of images. The system may be configured to generate, from the plurality of images, input image data and then apply the input image data to a neural network to generate predicted output image data. A difference between the predicted output image data and target image data is calculated, and that difference may then be transformed into frequency domain data. In some examples, the L1 loss is then computed on the frequency domain data calculated from the difference, and that loss is used to train the neural network via backpropagation (e.g., stochastic gradient descent). Using the L1 loss may encourage sparsity of the frequency domain data, which may also be referred to as, or be part of, Compressed Sensing. In contrast, using the L2 loss on the same frequency domain data may generally not produce good results due to, for example, Parseval's Theorem. In other words, the L2 loss of frequency-transformed data, using a Fourier Transform, is the same as the L2 loss of the data itself, i.e., L2(FFT(data)) = L2(data). Thus, frequency transforming the data when using an L2 loss may not produce different results.
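To see the Parseval argument concretely, here is a small NumPy sketch (my own illustration, not code from the patent): with an orthonormal FFT, the L2 norm of the prediction/target difference is unchanged by the transform, while the L1 norm of the frequency data is a genuinely different quantity, so only the L1 loss gains anything from going to the frequency domain.

```python
import numpy as np

rng = np.random.default_rng(0)
diff = rng.standard_normal(64)  # stand-in for (predicted - target) image difference

# norm="ortho" makes the FFT unitary, so Parseval holds exactly: ||FFT(x)||_2 == ||x||_2
freq = np.fft.fft(diff, norm="ortho")

l2_spatial = np.linalg.norm(diff)  # L2 norm of the raw difference
l2_freq = np.linalg.norm(freq)     # L2 norm of the frequency data (complex magnitudes)
l1_freq = np.sum(np.abs(freq))     # L1 norm of the frequency data, as in the claim

# Parseval's Theorem: the L2 norms match, so an L2 loss is blind to the FFT
assert np.isclose(l2_spatial, l2_freq)
# The L1 norm of the frequency data is not the same quantity, so it carries new signal
assert not np.isclose(l1_freq, l2_freq)
```

(With NumPy's default FFT normalization the L2 norms differ by a factor of sqrt(N) rather than being equal, which is why the sketch uses `norm="ortho"`; the "frequency transform changes nothing for L2" argument is the same either way, up to that constant scale.)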