Style Transfer with PyTorch
This course covers the key aspects of neural style transfer, a technique for transforming images, and introduces Generative Adversarial Networks (GANs) as an efficient way to create realistic images and videos.
What you'll learn
Style transfer refers to using a neural network to transform an image so that it comes to artistically resemble another image while still retaining its original content. Neural style transfer is fast becoming popular as a way to change the aesthetics of an image.

In this course, Style Transfer with PyTorch, you will gain the ability to use pre-trained convolutional neural networks (CNNs) that come out of the box in PyTorch for style transfer. First, you will learn how style transfer involves a style image and a content image, along with a pre-trained neural network that usually does not change at all during training. Next, you will discover how intermediate layers of the CNN are designated as style layers of interest and content layers of interest. Then, you will explore the minimization of two loss functions: a style loss and a content loss. Finally, you will delve into leveraging a new and much-hyped family of ML models, known as Generative Adversarial Networks (GANs), to create realistic images and videos.

When you're finished with this course, you will have the skills and knowledge to perform neural style transfer, producing images that combine the content and artistic style of two different inputs, and to use GANs to generate realistic images from noise.
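The training setup described above (a frozen pretrained network, a content loss, and a Gram-matrix style loss, with only the target image being optimized) can be sketched roughly as follows. This is a minimal illustration, not the course's exact code: a tiny untrained convolution stands in for the VGG19 feature extractor, and the image sizes, step count, and style weight are all illustrative.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # feat: (channels, height, width) feature map from one CNN layer.
    # The Gram matrix holds channel-by-channel dot products, capturing
    # which feature channels co-occur -- i.e. style information.
    c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

torch.manual_seed(0)

# Stand-in for the pretrained VGG19 feature extractor; frozen, as in the course.
conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
for p in conv.parameters():
    p.requires_grad_(False)

content = torch.rand(1, 3, 32, 32)  # illustrative random "content image"
style = torch.rand(1, 3, 32, 32)    # illustrative random "style image"

# Only the target image is trained; it starts as a copy of the content image.
target = content.clone().requires_grad_(True)
opt = torch.optim.Adam([target], lr=0.05)

for step in range(50):
    opt.zero_grad()
    t_feat = conv(target)[0]
    c_feat = conv(content)[0]
    s_feat = conv(style)[0]
    # Content loss: match raw feature maps; style loss: match Gram matrices.
    content_loss = F.mse_loss(t_feat, c_feat)
    style_loss = F.mse_loss(gram_matrix(t_feat), gram_matrix(s_feat))
    loss = content_loss + 1e3 * style_loss  # style weight is a tunable knob
    loss.backward()
    opt.step()
```

In a real run, the feature maps would come from several designated content and style layers of VGG19, with the per-layer style losses summed.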
Table of contents
- Version Check 0m
- Module Overview 1m
- Prerequisites and Course Outline 2m
- Content, Style, and Target Images 4m
- Training the Target Image for Style Transfer 7m
- Content Loss 4m
- Style Loss: Cosine Similarity and Dot Products 4m
- Style Loss: Gram Matrix 4m
- Setting up a Deep Learning Virtual Machine 4m
- Using Convolution Filters to Detect Features 8m
- Module Summary 1m
- Module Overview 1m
- Pretrained Models for Style Transfer 2m
- Loading the VGG19 Pretrained Model 3m
- Exploring and Transforming the Content and Style Images 6m
- Extracting Feature Maps from the Content and Style Images 4m
- Calculating the Gram Matrix to Extract Style Information 3m
- Training the Target Image to Perform Style Transfer 6m
- Style Transfer Using AlexNet 6m
- Module Summary 1m
- Module Overview 1m
- Understanding Generative Adversarial Networks (GANs) 7m
- Training a GAN 3m
- Understanding the Leaky ReLU Activation Function 6m
- Loading and Exploring the MNIST Handwritten Digit Images 4m
- Setting up the Generator and Discriminator Neural Networks 4m
- Training the Discriminator 4m
- Training the Generator and Generating Fake Images 3m
- Cleaning up Resources 1m
- Summary and Further Study 2m
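The GAN training loop covered in the final module (alternating discriminator and generator updates, with LeakyReLU activations) can be sketched roughly like this. All of it is illustrative, not the course's code: the fully connected layer sizes, latent dimension, and the random batch standing in for MNIST images are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, img_dim, batch = 16, 28 * 28, 64  # MNIST-sized flattened images

# Generator: noise vector -> fake image in [-1, 1].
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, img_dim), nn.Tanh())
# Discriminator: image -> real/fake logit.
D = nn.Sequential(nn.Linear(img_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(batch, img_dim) * 2 - 1  # stand-in for a real MNIST batch

for step in range(20):
    # Train the discriminator: label real images 1, generated images 0.
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()  # detach so this step does not update the generator
    d_loss = (bce(D(real), torch.ones(batch, 1)) +
              bce(D(fake), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator predict 1 for fakes.
    z = torch.randn(batch, latent_dim)
    g_loss = bce(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

After training on real data, sampling `G(torch.randn(n, latent_dim))` produces the generated images from noise described in the course summary.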