Foundations of PyTorch
This course covers the foundations of building deep learning models in PyTorch: how neurons and neural networks fit together, how PyTorch applies differential calculus to train such models, and how it builds dynamic computation graphs for deep learning.
What you'll learn
PyTorch is fast emerging as a popular choice for building deep learning models owing to its flexibility, ease of use, and built-in support for accelerated hardware such as GPUs. Using PyTorch, you can build complex deep learning models while still using Python-native tooling for debugging and visualization. In this course, Foundations of PyTorch, you will gain the ability to leverage PyTorch's support for dynamic computation graphs and contrast it with that of other popular frameworks such as TensorFlow. First, you will learn the internals of neurons and neural networks and see how activation functions, affine transformations, and layers come together inside a deep learning model. Next, you will discover how such a model is trained, that is, how the best values of its parameters are estimated. You will then see how gradient descent optimization is implemented to make this process efficient. You will learn the different types of differentiation that could be used in this process and how PyTorch uses Autograd to implement reverse-mode automatic differentiation. You will work with different PyTorch constructs such as Tensors, Variables, and Gradients. Finally, you will explore how to build dynamic computation graphs in PyTorch. You will round out the course by contrasting this with the approach used in TensorFlow, another leading deep learning framework, which previously offered only static computation graphs but has recently added support for dynamic ones via eager execution. When you're finished with this course, you will have the skills and knowledge to move on to building deep learning models in PyTorch and harness the power of dynamic computation graphs.
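As a taste of the Autograd-based training described above, here is a minimal, hypothetical sketch (not taken from the course materials) that fits a linear model y = w·x + b by gradient descent; the data, learning rate, and iteration count are illustrative choices:

```python
import torch

# Illustrative data generated from known parameters w=2, b=1.
x = torch.tensor([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# Parameters tracked by Autograd.
w = torch.tensor(0.0, requires_grad=True)
b = torch.tensor(0.0, requires_grad=True)

lr = 0.05  # illustrative learning rate
for _ in range(2000):
    y_pred = w * x + b                 # forward pass builds the graph
    loss = ((y_pred - y) ** 2).mean()  # mean squared error
    loss.backward()                    # backward pass: reverse-mode autodiff
    with torch.no_grad():              # update parameters outside the graph
        w -= lr * w.grad
        b -= lr * b.grad
        w.grad.zero_()                 # gradients accumulate, so reset them
        b.grad.zero_()

# After training, w approaches 2 and b approaches 1.
```

Note the `w.grad.zero_()` calls: PyTorch accumulates gradients across `backward()` calls by default, so each iteration must clear them before the next backward pass.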
Table of contents
- Module Overview 2m
- Demo: Creating and Initializing Tensors 9m
- Demo: Simple Operations on Tensors 7m
- Demo: Elementwise and Matrix Operations on Tensors 5m
- Demo: Converting between PyTorch Tensors and NumPy Arrays 5m
- PyTorch Support for CUDA Devices 6m
- Demo: Setting up a Deep Learning VM to Work with GPUs 6m
- Demo: Creating Tensors on CUDA-enabled Devices 4m
- Demo: Working with the Device Context Manager 5m
- Summary 1m
- Module Overview 1m
- Gradient Descent Optimization 4m
- Forward and Backward Passes 3m
- Calculating Gradients 5m
- Using Gradients to Update Model Parameters 4m
- Two Passes in Reverse Mode Automatic Differentiation 4m
- Demo: Introducing Autograd 7m
- Demo: Working with Gradients 5m
- Demo: Variables and Tensors 3m
- Demo: Training a Linear Model Using Autograd 9m
- Summary 2m
- Module Overview 1m
- Static vs. Dynamic Computation Graphs 7m
- Dynamic Computation Graphs in PyTorch 1m
- Demo: Installing TensorFlow, Graphviz, and HiddenLayer 1m
- Demo: Building Dynamic Computation Graphs with PyTorch 3m
- Demo: Visualizing Neural Networks in PyTorch Using HiddenLayer 4m
- Demo: Building Static Computation Graphs with TensorFlow 7m
- Demo: Visualizing TensorFlow Graphs with TensorBoard 3m
- Demo: Dynamic Computation Graphs in TensorFlow with Eager Execution 4m
- Debugging in PyTorch and TensorFlow 2m
- Summary and Further Study 2m
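To illustrate the dynamic-computation-graph idea contrasted in the final module, here is a small hypothetical sketch (not from the course itself): ordinary Python control flow decides at runtime how many operations the graph contains, and Autograd differentiates through whichever graph was actually built.

```python
import torch

def power(x, n):
    """Compute x**n with a Python loop; the graph is built on the fly,
    so its depth depends on the runtime value of n."""
    out = x.new_ones(())       # scalar 1 with the same dtype/device as x
    for _ in range(n):         # Python control flow shapes the graph
        out = out * x
    return out

x = torch.tensor(3.0, requires_grad=True)
y = power(x, 4)                # dynamically builds a graph computing x**4
y.backward()                   # d(x**4)/dx = 4 * x**3 = 108 at x = 3
```

A static-graph framework would require the graph (including the loop) to be defined up front before any data flows through it; here the graph simply is whatever the forward pass executed.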