Managing Models Using MLflow on Databricks
This course will teach you how to manage the end-to-end lifecycle of your machine learning models using the MLflow managed service on Databricks.
What you'll learn
The machine learning workflow involves many intricate steps to ensure that the model you deploy to production is meaningful and robust. Managing this workflow manually is hard, which is why MLflow, a service that manages the machine learning workflow end-to-end, is a game changer. Databricks makes this even easier by offering a managed version of the service that is simple, intuitive, and easy to use.
In this course, Managing Models Using MLflow on Databricks, you will learn to create an MLflow experiment and use it to track the runs of your models.
First, you will see how you can use explicit logging to record model-related metrics and parameters, and how to view, sort, and compare runs in an experiment.
Next, you will see how you can use autologging to track all relevant parameters, metrics, and artifacts without having to write explicit logging code.
Then, you will see how you can use MLflow to productionize and serve your models: you will register models in the Model Registry and perform batch inference using a registered model.
After that, you will learn how to transition your model through lifecycle stages such as Staging, Production, and Archived.
Finally, you will see how you can work with custom models in MLflow. You will also learn how to package your model in a reusable format as an MLflow project and run training using that project hosted on GitHub or on the Databricks file system.
When you are finished with this course, you will have the skills and knowledge to use MLflow on Databricks to manage the entire lifecycle of your machine learning model.
Table of contents
- Prerequisites and Course Outline 3m
- Introducing the Databricks Machine Learning Runtime 6m
- Introducing MLflow Tracking with Experiments and Runs 6m
- Demo: Setting up the Machine Learning Environment on Databricks 4m
- Demo: Cleaning and Preprocessing Data 6m
- Demo: Creating an Experiment 2m
- Demo: Creating and Terminating Runs 4m
- Demo: Tracking Model Metrics in a Run 3m
- Demo: Tracking Model Parameters in a Run 2m
- Demo: Viewing and Sorting Runs 3m
- Demo: Comparing Runs in an Experiment 4m
- Demo: Autologging with MLflow 5m
- Demo: Searching and Sorting Runs 3m
- Demo: Programmatically Working with Experiments and Runs 4m
- The MLflow Model Registry and Model Serving 5m
- Demo: Logging Runs in a Notebook Experiment 4m
- Demo: Registering a Model Using the UI 3m
- Demo: Using a Loaded Model for Predictions 3m
- Demo: Performing Batch Inference 7m
- Demo: Transitioning the Model Stage 6m
- Demo: Deploying the Model to a REST Endpoint 3m
- Demo: Making Inferences Using Python 3m
- Demo: Making Inferences Using the Browser and Curl 2m
- Demo: Registering a New Version of an Existing Model 6m