Create a Real-time Inferencing Service with Azure ML
In this lab, you’ll create a real-time inference service with Azure ML. When you’re finished, you’ll have hands-on experience creating a compute instance and training, deploying, and testing real-time endpoints with the Azure Machine Learning SDK.
Challenge
Train and Register a Machine Learning Model
You will create a compute instance to run your Jupyter notebooks in an Azure Machine Learning workspace, then clone a GitHub repository that will help you train and register a machine learning model to predict whether a flight will be delayed. A sketch of the registration step is shown below.
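As a minimal sketch of the registration step, the snippet below assumes the notebook has already trained a model and saved it locally; the file path and model name (`outputs/flight_delay_model.pkl`, `flight-delay-model`) are placeholders rather than values from the lab repository.

```python
from azureml.core import Workspace
from azureml.core.model import Model

# Connect to the workspace described by config.json on the compute instance
ws = Workspace.from_config()

# Register the trained model file so it can be deployed later;
# the path, name, and description below are placeholder values
model = Model.register(workspace=ws,
                       model_path="outputs/flight_delay_model.pkl",
                       model_name="flight-delay-model",
                       description="Predicts whether a flight will be delayed")
print(model.name, model.version)
```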
Challenge
Create an Entry Script and Execution Environment
You will learn to set up an inference configuration, which includes the scoring script and environment files required to load and use the registered model; see the sketch below.
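A minimal sketch of that configuration follows, assuming a scoring script named `score.py` and a conda specification named `environment.yml` (both filenames are placeholders). The entry script implements the `init` and `run` functions that Azure ML calls when the service starts and on each request.

```python
# score.py (placeholder name): entry script loaded by the web service
import json
import joblib
import numpy as np
from azureml.core.model import Model

def init():
    # Called once when the service starts: load the registered model
    global model
    model_path = Model.get_model_path("flight-delay-model")
    model = joblib.load(model_path)

def run(raw_data):
    # Called for each request: parse the JSON payload and return predictions
    data = np.array(json.loads(raw_data)["data"])
    return model.predict(data).tolist()
```

The scoring script and environment are then combined into an inference configuration:

```python
from azureml.core import Environment
from azureml.core.model import InferenceConfig

env = Environment.from_conda_specification(name="flight-delay-env",
                                           file_path="environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)
```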
Challenge
Deploy Model to a Web Service Hosted on Azure Container Instance (ACI)
You will deploy the registered model as a web service so that an application can simply make an HTTP request, sending the flight data in JSON (or binary) format, and receive back the predicted class(es).
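One possible deployment sketch, reusing the `ws`, `model`, and `inference_config` objects from the earlier steps; the service name `flight-delay-service` and the CPU/memory sizing are placeholder choices for a dev/test deployment.

```python
from azureml.core.model import Model
from azureml.core.webservice import AciWebservice

# ACI deployment configuration: small CPU/memory footprint for dev/test
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(workspace=ws,
                       name="flight-delay-service",
                       models=[model],
                       inference_config=inference_config,
                       deployment_config=deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)  # HTTP endpoint that applications will call
```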
Challenge
Consume the Real-time Endpoint
You will learn to consume the deployed web service by sending requests to the endpoint to make real-time predictions.
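A request might look like the sketch below, using the scoring URI from the deployed service and a hypothetical flight record; the real feature names and order depend on how the model was trained in the earlier challenge.

```python
import json
import requests

# Endpoint returned by the deployment step (service.scoring_uri)
scoring_uri = service.scoring_uri

# Hypothetical flight record; the actual features come from the training data
payload = json.dumps({"data": [[6, 13, 3, 1950, 8, 10397, 13487]]})
headers = {"Content-Type": "application/json"}

response = requests.post(scoring_uri, data=payload, headers=headers)
print(response.json())  # predicted class(es) for the submitted flight
```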
Provided environment for hands-on practice
We will provide the credentials and environment necessary for you to practice right within your browser.
Guided walkthrough
Follow along with the author’s guided walkthrough and build something new in your provided environment!
Did you know?
On average, you retain 75% more of what you learn when you get time for hands-on practice.