In this article, you'll learn how to deploy an AutoML-trained machine learning model to an online (real-time inference) endpoint. Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of developing a machine learning model. For more, see What is automated machine learning (AutoML)?.
In this article, you learn how to deploy an AutoML-trained machine learning model to online endpoints using:
Deploy from Azure Machine Learning studio with no code
Deploying an AutoML-trained model from the Automated ML page is a no-code experience. That is, you don't need to prepare a scoring script or an environment; both are generated automatically.
Go to the Automated ML page in the studio
Select your experiment and run
Choose the Models tab
Select the model you want to deploy
Once you select a model, the Deploy button becomes available with a drop-down menu
Select the Deploy to real-time endpoint option
The system will generate the Model and Environment needed for the deployment.
Complete the wizard to deploy the model to an online endpoint
Deploy manually from the studio or command line
If you wish to have more control over the deployment, you can download the training artifacts and deploy them.
To download the components you'll need for deployment:
Go to your Automated ML experiment and run in your machine learning workspace
Choose the Models tab
Select the model you wish to use. Once you select a model, the Download button will become enabled
Choose Download
You'll receive a zip file containing:
A conda environment specification file named conda_env_<VERSION>.yml
A Python scoring file named scoring_file_<VERSION>.py
The model itself, in a Python .pkl file named model.pkl
To deploy using these files, you can use either the studio or the Azure CLI.
Go to the Models page in Azure Machine Learning studio
Select + Register Model option
Register the model you downloaded from Automated ML run
Go to the Environments page, select Custom environment, and select the + Create option to create an environment for your deployment. Use the downloaded conda YAML file to create a custom environment
Select the model, and from the Deploy drop-down option, select Deploy to real-time endpoint
Complete all the steps in wizard to create an online endpoint and deployment
To create a deployment from the CLI, you'll need the Azure CLI with the ML v2 extension. Run the following command to confirm that you have both:
```azurecli
az version
```
If you receive an error message or you don't see Extensions: ml in the response, follow the steps at Install and set up the CLI (v2).
Sign in:
```azurecli
az login
```
If you have access to multiple Azure subscriptions, you can set your active subscription:
```azurecli
az account set -s "<YOUR_SUBSCRIPTION_NAME_OR_ID>"
```
Set the default resource group and workspace to where you wish to create the deployment:
```azurecli
az configure --defaults group=$GROUP workspace=$WORKSPACE location=$LOCATION
```
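The defaults take ordinary shell variables; a minimal sketch with hypothetical values (substitute your own resource group, workspace, and region):

```azurecli
GROUP="my-resource-group"
WORKSPACE="my-workspace"
LOCATION="eastus"
az configure --defaults group=$GROUP workspace=$WORKSPACE location=$LOCATION
```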
Put the scoring file in its own directory
Create a directory called src/ and place the scoring file you downloaded into it. This directory is uploaded to Azure and contains all the source code necessary to do inference. For an AutoML model, there's just the single scoring file.
Create the endpoint and deployment yaml file
To create an online endpoint from the command line, you need an endpoint.yml and a deployment.yml file. The following code, taken from the endpoints/online/managed/sample/ directory of the Azure Machine Learning Examples repo, captures the required inputs:
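A minimal sketch of the two files, based on the managed online endpoint and deployment YAML schemas; the names, paths, image, and instance size shown here are placeholder values:

```yaml
# automl_endpoint.yml (endpoint definition)
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: my-endpoint
auth_mode: key
```

```yaml
# automl_deployment.yml (deployment definition)
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint
model:
  path: ../model-1/model/
code_configuration:
  code: ../model-1/onlinescoring/
  scoring_script: score.py
environment:
  conda_file: ../model-1/environment/conda.yml
  image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest
instance_type: Standard_DS3_v2
instance_count: 1
```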
You'll need to modify this file to use the files you downloaded from the AutoML Models page.
Create files named automl_endpoint.yml and automl_deployment.yml, and paste the contents of the above example into them.
Change the value of name for the endpoint. The endpoint name must be unique within the Azure region; it must start with an upper- or lowercase letter and can otherwise contain only hyphens and alphanumeric characters.
In the automl_deployment.yml file, change the values of the keys at the following paths:
| Path | Change to |
| --- | --- |
| `model:path` | The path to the model.pkl file you downloaded. |
| `code_configuration:code:path` | The directory in which you placed the scoring file. |
| `code_configuration:scoring_script` | The name of the Python scoring file (scoring_file_<VERSION>.py). |
| `environment:conda_file` | A file URL for the downloaded conda environment file (conda_env_<VERSION>.yml). |
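With the YAML files edited, the endpoint and then the deployment can be created from the CLI. A sketch assuming the file names above; the --all-traffic flag routes all endpoint traffic to this deployment:

```azurecli
az ml online-endpoint create -f automl_endpoint.yml
az ml online-deployment create -f automl_deployment.yml --all-traffic
```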
Create a directory called src/ and place the scoring file you downloaded into it. This directory is uploaded to Azure and contains all the source code necessary to do inference. For an AutoML model, there's just the single scoring file.
Connect to Azure Machine Learning workspace
Import the required libraries:
```python
# import required libraries
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    ManagedOnlineEndpoint,
    ManagedOnlineDeployment,
    Model,
    Environment,
    CodeConfiguration,
)
from azure.identity import DefaultAzureCredential
```
Configure workspace details and get a handle to the workspace:
```python
# enter details of your Azure Machine Learning workspace
subscription_id = "<SUBSCRIPTION_ID>"
resource_group = "<RESOURCE_GROUP>"
workspace = "<AZUREML_WORKSPACE_NAME>"

# get a handle to the workspace
ml_client = MLClient(
    DefaultAzureCredential(), subscription_id, resource_group, workspace
)
```
Create the endpoint and deployment
Next, we'll create the managed online endpoint and deployment.
Configure online endpoint:
Tip
name: The name of the endpoint. It must be unique in the Azure region. The name for an endpoint must start with an upper- or lowercase letter and only consist of '-'s and alphanumeric characters. For more information on the naming rules, see managed online endpoint limits.
auth_mode: Use key for key-based authentication. Use aml_token for Azure Machine Learning token-based authentication. A key doesn't expire, but aml_token does expire. For more information on authenticating, see Authenticate to an online endpoint.
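The naming rule above can be expressed as a simple regular-expression check; a sketch using a hypothetical helper (is_valid_endpoint_name is not part of the SDK):

```python
import re

# Endpoint names must start with a letter and otherwise contain only
# letters, digits, and hyphens. is_valid_endpoint_name is a hypothetical
# helper for illustration, not an SDK function.
def is_valid_endpoint_name(name: str) -> bool:
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9-]*", name) is not None

print(is_valid_endpoint_name("endpoint-07151230"))  # True
print(is_valid_endpoint_name("1-bad-name"))         # False (starts with a digit)
```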
```python
# Creating a unique endpoint name with current datetime to avoid conflicts
import datetime

online_endpoint_name = "endpoint-" + datetime.datetime.now().strftime("%m%d%H%M%f")

# create an online endpoint
endpoint = ManagedOnlineEndpoint(
    name=online_endpoint_name,
    description="this is a sample online endpoint",
    auth_mode="key",
)
```
Create the endpoint:
Using the MLClient created earlier, we'll now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
```python
ml_client.begin_create_or_update(endpoint)
```
Configure online deployment:
A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the ManagedOnlineDeployment class.
In the above example, we assume the files you downloaded from the AutoML Models page are in the src directory. You can modify the parameters in the code to suit your situation.
| Parameter | Change to |
| --- | --- |
| `model:path` | The path to the model.pkl file you downloaded. |
| `code_configuration:code:path` | The directory in which you placed the scoring file. |
| `code_configuration:scoring_script` | The name of the Python scoring file (scoring_file_<VERSION>.py). |
| `environment:conda_file` | A file URL for the downloaded conda environment file (conda_env_<VERSION>.yml). |
Create the deployment:
Using the MLClient created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
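The call mirrors endpoint creation; a sketch assuming the ml_client and deployment objects configured in the previous steps:

```python
# start the deployment creation; the operation continues in the background
ml_client.begin_create_or_update(deployment)
```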