Deploy MLflow models to online endpoints

APPLIES TO: Azure CLI ml extension v2 (current)

In this article, learn how to deploy your MLflow model to an online endpoint for real-time inference. When you deploy your MLflow model to an online endpoint, it's a no-code-deployment, so you don't have to provide a scoring script or an environment.

You only provide the typical MLflow model folder contents:

  • MLmodel file
  • conda.yaml
  • model file(s)
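
For reference, the MLmodel file describes how MLflow should load and serve the model. A minimal MLmodel file for a scikit-learn model might look similar to the following sketch; the exact contents depend on how the model was logged:

artifact_path: model
flavors:
  python_function:
    env: conda.yaml
    loader_module: mlflow.sklearn
    model_path: model.pkl
    python_version: 3.7.10
  sklearn:
    pickled_model: model.pkl
    serialization_format: cloudpickle
    sklearn_version: 0.24.1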

For no-code-deployment, Azure Machine Learning:

  • Dynamically installs the Python packages provided in the conda.yaml file. This means the dependencies are installed during container runtime (a sample conda.yaml is shown after this list).
    • The base container image/curated environment used for dynamic installation is mcr.microsoft.com/azureml/mlflow-ubuntu18.04-py37-cpu-inference or AzureML-mlflow-ubuntu18.04-py37-cpu-inference.
  • Provides an MLflow base image/curated environment that contains the packages required to serve the model.
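
As an illustration, the conda.yaml that gets installed dynamically might look similar to the following sketch for a scikit-learn model. The packages and versions shown here are examples, not requirements:

channels:
  - conda-forge
dependencies:
  - python=3.7.10
  - pip
  - pip:
      - mlflow
      - scikit-learn==0.24.1
name: mlflow-env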

Prerequisites

Before following the steps in this article, make sure you have the necessary prerequisites in place.

The information in this article is based on code samples contained in the azureml-examples repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the cli directory in the repo:

git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples
cd cli

If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, use the following commands. Replace the following parameters with values for your specific configuration:

  • Replace <subscription> with your Azure subscription ID.
  • Replace <workspace> with your Azure Machine Learning workspace name.
  • Replace <resource-group> with the Azure resource group that contains your workspace.
  • Replace <location> with the Azure region that contains your workspace.

Tip

You can see what your current defaults are by using the az configure -l command.

az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location>

The code snippets in this article use the ENDPOINT_NAME environment variable to hold the name of the endpoint to create and use. To set it, use the following command from the CLI. Replace <YOUR_ENDPOINT_NAME> with the name of your endpoint:

export ENDPOINT_NAME="<YOUR_ENDPOINT_NAME>"

Deploy using CLI (v2)

APPLIES TO: Azure CLI ml extension v2 (current)

This example shows how you can deploy an MLflow model to an online endpoint using CLI (v2).

Important

For MLflow no-code-deployment, testing via local endpoints is currently not supported.

  1. Create a YAML configuration file for your endpoint. The following example configures the name and authentication mode of the endpoint:

    create-endpoint.yaml

    $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
    name: my-endpoint
    auth_mode: key
    
  2. To create a new endpoint using the YAML configuration, use the following command:

    az ml online-endpoint create --name $ENDPOINT_NAME -f endpoints/online/mlflow/create-endpoint.yaml
    
  3. Create a YAML configuration file for the deployment. The following example configures a deployment of the sklearn-diabetes model to the endpoint created in the previous step:

    Important

For MLflow no-code-deployment (NCD) to work, setting type to mlflow_model is required (type: mlflow_model). For more information, see CLI (v2) model YAML schema.

    sklearn-deployment.yaml

    $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
    name: sklearn-deployment
    endpoint_name: my-endpoint
    model:
      name: mir-sample-sklearn-mlflow-model
      version: 1
      path: sklearn-diabetes/model
      type: mlflow_model
    instance_type: Standard_DS2_v2
    instance_count: 1
    
  4. To create the deployment using the YAML configuration, use the following command:

    az ml online-deployment create --name sklearn-deployment --endpoint $ENDPOINT_NAME -f endpoints/online/mlflow/sklearn-deployment.yaml --all-traffic
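
    To confirm that the deployment succeeded before routing traffic to it, you can query its provisioning state. A sketch, using the endpoint and deployment names from the previous steps:

    az ml online-deployment show --name sklearn-deployment --endpoint-name $ENDPOINT_NAME --query provisioning_state -o tsv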
    

Invoke the endpoint

Once your deployment completes, use the following command to make a scoring request to the deployed endpoint. The sample-request-sklearn.json file used in this command is located in the /cli/endpoints/online/mlflow directory of the azureml-examples repo:

az ml online-endpoint invoke --name $ENDPOINT_NAME --request-file endpoints/online/mlflow/sample-request-sklearn.json

sample-request-sklearn.json

{"input_data": {
    "columns": [
      "age",
      "sex",
      "bmi",
      "bp",
      "s1",
      "s2",
      "s3",
      "s4",
      "s5",
      "s6"
    ],
    "data": [
      [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0 ],
      [ 10.0, 2.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0 ]
    ],
    "index": [0,1]
  }}

The response will be similar to the following text:

[ 
  11633.100167144921,
  8522.117402884991
]
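
If you prefer to call the scoring endpoint over REST directly, you can retrieve the scoring URI and an authentication key with the CLI and then use a tool such as curl. A minimal sketch, assuming the key-based authentication configured earlier:

SCORING_URI=$(az ml online-endpoint show --name $ENDPOINT_NAME --query scoring_uri -o tsv)
PRIMARY_KEY=$(az ml online-endpoint get-credentials --name $ENDPOINT_NAME --query primaryKey -o tsv)

curl --request POST "$SCORING_URI" \
    --header "Authorization: Bearer $PRIMARY_KEY" \
    --header "Content-Type: application/json" \
    --data @endpoints/online/mlflow/sample-request-sklearn.json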

Delete endpoint

Once you're done with the endpoint, use the following command to delete it:

az ml online-endpoint delete --name $ENDPOINT_NAME --yes

Deploy using Azure Machine Learning studio

This example shows how you can deploy an MLflow model to an online endpoint using Azure Machine Learning studio.

  1. Models need to be registered in the Azure Machine Learning workspace to be deployed. Deployment of unregistered models isn't supported. To register a model, open the Models page in Azure Machine Learning studio. Select Register model, select where your model is located, fill out the required fields, and then select Register.

    Screenshot of the UI to register a model.

  2. To create an endpoint deployment, use either the endpoints or models page:

    1. From the Endpoints page, select +Create.

      Screenshot showing create option on the Endpoints UI page.

    2. Provide a name and authentication type for the endpoint, and then select Next.

    3. In the model selection step, select the MLflow model you registered previously, and then select Next to continue.

    4. Because you selected a model registered in MLflow format, the Environment step of the wizard doesn't require a scoring script or an environment.

      Screenshot showing that no code or environment is needed for MLflow models.

    5. Complete the wizard to deploy the model to the endpoint.

      Screenshot showing the NCD review screen.

Deploy models after a training job

This section helps you understand how to deploy models to an online endpoint once you've completed your training job. Models logged in a run are stored as artifacts. If you have used mlflow.autolog() in your training script, you'll see model artifacts generated in the job's output. You can use mlflow.autolog() for several common ML frameworks to log model parameters, performance metrics, model artifacts, and even feature importance graphs.
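
For example, a minimal training script that relies on autologging might look like the following sketch (scikit-learn is one of the supported frameworks; the dataset and model here are illustrative):

import mlflow
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

# Enable autologging before training; MLflow then records parameters,
# metrics, and the trained model as artifacts of the run.
mlflow.autolog()

X, y = load_diabetes(return_X_y=True)
model = Ridge().fit(X, y)
# When this script runs as an Azure Machine Learning job, the logged
# model folder appears in the job's outputs.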

For more information, see Train models. Also see the training job samples in the GitHub repository.

  1. Models need to be registered in the Azure Machine Learning workspace to be deployed. Deployment of unregistered models isn't supported. You can register the model directly from the job's output using the Azure ML CLI (v2), the Azure ML SDK for Python (v2), or Azure Machine Learning studio.

    Tip

    To register the model, you need to know where the model is stored. If you're using the autolog feature of MLflow, the path depends on the type and framework of the model being used. We recommend checking the job's output to identify the name of this folder: look for the folder that contains a file named MLmodel. If you log your models manually using log_model, the path is the argument you pass to that method. For example, if you log the model using mlflow.sklearn.log_model(my_model, "classifier"), the model is stored in a folder named classifier, as the sketch below illustrates.

    Screenshot showing how to download outputs and logs from an experiment run.
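
    For example, a minimal sketch of logging a model manually with an explicit artifact path (the model and data here are illustrative):

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_diabetes
    from sklearn.linear_model import Ridge

    X, y = load_diabetes(return_X_y=True)
    my_model = Ridge().fit(X, y)

    with mlflow.start_run():
        # "classifier" is the artifact path: the folder in the job's
        # outputs that you later register the model from.
        mlflow.sklearn.log_model(my_model, "classifier")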

  2. To deploy the registered model, you can use either studio or the Azure command-line interface. Use the model folder from the job's outputs for deployment.
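
    For example, with the CLI (v2) you can register the model directly from the job's output. A sketch, where $JOB_NAME is the name of the completed training job and model is the folder identified in the previous step:

    az ml model create --name my-mlflow-model --version 1 \
        --type mlflow_model \
        --path azureml://jobs/$JOB_NAME/outputs/artifacts/paths/model/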

Next steps

To learn more, review these articles: