Deploy models for scoring in batch endpoints

APPLIES TO: Azure CLI ml extension v2 (current) Python SDK azure-ai-ml v2 (current)

Batch endpoints provide a convenient way to deploy models that run inference over large volumes of data. These endpoints simplify the process of hosting your models for batch scoring, so that your focus is on machine learning, rather than the infrastructure.

Use batch endpoints for model deployment when:

  • You have expensive models that require a longer time to run inference.
  • You need to perform inference over large amounts of data that is distributed in multiple files.
  • You don't have low latency requirements.
  • You can take advantage of parallelization.

In this article, you use a batch endpoint to deploy a machine learning model that solves the classic MNIST (Modified National Institute of Standards and Technology) digit recognition problem. Your deployed model then performs batch inferencing over large amounts of data, in this case, image files. You begin by creating a batch deployment of a model that was created using Torch. This deployment becomes the default one in the endpoint. Later, you create a second deployment of a model that was created with TensorFlow (Keras), test the second deployment, and then set it as the endpoint's default deployment.

To follow along with the code samples and files needed to run the commands in this article locally, see the Clone the examples repository section. The code samples and files are contained in the azureml-examples repository.

Prerequisites

Before you follow the steps in this article, make sure you have the following prerequisites:

Clone the examples repository

The example in this article is based on code samples contained in the azureml-examples repository. To run the commands locally without having to copy/paste YAML and other files, first clone the repo and then change directories to the folder:

!git clone https://github.com/Azure/azureml-examples --depth 1
%cd azureml-examples/sdk/python/endpoints/batch/deploy-models/mnist-classifier

To follow along with this example in a Jupyter Notebook, in the cloned repository, open the notebook: mnist-batch.ipynb.

Prepare your system

Connect to your workspace

The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, you connect to the workspace in which you'll perform deployment tasks.

  1. Import the required libraries:

    from azure.ai.ml import MLClient, Input, load_component
    from azure.ai.ml.entities import BatchEndpoint, ModelBatchDeployment, ModelBatchDeploymentSettings, PipelineComponentBatchDeployment, Model, AmlCompute, Data, BatchRetrySettings, CodeConfiguration, Environment
    from azure.ai.ml.constants import AssetTypes, BatchDeploymentOutputAction
    from azure.ai.ml.dsl import pipeline
    from azure.identity import DefaultAzureCredential
    

    Note

    Classes ModelBatchDeployment and PipelineComponentBatchDeployment were introduced in version 1.7.0 of the SDK.

  2. Configure workspace details and get a handle to the workspace:

    subscription_id = "<subscription>"
    resource_group = "<resource-group>"
    workspace = "<workspace>"
    
    ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
    

Create compute

Batch endpoints run on compute clusters and support both Azure Machine Learning compute clusters (AmlCompute) and Kubernetes clusters. Because clusters are a shared resource, one cluster can host one or many batch deployments (along with other workloads, if desired).

Create a compute cluster named batch-cluster, as shown in the following code. Adjust the configuration as needed, and reference your compute by using azureml:<your-compute-name>.

compute_name = "batch-cluster"
if not any(filter(lambda m: m.name == compute_name, ml_client.compute.list())):
    compute_cluster = AmlCompute(
        name=compute_name,
        description="CPU cluster compute",
        min_instances=0,
        max_instances=2,
    )
    ml_client.compute.begin_create_or_update(compute_cluster).result()

Note

You're not charged for the compute at this point, as the cluster remains at 0 nodes until a batch endpoint is invoked and a batch scoring job is submitted. For more information about compute costs, see Manage and optimize cost for AmlCompute.

Create a batch endpoint

A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch scoring job. A batch scoring job is a job that scores multiple inputs. A batch deployment is a set of compute resources hosting the model that does the actual batch scoring (or batch inferencing). One batch endpoint can have multiple batch deployments. For more information on batch endpoints, see What are batch endpoints?.

Tip

One of the batch deployments serves as the default deployment for the endpoint. When the endpoint is invoked, the default deployment does the actual batch scoring. For more information on batch endpoints and deployments, see batch endpoints and batch deployment.

  1. Name the endpoint. The endpoint's name must be unique within an Azure region, since the name is included in the endpoint's URI. For example, there can be only one batch endpoint with the name mybatchendpoint in westus2.

    Place the endpoint's name in a variable so you can easily reference it later.

    endpoint_name = "mnist-batch"
  2. Configure the batch endpoint:

    endpoint = BatchEndpoint(
        name=endpoint_name,
        description="A batch endpoint for scoring images from the MNIST dataset.",
        tags={"type": "deep-learning"},
    )

    The following table describes the key properties of the endpoint. For more information on batch endpoint definition, see BatchEndpoint Class.

    Key Description
    name The name of the batch endpoint. Needs to be unique at the Azure region level.
    description The description of the batch endpoint. This property is optional.
    tags The tags to include in the endpoint. This property is optional.
  3. Create the endpoint:

    ml_client.begin_create_or_update(endpoint).result()
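Because the endpoint's name must be unique within the Azure region, rerunning this example with a fixed name can collide with an existing endpoint. As a sketch (not part of the original sample), you can generate a unique name by appending a short random suffix:

```python
import random
import string


def unique_endpoint_name(base: str = "mnist-batch") -> str:
    # Endpoint names must be unique per Azure region, so a short random
    # suffix avoids collisions when you re-run the sample.
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=5))
    return f"{base}-{suffix}"


endpoint_name = unique_endpoint_name()
print(endpoint_name)  # for example: mnist-batch-a3f9k
```

The helper name is hypothetical; any scheme that yields a region-unique name works.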

Create a batch deployment

A model deployment is a set of resources required for hosting the model that does the actual inferencing. To create a batch model deployment, you need the following items:

  • A registered model in the workspace
  • The code to score the model
  • An environment with the model's dependencies installed
  • The pre-created compute and resource settings
  1. Begin by registering the model to be deployed: a Torch model for the popular digit recognition problem (MNIST). Batch deployments can only deploy models that are registered in the workspace. You can skip this step if the model you want to deploy is already registered.

    Tip

    Models are associated with the deployment, rather than with the endpoint. This means that a single endpoint can serve different models (or model versions), provided that they're deployed in different deployments.

    model_name = "mnist-classifier-torch"
    model_local_path = "deployment-torch/model/"
    
    model = ml_client.models.create_or_update(
        Model(
            name=model_name,
            path=model_local_path,
            type=AssetTypes.CUSTOM_MODEL,
            tags={"task": "classification", "framework": "torch"},
        )
    )
  2. Now it's time to create a scoring script. Batch deployments require a scoring script that indicates how a given model should be executed and how input data must be processed. Batch endpoints support scripts created in Python. In this case, you deploy a model that reads image files representing digits and outputs the corresponding digit. The scoring script is as follows:

    Note

    For MLflow models, Azure Machine Learning automatically generates the scoring script, so you're not required to provide one. If your model is an MLflow model, you can skip this step. For more information about how batch endpoints work with MLflow models, see the article Using MLflow models in batch deployments.

    Warning

    If you're deploying an Automated machine learning (AutoML) model under a batch endpoint, note that the scoring script that AutoML provides only works for online endpoints and is not designed for batch execution. For information on how to create a scoring script for your batch deployment, see Author scoring scripts for batch deployments.

    deployment-torch/code/batch_driver.py

    import os
    import pandas as pd
    import torch
    import torchvision
    import glob
    from os.path import basename
    from mnist_classifier import MnistClassifier
    from typing import List
    
    
    def init():
        global model
        global device
    
        # AZUREML_MODEL_DIR is an environment variable created during deployment
        # It is the path to the model folder
        model_path = os.environ["AZUREML_MODEL_DIR"]
        model_file = glob.glob(f"{model_path}/*/*.pt")[-1]
    
        device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    
        model = MnistClassifier()
        model.load_state_dict(torch.load(model_file, map_location=device))
        model.to(device)
        model.eval()
    
    
    def run(mini_batch: List[str]) -> pd.DataFrame:
        print(f"Executing run method over batch of {len(mini_batch)} files.")
    
        results = []
        with torch.no_grad():
            for image_path in mini_batch:
                image_data = torchvision.io.read_image(image_path).float()
                batch_data = image_data.expand(1, -1, -1, -1)
                input_tensor = batch_data.to(device)
    
                # perform inference
                predict_logits = model(input_tensor)
    
                # Compute probabilities, classes, and labels
                predictions = torch.nn.Softmax(dim=-1)(predict_logits)
                predicted_prob, predicted_class = torch.max(predictions, axis=-1)
    
                results.append(
                    {
                        "file": basename(image_path),
                        "class": predicted_class.cpu().numpy()[0],
                        "probability": predicted_prob.cpu().numpy()[0],
                    }
                )
    
        return pd.DataFrame(results)
    
  3. Create an environment where your batch deployment will run. The environment should include the packages azureml-core and azureml-dataset-runtime[fuse], which are required by batch endpoints, plus any dependency your code requires for running. In this case, the dependencies have been captured in a conda.yaml file:

    deployment-torch/environment/conda.yaml

    name: mnist-env
    channels:
      - conda-forge
    dependencies:
      - python=3.8.5
      - pip<22.0
      - pip:
        - torch==1.13.0
        - torchvision==0.14.0
        - pytorch-lightning
        - pandas
        - azureml-core
        - azureml-dataset-runtime[fuse]
    

    Important

    The packages azureml-core and azureml-dataset-runtime[fuse] are required by batch deployments and should be included in the environment dependencies.

    Specify the environment as follows:

    env = Environment(
        name="batch-torch-py38",
        conda_file="deployment-torch/environment/conda.yaml",
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
    )

    Warning

    Curated environments are not supported in batch deployments. You need to specify your own environment. You can use the base image of a curated environment as your own base image to simplify the process.

  4. Create a deployment definition

    deployment = ModelBatchDeployment(
        name="mnist-torch-dpl",
        description="A deployment using Torch to solve the MNIST classification dataset.",
        endpoint_name=endpoint_name,
        model=model,
        code_configuration=CodeConfiguration(
            code="deployment-torch/code/", scoring_script="batch_driver.py"
        ),
        environment=env,
        compute=compute_name,
        settings=ModelBatchDeploymentSettings(
            max_concurrency_per_instance=2,
            mini_batch_size=10,
            instance_count=2,
            output_action=BatchDeploymentOutputAction.APPEND_ROW,
            output_file_name="predictions.csv",
            retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
            logging_level="info",
        ),
    )

    The ModelBatchDeployment class allows you to configure the following key properties of a batch deployment:

    Key Description
    name Name of the deployment.
    endpoint_name Name of the endpoint to create the deployment under.
    model The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
    environment The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification (optional for MLflow models).
    code_configuration The configuration about how to run inference for the model (optional for MLflow models).
    code_configuration.code Path to the source code directory for scoring the model.
    code_configuration.scoring_script Relative path to the scoring file in the source code directory.
    compute Name of the compute target on which to execute the batch scoring jobs.
    instance_count The number of nodes to use for each batch scoring job.
    settings The model deployment inference configuration.
    settings.max_concurrency_per_instance The maximum number of parallel scoring_script runs per instance.
    settings.mini_batch_size The number of files the code_configuration.scoring_script can process in one run() call.
    settings.retry_settings Retry settings for scoring each mini batch.
    settings.retry_settings.max_retries The maximum number of retries for a failed or timed-out mini batch (default is 3).
    settings.retry_settings.timeout The timeout in seconds for scoring a mini batch (default is 30).
    settings.output_action How the output should be organized in the output file. Allowed values are append_row or summary_only. Default is append_row.
    settings.logging_level The log verbosity level. Allowed values are warning, info, debug. Default is info.
    settings.environment_variables Dictionary of environment variable name-value pairs to set for each batch scoring job.
  5. Create the deployment:

    Using the MLClient created earlier, create the deployment in the workspace. This command starts the deployment creation and returns a confirmation response while the deployment creation continues.

    ml_client.begin_create_or_update(deployment).result()

    Once the deployment is completed, set the new deployment as the default deployment in the endpoint:

    endpoint = ml_client.batch_endpoints.get(endpoint_name)
    endpoint.defaults.deployment_name = deployment.name
    ml_client.batch_endpoints.begin_create_or_update(endpoint).result()
  6. Check batch endpoint and deployment details.

    To check a batch deployment, run the following code:

    ml_client.batch_deployments.get(name=deployment.name, endpoint_name=endpoint.name)
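The scoring script you provided follows a simple contract: init() runs once per worker process, and run() is invoked once per mini-batch with a list of file paths. The following framework-free sketch illustrates that lifecycle; DummyModel and its predict method are hypothetical stand-ins, and the real run() returns a pandas DataFrame rather than a plain list:

```python
from typing import List


class DummyModel:
    # Hypothetical stand-in for the real model, for illustration only.
    def predict(self, path: str) -> int:
        return len(path) % 10  # pretend "digit" prediction


def init():
    # Called once per worker process, before any mini-batch is scored.
    global model
    model = DummyModel()


def run(mini_batch: List[str]) -> List[dict]:
    # Called once per mini-batch; returns one result row per input file.
    return [{"file": p, "class": model.predict(p)} for p in mini_batch]


init()
rows = run(["a.png", "bb.png"])
print(rows)
```

In a real deployment, the batch scoring runtime calls init() and run() for you; you never invoke them directly.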

Run batch endpoints and access results

Invoking a batch endpoint triggers a batch scoring job. The job name is returned from the invoke response and can be used to track the batch scoring progress. When running models for scoring in batch endpoints, you need to specify the path to the input data so that the endpoints can find the data you want to score. The following example shows how to start a new job over sample data from the MNIST dataset stored in an Azure storage account.

You can run and invoke a batch endpoint using Azure CLI, Azure Machine Learning SDK, or REST endpoints. For more details about these options, see Create jobs and input data for batch endpoints.

Note

How does parallelization work?

Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest that you either split the files into smaller files to achieve a higher level of parallelism or you decrease the number of files per mini-batch. Currently, batch deployments can't account for skews in a file's size distribution.
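The file-level partitioning described in this note can be sketched in plain Python. This chunking is illustrative only, not the service's actual implementation:

```python
import math
from typing import List


def split_into_mini_batches(files: List[str], mini_batch_size: int) -> List[List[str]]:
    # Work is distributed at the file level: each mini-batch holds up to
    # `mini_batch_size` files, regardless of individual file sizes.
    return [
        files[i : i + mini_batch_size]
        for i in range(0, len(files), mini_batch_size)
    ]


files = [f"image-{i}.png" for i in range(100)]
batches = split_into_mini_batches(files, mini_batch_size=10)
print(len(batches))  # 100 files with mini-batches of 10 -> 10 mini-batches
assert len(batches) == math.ceil(len(files) / 10)
```

Note how one very large file and one tiny file count equally toward the mini-batch size, which is why skewed file sizes can lead to uneven work per mini-batch.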

Tip

What's the difference between the inputs and input parameter when you invoke an endpoint?

In general, you can use the inputs = {} dictionary parameter with the invoke method to provide an arbitrary number of inputs to a batch endpoint that contains either a model deployment or a pipeline deployment.

For a model deployment, you can use the input parameter as a shorter way to specify the input data location for the deployment. This approach works because a model deployment always takes only one data input.

job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint_name,
    deployment_name=deployment.name,
    input=Input(
        path="https://azuremlexampledata.blob.core.windows.net/data/mnist/sample/",
        type=AssetTypes.URI_FOLDER,
    ),
)

Batch endpoints support reading files or folders that are located in different locations. To learn more about the supported types and how to specify them, see Accessing data from batch endpoints jobs.

Monitor batch job execution progress

Batch scoring jobs usually take some time to process the entire set of inputs.

The following code checks the job status and outputs a link to the Azure Machine Learning studio for further details.

ml_client.jobs.get(job.name)

Check batch scoring results

The job outputs are stored in cloud storage, either in the workspace's default blob storage, or the storage you specified. To learn how to change the defaults, see Configure the output location. The following steps allow you to view the scoring results in Azure Storage Explorer when the job is completed:

  1. Run the following Azure CLI command to open the batch scoring job in Azure Machine Learning studio. The job's studio link is also included in the response of invoke, as the value of interactionEndpoints.Studio.endpoint.

    az ml job show -n $JOB_NAME --web
    
  2. In the graph of the job, select the batchscoring step.

  3. Select the Outputs + logs tab and then select Show data outputs.

  4. From Data outputs, select the icon to open Storage Explorer.

    Studio screenshot showing view data outputs location.

    The scoring results in Storage Explorer are similar to the following sample page:

    Screenshot of the scoring output.

Configure the output location

By default, the batch scoring results are stored in the workspace's default blob store, in a folder named after the job name (a system-generated GUID). You can configure where to store the scoring outputs when you invoke the batch endpoint.

Use params_override to write the output to any folder in a registered Azure Machine Learning data store. Only registered data stores are supported as output paths. In this example, you use the default data store:

batch_ds = ml_client.datastores.get_default()

Once you've identified the data store you want to use, configure the output as follows:

import random

filename = f"predictions-{random.randint(0,99999)}.csv"

job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint_name,
    input=Input(
        path="https://azuremlexampledata.blob.core.windows.net/data/mnist/sample/",
        type=AssetTypes.URI_FOLDER,
    ),
    params_override=[
        {"output_dataset.datastore_id": f"azureml:{batch_ds.id}"},
        {"output_dataset.path": f"/{endpoint_name}/"},
        {"output_file_name": filename},
    ],
)

Warning

You must use a unique output location. If the output file exists, the batch scoring job will fail.

Important

Unlike inputs, outputs can be stored only in Azure Machine Learning data stores that run on blob storage accounts.

Overwrite deployment configuration for each job

When you invoke a batch endpoint, some settings can be overwritten to make best use of the compute resources and to improve performance. The following settings can be configured on a per-job basis:

  • Instance count: use this setting to overwrite the number of instances to request from the compute cluster. For example, for a larger volume of data inputs, you might want to use more instances to speed up the end-to-end batch scoring.
  • Mini-batch size: use this setting to overwrite the number of files to include in each mini-batch. The number of mini-batches is determined by the total number of input files and the mini-batch size. A smaller mini-batch size generates more mini-batches. Mini-batches can run in parallel, but there might be extra scheduling and invocation overhead.
  • Other settings, such as max retries, timeout, and error threshold, can be overwritten. These settings might impact the end-to-end batch scoring time for different workloads.
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint_name,
    input=Input(
        path="https://azuremlexampledata.blob.core.windows.net/data/mnist/sample/"
    ),
    params_override=[{"mini_batch_size": "20"}, {"compute.instance_count": "5"}],
)
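To reason about how these overrides interact, you can sketch the arithmetic as follows. This is illustrative only; actual scheduling also depends on cluster and node availability:

```python
import math


def batch_scoring_plan(total_files: int, mini_batch_size: int,
                       instance_count: int, max_concurrency_per_instance: int):
    # The number of mini-batches is determined by the file count and the
    # mini-batch size; the maximum number of mini-batches scored in parallel
    # is bounded by instance_count * max_concurrency_per_instance.
    mini_batches = math.ceil(total_files / mini_batch_size)
    parallel_slots = instance_count * max_concurrency_per_instance
    return mini_batches, parallel_slots


# With the overrides above (mini_batch_size=20, instance_count=5) and the
# deployment's max_concurrency_per_instance=2, for a hypothetical 100 files:
print(batch_scoring_plan(total_files=100, mini_batch_size=20,
                         instance_count=5, max_concurrency_per_instance=2))
# 100 files / 20 per mini-batch -> 5 mini-batches; 5 * 2 -> 10 parallel slots
```

Here there are fewer mini-batches than parallel slots, so a smaller mini-batch size (or fewer instances) would use the compute more evenly.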

Add deployments to an endpoint

Once you have a batch endpoint with a deployment, you can continue to refine your model and add new deployments. Batch endpoints will continue serving the default deployment while you develop and deploy new models under the same endpoint. Deployments don't affect one another.

In this example, you add a second deployment that uses a model built with Keras and TensorFlow to solve the same MNIST problem.

Add a second deployment

  1. Create an environment where your batch deployment will run. Include in the environment any dependency your code requires for running. You also need to add the libraries azureml-core and azureml-dataset-runtime[fuse], as they're required for batch deployments to work. The following environment definition has the required libraries to run a model with TensorFlow.

    Get a reference to the environment:

    env = Environment(
        name="batch-tensorflow-py38",
        conda_file="deployment-keras/environment/conda.yaml",
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
    )

    The conda file used looks as follows:

    deployment-keras/environment/conda.yaml

    name: tensorflow-env
    channels:
      - conda-forge
    dependencies:
      - python=3.8.5
      - pip
      - pip:
        - pandas
        - tensorflow
        - pillow
        - azureml-core
        - azureml-dataset-runtime[fuse]
    
  2. Create a scoring script for the model:

    deployment-keras/code/batch_driver.py

    import os
    import numpy as np
    import pandas as pd
    import tensorflow as tf
    from typing import List
    from os.path import basename
    from PIL import Image
    from tensorflow.keras.models import load_model
    
    
    def init():
        global model
    
        # AZUREML_MODEL_DIR is an environment variable created during deployment
        model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    
        # load the model
        model = load_model(model_path)
    
    
    def run(mini_batch: List[str]) -> pd.DataFrame:
        print(f"Executing run method over batch of {len(mini_batch)} files.")
    
        results = []
        for image_path in mini_batch:
            data = Image.open(image_path)
            data = np.array(data)
            data_batch = tf.expand_dims(data, axis=0)
    
            # perform inference
            pred = model.predict(data_batch)
    
            # Compute probabilities, classes and labels
            pred_prob = tf.math.reduce_max(tf.math.softmax(pred, axis=-1)).numpy()
            pred_class = tf.math.argmax(pred, axis=-1).numpy()
    
            results.append(
                {
                    "file": basename(image_path),
                    "class": pred_class[0],
                    "probability": pred_prob,
                }
            )
    
        return pd.DataFrame(results)
    
  3. Create a deployment definition

    deployment_keras = ModelBatchDeployment(
        name="mnist-keras-dpl",
        description="A deployment using Keras to solve the MNIST classification dataset.",
        endpoint_name=endpoint_name,
        model=model,
        code_configuration=CodeConfiguration(
            code="deployment-keras/code/", scoring_script="batch_driver.py"
        ),
        environment=env,
        compute=compute_name,
        settings=ModelBatchDeploymentSettings(
            instance_count=2,
            max_concurrency_per_instance=2,
            mini_batch_size=10,
            output_action=BatchDeploymentOutputAction.APPEND_ROW,
            output_file_name="predictions.csv",
            retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
            logging_level="info",
        ),
    )
  4. Create the deployment:

    Using the MLClient created earlier, create the deployment in the workspace. This command starts the deployment creation and returns a confirmation response while the deployment creation continues.

    ml_client.begin_create_or_update(deployment_keras).result()

Test a non-default batch deployment

To test the new non-default deployment, you need to know the name of the deployment you want to run.

job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint_name,
    deployment_name=deployment_keras.name,
    input=Input(
        path="https://azuremlexampledata.blob.core.windows.net/data/mnist/sample/",
        type=AssetTypes.URI_FOLDER,
    ),
)

Notice that the deployment_name parameter specifies the deployment to execute. This parameter allows you to invoke a non-default deployment without updating the default deployment of the batch endpoint.

Update the default batch deployment

Although you can invoke a specific deployment inside an endpoint, you typically want to invoke the endpoint itself and let it decide which deployment to use: the default deployment. You can change the default deployment (and consequently, the model serving the endpoint) without changing your contract with the users who invoke the endpoint. Use the following code to update the default deployment:

endpoint = ml_client.batch_endpoints.get(endpoint_name)
endpoint.defaults.deployment_name = deployment_keras.name
ml_client.batch_endpoints.begin_create_or_update(endpoint).result()

Delete the batch endpoint and the deployment

If you won't be using the old batch deployment, delete it by running the following code.

ml_client.batch_deployments.begin_delete(
    endpoint_name=endpoint_name, name=deployment.name
).result()

Run the following code to delete the batch endpoint and all its underlying deployments. Batch scoring jobs won't be deleted.

ml_client.batch_endpoints.begin_delete(name=endpoint_name)
