How to deploy pipelines with batch endpoints

APPLIES TO: Azure CLI ml extension v2 (current) Python SDK azure-ai-ml v2 (current)

You can deploy pipeline components under a batch endpoint, providing a convenient way to operationalize them in Azure Machine Learning. In this article, you'll learn how to create a batch deployment that contains a simple pipeline. You'll learn to:

  • Create and register a pipeline component
  • Create a batch endpoint and deploy a pipeline component
  • Test the deployment

About this example

In this example, we're going to deploy a pipeline component consisting of a simple command job that prints "hello world!". This component requires no inputs or outputs and is the simplest pipeline deployment scenario.

The example in this article is based on code samples contained in the azureml-examples repository. To run the commands locally without having to copy/paste YAML and other files, first clone the repo and then change directories to the folder:

git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/cli

The files for this example are in:

cd endpoints/batch/deploy-pipelines/hello-batch

Follow along in Jupyter notebooks

You can follow along with the Python SDK version of this example by opening the sdk-deploy-and-test.ipynb notebook in the cloned repository.

Prerequisites

  • An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the free or paid version of Azure Machine Learning.

  • An Azure Machine Learning workspace. To create a workspace, see Manage Azure Machine Learning workspaces.

  • Ensure that you have the following permissions in the Machine Learning workspace:

    • Create or manage batch endpoints and deployments: Use an Owner, Contributor, or Custom role that allows Microsoft.MachineLearningServices/workspaces/batchEndpoints/*.
    • Create Azure Resource Manager deployments in the workspace resource group: Use an Owner, Contributor, or Custom role that allows Microsoft.Resources/deployments/write in the resource group where the workspace is deployed.
  • Install the following software to work with Machine Learning:

    Install the Azure CLI, and then run the following command to install the ml extension for Azure Machine Learning:

    az extension add -n ml
    

    Pipeline component deployments for batch endpoints were introduced in version 2.7 of the ml extension for the Azure CLI. Use the az extension update --name ml command to get the latest version.


Connect to your workspace

The workspace is the top-level resource for Machine Learning. It provides a centralized place to work with all artifacts you create when you use Machine Learning. In this section, you connect to the workspace where you perform your deployment tasks.

In the following command, enter the values for your subscription ID, workspace, location, and resource group:

az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
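
If you're following along with the Python SDK instead of the CLI, connecting to the workspace can be sketched roughly as follows. This sketch assumes the azure-ai-ml and azure-identity packages are installed; the placeholder values and the ml_client variable name are illustrative, and ml_client is reused in the later SDK sketches.

from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Authenticate and get a handle to the workspace (replace the placeholder values).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)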

Create the pipeline component

Batch endpoints can deploy either models or pipeline components. Pipeline components are reusable, and you can streamline your MLOps practice by using shared registries to move these components from one workspace to another.

The pipeline component in this example contains a single step that only prints a "hello world" message in the logs. It doesn't require any inputs or outputs.

The hello-component/hello.yml file contains the configuration for the pipeline component:

hello-component/hello.yml

$schema: https://azuremlschemas.azureedge.net/latest/pipelineComponent.schema.json
name: hello_batch
display_name: Hello Batch component
version: 1
type: pipeline
jobs:
  main_job:
    type: command
    component:
      code: src
      environment: azureml://registries/azureml/environments/sklearn-1.5/labels/latest
      command: >-
        python hello.py
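
The component's command runs src/hello.py. That script isn't reproduced in this article, but given that the step only prints a "hello world!" message, a minimal sketch of what such a script could look like is:

# src/hello.py -- illustrative sketch; see the cloned repository for the actual file
print("hello world!")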

Register the component:

az ml component create -f hello-component/hello.yml
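
If you're using the Python SDK, the same registration can be sketched as follows, reusing the ml_client handle from the connection step:

from azure.ai.ml import load_component

# Load the pipeline component definition from its YAML file and register it in the workspace.
hello_component = load_component(source="hello-component/hello.yml")
ml_client.components.create_or_update(hello_component)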

Create a batch endpoint

  1. Provide a name for the endpoint. A batch endpoint's name must be unique within each Azure region because the name is used to construct the invocation URI. To ensure uniqueness, append a distinguishing suffix to the name specified in the following code.

    ENDPOINT_NAME="hello-batch"
    
  2. Configure the endpoint:

    The endpoint.yml file contains the endpoint's configuration.

    endpoint.yml

    $schema: https://azuremlschemas.azureedge.net/latest/batchEndpoint.schema.json
    name: hello-batch
    description: A hello world endpoint for component deployments.
    auth_mode: aad_token
    
  3. Create the endpoint (a Python SDK sketch of steps 2 and 3 follows this procedure):

    az ml batch-endpoint create --name $ENDPOINT_NAME  -f endpoint.yml
    
  4. Query the endpoint URI:

    az ml batch-endpoint show --name $ENDPOINT_NAME
    
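As noted in step 3, here is a minimal Python SDK sketch of configuring and creating the same endpoint, reusing the ml_client handle from the earlier sketch; the endpoint_name variable is illustrative and is carried into the later SDK sketches.

from azure.ai.ml.entities import BatchEndpoint

endpoint_name = "hello-batch"

# Define and create the batch endpoint (the SDK counterpart of endpoint.yml plus the create command).
endpoint = BatchEndpoint(
    name=endpoint_name,
    description="A hello world endpoint for component deployments.",
)
ml_client.batch_endpoints.begin_create_or_update(endpoint).result()

# Retrieve the endpoint to inspect its properties, including the invocation URI.
print(ml_client.batch_endpoints.get(endpoint_name))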

Deploy the pipeline component

To deploy the pipeline component, we have to create a batch deployment. A deployment is a set of resources required for hosting the asset that does the actual work.

  1. Create a compute cluster. Batch endpoints and deployments run on compute clusters, and they can use any Azure Machine Learning compute cluster that already exists in the workspace, so multiple batch deployments can share the same compute infrastructure. This example uses an Azure Machine Learning compute cluster called batch-cluster. Verify that the compute cluster exists in the workspace, or create it with the following command:

    az ml compute create -n batch-cluster --type amlcompute --min-instances 0 --max-instances 5
    
  2. Configure the deployment:

    The deployment.yml file contains the deployment's configuration. You can check the full batch endpoint YAML schema for extra properties.

    deployment.yml

    $schema: https://azuremlschemas.azureedge.net/latest/pipelineComponentBatchDeployment.schema.json
    name: hello-batch-dpl
    endpoint_name: hello-pipeline-batch
    type: pipeline
    component: azureml:hello_batch@latest
    settings:
        default_compute: batch-cluster
    
  3. Create the deployment:

    Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.

    az ml batch-deployment create --endpoint $ENDPOINT_NAME -f deployment.yml --set-default
    

    Tip

    Notice the use of the --set-default flag to indicate that this new deployment is now the default.

  4. Your deployment is ready for use. For reference, a Python SDK sketch of steps 1 through 3 follows this procedure.
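
The following is a minimal Python SDK sketch of the same procedure (compute, deployment, and default-deployment assignment). It reuses the ml_client and endpoint_name names from the earlier sketches, which are illustrative rather than part of the original example.

from azure.ai.ml.entities import AmlCompute, PipelineComponentBatchDeployment

# Ensure the compute cluster exists (counterpart of the az ml compute create step).
ml_client.compute.begin_create_or_update(
    AmlCompute(name="batch-cluster", min_instances=0, max_instances=5)
).result()

# Create the deployment from the latest registered version of the component.
hello_component = ml_client.components.get(name="hello_batch", label="latest")
deployment = PipelineComponentBatchDeployment(
    name="hello-batch-dpl",
    endpoint_name=endpoint_name,
    component=hello_component,
    settings={"default_compute": "batch-cluster"},
)
ml_client.batch_deployments.begin_create_or_update(deployment).result()

# Make the new deployment the endpoint's default (counterpart of the --set-default flag).
endpoint = ml_client.batch_endpoints.get(endpoint_name)
endpoint.defaults.deployment_name = deployment.name
ml_client.batch_endpoints.begin_create_or_update(endpoint).result()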

Test the deployment

Once the deployment is created, it's ready to receive jobs. You can invoke the default deployment as follows:

JOB_NAME=$(az ml batch-endpoint invoke -n $ENDPOINT_NAME --query name -o tsv)

Tip

In this example, the pipeline doesn't have inputs or outputs. However, if the pipeline component requires some, they can be indicated at invocation time. To learn about how to indicate inputs and outputs, see Create jobs and input data for batch endpoints or see the tutorial How to deploy a pipeline to perform batch scoring with preprocessing (preview).

You can monitor the progress of the job and stream its logs by using:

az ml job stream -n $JOB_NAME
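
With the Python SDK, invoking the default deployment and streaming the resulting job's logs can be sketched as follows, again reusing the illustrative ml_client and endpoint_name from the earlier sketches:

# Invoke the default deployment; this pipeline takes no inputs.
job = ml_client.batch_endpoints.invoke(endpoint_name=endpoint_name)

# Stream the logs of the resulting pipeline job until it completes.
ml_client.jobs.stream(job.name)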

Clean up resources

Once you're done, delete the associated resources from the workspace:

Run the following command to delete the batch endpoint and its underlying deployment. The --yes flag confirms the deletion.

az ml batch-endpoint delete -n $ENDPOINT_NAME --yes

(Optional) Delete the compute cluster, unless you plan to reuse it with later deployments:

az ml compute delete -n batch-cluster
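
The equivalent cleanup with the Python SDK can be sketched as:

# Delete the batch endpoint and its deployments.
ml_client.batch_endpoints.begin_delete(name=endpoint_name).result()

# (Optional) Delete the compute cluster.
ml_client.compute.begin_delete(name="batch-cluster").result()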

Next steps