Deploy model packages to online endpoints (preview)

A model package is a capability in Azure Machine Learning that allows you to collect all the dependencies required to deploy a machine learning model to a serving platform. Creating packages before deploying models provides robust and reliable deployment and a more efficient MLOps workflow. Packages can be moved across workspaces and even outside of Azure Machine Learning. Learn more in Model packages (preview).

Important

This feature is currently in public preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.

For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

In this article, you learn how to package a model and deploy it to an online endpoint in Azure Machine Learning.

Prerequisites

Before following the steps in this article, make sure you have the following prerequisites:

  • An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the free or paid version of Azure Machine Learning.

  • An Azure Machine Learning workspace. If you don't have one, use the steps in the How to manage workspaces article to create one.

  • Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the owner or contributor role for the Azure Machine Learning workspace, or a custom role. For more information, see Manage access to an Azure Machine Learning workspace.
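
If you need to grant that access, you can assign the role with the Azure CLI. A minimal sketch, where the assignee and the workspace resource ID are placeholders you replace with your own values:

az role assignment create --assignee "<user-or-group-id>" --role "Contributor" --scope "<workspace-resource-id>"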

About this example

In this example, you package a model of type custom and deploy it to an online endpoint for online inference.

The example in this article is based on code samples contained in the azureml-examples repository. To run the commands locally without having to copy/paste YAML and other files, first clone the repo and then change directories to the folder:

git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/cli

This section uses the example in the folder endpoints/online/deploy-packages/custom-model.
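
The file paths in the following commands are relative to that folder, so if you're running them yourself, change into it first:

cd endpoints/online/deploy-packages/custom-model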

Connect to your workspace

Connect to the Azure Machine Learning workspace where you'll do your work.

az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location>

Package the model

You can create model packages explicitly to control how the packaging operation is done. When you create a model package, you specify the:

  • Model to package: Each model package can contain only a single model. Azure Machine Learning doesn't support packaging of multiple models under the same model package.
  • Base environment: Environments are used to indicate the base image and the Python package dependencies your model needs. For MLflow models, Azure Machine Learning automatically generates the base environment. For custom models, you need to specify it.
  • Serving technology: The inferencing stack used to run the model.

Tip

If your model is an MLflow model, you don't need to create the model package manually. Azure Machine Learning packages it automatically before deployment. See Deploy MLflow models to online endpoints.

  1. Model packages require the model to be registered either in your workspace or in an Azure Machine Learning registry. In this example, you already have a local copy of the model in the repository, so you only need to publish the model to the registry in the workspace. You can skip this step if the model you're trying to deploy is already registered.

    MODEL_NAME='sklearn-regression'
    MODEL_PATH='model'
    az ml model create --name $MODEL_NAME --path $MODEL_PATH --type custom_model
    
  2. The model requires the following packages to run, and they're specified in a conda file:

    conda.yaml

    name: model-env
    channels:
      - conda-forge
    dependencies:
      - python=3.9
      - numpy=1.23.5
      - pip=23.0.1
      - scikit-learn=1.2.2
      - scipy=1.10.1
      - xgboost==1.3.3
    

    Note

    Notice how only the model's requirements are indicated in the conda YAML. Any package required for the inferencing server is included by the package operation.

    Tip

    If your model requires packages hosted in private feeds, you can configure your package to include them. Read Package a model that has dependencies in private Python feeds.

  3. Create a base environment that contains the model's requirements and a base image. Only dependencies required by your model are indicated in the base environment. For MLflow models, the base environment is optional; in that case, Azure Machine Learning autogenerates it for you.

    Create a base environment definition:

    sklearn-regression-env.yml

    $schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json
    name: sklearn-regression-env
    image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu22.04
    conda_file: conda.yaml
    description: An environment for models built with XGBoost and Scikit-learn.
    

    Then create the environment as follows:

    az ml environment create -f environment/sklearn-regression-env.yml
    
  4. Create a package specification:

    package-moe.yml

    $schema: http://azureml/sdk-2-0/ModelVersionPackage.json
    base_environment_source:
        type: environment_asset
        resource_id: azureml:sklearn-regression-env:1
    target_environment: sklearn-regression-online-pkg
    inferencing_server: 
        type: azureml_online
        code_configuration:
          code: src
          scoring_script: score.py
    
  5. Start the model package operation:

    MODEL_VERSION="1"  # set to the version assigned when the model was registered (1 if this was its first registration)
    az ml model package -n $MODEL_NAME -v $MODEL_VERSION --file package-moe.yml
    
  6. The result of the package operation is an environment.
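
You can confirm that the environment was produced. A quick check, assuming the package operation created version 1 of the target environment:

az ml environment show --name sklearn-regression-online-pkg --version 1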

Deploy the model package

Model packages can be deployed directly to online endpoints in Azure Machine Learning. Follow these steps to deploy a package to an online endpoint:

  1. Pick a name for an endpoint to host the deployment of the package and create it:

    ENDPOINT_NAME="sklearn-regression-online"
    
    az ml online-endpoint create -n $ENDPOINT_NAME
    
  2. Create the deployment using the package. Notice how the environment is configured with the package you created.

    deployment.yml

    $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
    name: with-package
    endpoint_name: sklearn-regression-online
    environment: azureml:sklearn-regression-online-pkg@latest
    instance_type: Standard_DS3_v2
    instance_count: 1
    

    Tip

    Notice you don't specify the model or scoring script in this example; they're all part of the package.

  3. Start the deployment:

    az ml online-deployment create -f deployment.yml
    
  4. At this point, the deployment is ready to be consumed. You can test how it's working by creating a sample request file:

    sample-request.json

    {
        "data": [
            [1,2,3,4,5,6,7,8,9,10], 
            [10,9,8,7,6,5,4,3,2,1]
        ]
    }
    
  5. Send the request to the endpoint:

    az ml online-endpoint invoke --name $ENDPOINT_NAME --deployment with-package -r sample-request.json
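
The invoke command above targets the with-package deployment directly. If you want regular endpoint traffic to reach the deployment as well, you can allocate traffic to it, for example:

az ml online-endpoint update --name $ENDPOINT_NAME --traffic "with-package=100"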
    

Next step