Author scoring scripts for batch deployments

APPLIES TO: Azure CLI ml extension v2 (current) Python SDK azure-ai-ml v2 (current)

Batch endpoints allow you to deploy models to perform long-running inference at scale. When deploying models, you need to create and specify a scoring script (also known as a batch driver script) that indicates how the model should be used over the input data to create predictions. In this article, you learn how to use scoring scripts in model deployments for different scenarios, along with their best practices.

Tip

MLflow models don't require a scoring script because one is autogenerated for you. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial Using MLflow models in batch deployments.

Warning

If you are deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and is not designed for batch execution. Follow this guideline to learn how to create one, depending on what your model does.

Understanding the scoring script

The scoring script is a Python file (.py) that contains the logic for how to run the model and read the input data submitted by the batch deployment executor. Each model deployment provides the scoring script (along with any other dependencies required) at creation time. It is usually indicated as follows:

deployment.yml

code_configuration:
  code: code
  scoring_script: batch_driver.py
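
If you use the Python SDK (azure-ai-ml) instead of YAML, the scoring script can be indicated with a CodeConfiguration object when creating the deployment. This is a minimal sketch; the deployment, endpoint, model, environment, and compute names are illustrative placeholders:

from azure.ai.ml.entities import BatchDeployment, CodeConfiguration

deployment = BatchDeployment(
    name="my-deployment",                  # illustrative deployment name
    endpoint_name="my-batch-endpoint",     # illustrative endpoint name
    model="azureml:my-model:1",            # illustrative model reference
    environment="azureml:my-env:1",        # illustrative environment reference
    code_configuration=CodeConfiguration(
        code="code",                       # folder containing the scoring script
        scoring_script="batch_driver.py",  # the batch driver script
    ),
    compute="my-cluster",                  # illustrative compute cluster
)

You would then submit it with ml_client.batch_deployments.begin_create_or_update(deployment), where ml_client is your MLClient instance.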

The scoring script must contain two methods:

The init method

Use the init() method for any costly or common preparation. For example, use it to load the model into memory. This function is called once at the beginning of the entire batch job. Your model's files are available in a path determined by the environment variable AZUREML_MODEL_DIR. Notice that depending on how your model was registered, its files may be contained in a folder (in the following example, the model has several files in a folder named model). See Using models that are folders to find out which folder your model uses.

import os

def init():
    global model

    # AZUREML_MODEL_DIR is an environment variable created during deployment
    # The path "model" is the name of the registered model's folder
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")

    # load the model (load_model is a placeholder for your framework's load function)
    model = load_model(model_path)

Notice that in this example the model is placed in a global variable model. Use global variables to make any asset needed to perform inference available to your scoring function.

The run method

Use the run(mini_batch: List[str]) -> Union[List[Any], pandas.DataFrame] method to perform the scoring of each mini-batch generated by the batch deployment. This method is called once per mini_batch generated for your input data. Batch deployments read data in batches according to how the deployment is configured.

import pandas as pd
from typing import List, Any, Union

def run(mini_batch: List[str]) -> Union[List[Any], pd.DataFrame]:
    results = []

    for file in mini_batch:
        (...)  # process "file" and append the prediction(s) to "results"

    return pd.DataFrame(results)

The method receives a list of file paths as a parameter (mini_batch). You can use this list to either iterate over each file and process it one by one, or read the entire batch and process it at once. The best option depends on your compute memory and the throughput you need to achieve. For an example of how to read entire batches of data at once, see High throughput deployments.
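
As a minimal sketch, assuming the input files are CSVs and that model is a scikit-learn-style estimator loaded in init(), a run() implementation that processes one file at a time could look like the following. The column names are illustrative:

import os
from typing import List

import pandas as pd

def run(mini_batch: List[str]) -> pd.DataFrame:
    results = []

    for file_path in mini_batch:
        # read each CSV file in the mini-batch (assumes tabular CSV inputs)
        data = pd.read_csv(file_path)

        # score the rows with the model loaded in init()
        predictions = model.predict(data)

        # keep the source file name next to each prediction
        results.append(pd.DataFrame({
            "file": os.path.basename(file_path),
            "prediction": predictions,
        }))

    return pd.concat(results, ignore_index=True)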

Note

How is work distributed?

Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files generates 10 batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism, or decreasing the number of files per mini-batch. At this moment, batch deployments can't account for skews in the file size distribution.

The run() method should return a Pandas DataFrame or an array/list. Each returned output element indicates one successful run of an input element in the input mini_batch. For file or folder data assets, each row/element returned represents a single file processed. For a tabular data asset, each row/element returned represents a row in a processed file.

Important

How to write predictions?

Whatever you return in the run() function is appended in the output predictions file generated by the batch job. It is important to return the right data type from this function. Return arrays when you need to output a single prediction. Return pandas DataFrames when you need to return multiple pieces of information. For instance, for tabular data you may want to append your predictions to the original record; use a pandas DataFrame for this case. Although a pandas DataFrame may contain column names, they are not included in the output file.

If you need to write predictions in a different way, you can customize outputs in batch deployments.

Warning

Do not output complex data types (or lists of complex data types) other than pandas.DataFrame in the run function. Those outputs are transformed to strings and become hard to read.

The resulting DataFrame or array is appended to the output file indicated. There's no requirement on the cardinality of the results (one file can generate one or many rows/elements in the output). All elements in the result DataFrame or array are written to the output file as-is (provided the output_action isn't summary_only).
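
The default output behavior is configured on the deployment itself. For instance, in the deployment YAML, output_action and output_file_name control whether rows are appended to a single file and what that file is called; the values shown here are illustrative:

output_action: append_row
output_file_name: predictions.csv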

Python packages for scoring

Any library that your scoring script requires to run needs to be indicated in the environment where your batch deployment runs. As with scoring scripts, environments are indicated per deployment. Usually, you indicate your requirements using a conda dependencies file, which may look as follows:

mnist/environment/conda.yaml

name: mnist-env
channels:
  - conda-forge
dependencies:
  - python=3.8.5
  - pip<22.0
  - pip:
    - torch==1.13.0
    - torchvision==0.14.0
    - pytorch-lightning
    - pandas
    - azureml-core
    - azureml-dataset-runtime[fuse]

Refer to Create a batch deployment for more details about how to indicate the environment for your model.
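
For reference, a deployment YAML might point to this conda file through an environment definition similar to the following. The environment name and base image are illustrative:

environment:
  name: mnist-torch-env
  image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest
  conda_file: environment/conda.yaml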

Writing predictions in a different way

By default, the batch deployment writes the model's predictions in a single file as indicated in the deployment. However, there are some cases where you need to write the predictions in multiple files. For instance, if the input data is partitioned, you typically want your output to be partitioned too. In those cases, you can customize outputs in batch deployments to indicate:

  • The file format (CSV, Parquet, JSON, and so on) used to write predictions.
  • The way data is partitioned in the output.

Read the article Customize outputs in batch deployments for an example of how to achieve it.

Source control of scoring scripts

It is highly advisable to put scoring scripts under source control.

Best practices for writing scoring scripts

When writing scoring scripts that work with large amounts of data, you need to take several factors into account, including:

  • The size of each file.
  • The amount of data in each file.
  • The amount of memory required to read each file.
  • The amount of memory required to read an entire batch of files.
  • The memory footprint of the model.
  • The memory footprint of the model when running over the input data.
  • The available memory in your compute.

Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files generates 10 batches of 10 files each (regardless of the size of the files involved). If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism, or decreasing the number of files per mini-batch. At this moment, batch deployments can't account for skews in the file size distribution.

Relationship between the degree of parallelism and the scoring script

Your deployment configuration controls the size of each mini-batch and the number of workers on each node. Take them into account when deciding whether to read the entire mini-batch to perform inference, run inference file by file, or run inference row by row (for tabular data). See Running inference at the mini-batch, file or the row level for the different approaches.

When running multiple workers on the same instance, take into account that memory is shared across all the workers. Usually, increasing the number of workers per node should be accompanied by a decrease in the mini-batch size or by a change in the scoring strategy (if data size and compute SKU remain the same).
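
These knobs are set on the deployment. For instance, in the deployment YAML, mini_batch_size and max_concurrency_per_instance (together with the instance count) determine how much data each run() call receives and how many workers share a node's memory; the values below are illustrative:

resources:
  instance_count: 2
max_concurrency_per_instance: 2
mini_batch_size: 10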

Running inference at the mini-batch, file or the row level

Batch endpoints call the run() function in your scoring script once per mini-batch. However, you decide whether to run inference over the entire batch, over one file at a time, or over one row at a time (if your data happens to be tabular).

Mini-batch level

You typically want to run inference over the batch all at once when you need to achieve high throughput in your batch scoring process. This is the case, for instance, if you run inference over a GPU and want to achieve saturation of the inference device. You may also be relying on a data loader that can handle the batching itself if the data doesn't fit in memory, like TensorFlow or PyTorch data loaders. In those cases, you may want to consider running inference over the entire batch.

Warning

Running inference at the batch level may require tight control over the input data size to correctly account for the memory requirements and avoid out-of-memory exceptions. Whether you can load the entire mini-batch in memory depends on the size of the mini-batch, the size of the instances in the cluster, and the number of workers on each node.

For an example of how to achieve it, see High throughput deployments. This example processes an entire batch of files at a time.
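
As a rough sketch of the idea, assuming CSV inputs that fit in memory and a model with a predict method loaded in init(), a run() implementation at the mini-batch level reads all files first and scores them with a single call:

from typing import List

import pandas as pd

def run(mini_batch: List[str]) -> pd.DataFrame:
    # read every file in the mini-batch into a single DataFrame
    # (assumes the whole mini-batch fits in memory)
    data = pd.concat(
        [pd.read_csv(file_path) for file_path in mini_batch],
        ignore_index=True,
    )

    # score all rows with one call to better utilize the inference device
    predictions = model.predict(data)

    # append the predictions to the original records
    data["prediction"] = predictions
    return data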

File level

One of the easiest ways to perform inference is to iterate over all the files in the mini-batch and run your model over each one. In some cases, like image processing, this may be a good idea. If your data is tabular, you may need to make a good estimation of the number of rows in each file to determine whether your model can handle the memory requirements to not just load the entire data into memory but also perform inference over it. Remember that some models (especially those based on recurrent neural networks) unfold and present a memory footprint that may not be linear with the number of rows. If your model is expensive in terms of memory, consider running inference at the row level.

Tip

If individual files are too big to be read at once, consider breaking them down into multiple smaller files to allow for better parallelization.

For an example of how to achieve it, see Image processing with batch deployments. This example processes one file at a time.

Row level (tabular)

For models that present challenges with the size of their inputs, you may want to consider running inference at the row level. Your batch deployment still provides your scoring script with a mini-batch of files; however, you read one file at a time, one row at a time. This may look inefficient, but for some deep learning models it may be the only way to perform inference without scaling up your hardware requirements.

For an example of how to achieve it, see Text processing with batch deployments. This example processes one row at a time.
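
As an illustrative sketch, assuming CSV inputs and a model whose predict method accepts a single-row DataFrame, a row-level run() might look like this:

from typing import List

import pandas as pd

def run(mini_batch: List[str]) -> pd.DataFrame:
    results = []

    for file_path in mini_batch:
        data = pd.read_csv(file_path)

        # score one row at a time to keep the memory footprint small
        for _, row in data.iterrows():
            prediction = model.predict(row.to_frame().T)
            results.append({"file": file_path, "prediction": prediction[0]})

    return pd.DataFrame(results)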

Using models that are folders

The environment variable AZUREML_MODEL_DIR contains the path to where the selected model is located, and it is typically used in the init() function to load the model into memory. However, some models may contain their files inside of a folder, and you may need to account for that when loading them. You can identify the folder structure of your model as follows:

  1. Go to the Azure Machine Learning portal.

  2. Go to the section Models.

  3. Select the model you are trying to deploy and select the Artifacts tab.

  4. Take note of the folder that is displayed. This folder was indicated when the model was registered.

    Screenshot showing the folder where the model artifacts are placed.

Then you can use this path to load the model:

import os

def init():
    global model

    # AZUREML_MODEL_DIR is an environment variable created during deployment
    # The path "model" is the name of the registered model's folder
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")

    # load the model (load_model is a placeholder for your framework's load function)
    model = load_model(model_path)
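
If you prefer not to hard-code the folder name, the following sketch discovers the subfolder placed under AZUREML_MODEL_DIR. It assumes the registered model contains exactly one top-level folder, and load_model remains a placeholder for your framework's load function:

import os

def init():
    global model

    # AZUREML_MODEL_DIR is an environment variable created during deployment
    model_dir = os.environ["AZUREML_MODEL_DIR"]

    # assume the registered model has exactly one top-level folder
    model_folder = next(
        entry for entry in os.listdir(model_dir)
        if os.path.isdir(os.path.join(model_dir, entry))
    )
    model_path = os.path.join(model_dir, model_folder)

    # load the model (load_model is a placeholder for your framework's load function)
    model = load_model(model_path)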

Next steps