Learn how to use the Microsoft Visual Studio Code debugger to test and debug online endpoints locally before deploying them to Azure.
Azure Machine Learning local endpoints help you test and debug your scoring script, environment configuration, code configuration, and machine learning model locally.
Important
This feature is currently in public preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
Debugging endpoints locally before deploying them to the cloud can help you catch errors in your code and configuration earlier. You have different options for debugging endpoints locally with Visual Studio Code.
The examples in this article are based on code samples contained in the azureml-examples GitHub repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to azureml-examples/cli:
git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/cli
If you haven't already set defaults for the Azure CLI, save them now. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run the following commands. Replace these parameters with values for your specific configuration:
Replace <subscription> with your Azure subscription ID.
Replace <workspace> with your Azure Machine Learning workspace name.
Replace <resource-group> with the Azure resource group that contains your workspace.
Replace <location> with the Azure region that contains your workspace.
az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
Tip
You can see what your current defaults are by using the az configure -l command.
The examples in this article can be found in the Jupyter notebook called Debug online endpoints locally in Visual Studio Code within the azureml-examples repository. To run the code locally, clone the repo and then change directories to the notebook's parent directory, sdk/python/endpoints/online/managed:
git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples
cd sdk/python/endpoints/online/managed
Open the Jupyter notebook and import the required modules:
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    ManagedOnlineEndpoint,
    ManagedOnlineDeployment,
    Model,
    CodeConfiguration,
    Environment,
)
from azure.identity import DefaultAzureCredential
Azure Machine Learning local endpoints use Docker and Visual Studio Code development containers (dev containers) to build and configure a local debugging environment. With dev containers, you can take advantage of Visual Studio Code features from inside a Docker container. For more information on dev containers, see Create a development container.
To debug online endpoints locally in Visual Studio Code, use the --vscode-debug flag when creating or updating an Azure Machine Learning online deployment. The following command uses a deployment example from the examples repo:
az ml online-deployment create --file endpoints/online/managed/sample/blue-deployment.yml --local --vscode-debug
Important
On Windows Subsystem for Linux (WSL), you'll need to update your PATH environment variable to include the path to the Visual Studio Code executable or use WSL interop. For more information, see Windows interoperability with Linux.
A Docker image is built locally. Any environment configuration or model file errors are surfaced at this stage of the process.
Note
The first time you launch a new or updated dev container, it can take several minutes.
Once the image successfully builds, your dev container opens in a Visual Studio Code window.
You'll use a few Visual Studio Code extensions to debug your deployments in the dev container. Azure Machine Learning automatically installs these extensions in your dev container.
Before starting your debug session, make sure that the Visual Studio Code extensions have finished installing in your dev container.
To debug online endpoints locally in Visual Studio Code with the Python SDK, set the local and vscode_debug flags when creating or updating an Azure Machine Learning online deployment, mirroring the deployment example from the examples repo.
Start debug session
Once your environment is set up, use the Visual Studio Code debugger to test and debug your deployment locally.
Open your scoring script in Visual Studio Code.
Tip
The score.py script used by the endpoint deployed earlier is located at azureml-examples/cli/endpoints/online/managed/sample/score.py in the repository you cloned. However, the steps in this guide work with any scoring script.
Set a breakpoint anywhere in your scoring script.
To debug startup behavior, place your breakpoint(s) inside the init function.
To debug scoring behavior, place your breakpoint(s) inside the run function.
Select the Visual Studio Code Run and Debug view.
In the Run and Debug dropdown, select AzureML: Debug Local Endpoint to start debugging your endpoint locally.
In the Breakpoints section of the Run view, check that:
Raised Exceptions is unchecked
Uncaught Exceptions is checked
Select the play icon next to the Run and Debug dropdown to start your debugging session.
At this point, any breakpoints in your init function are caught. Use the debug actions to step through your code. For more information on debug actions, see the debug actions guide.
For more information on the Visual Studio Code debugger, see Debugging.
Now that your application is running in the debugger, try making a prediction to debug your scoring script, for example with the az ml online-endpoint invoke --local command. The scoring URI can be found in the scoring_uri property of the endpoint's details (az ml online-endpoint show).
At this point, any breakpoints in your run function are caught. Use the debug actions to step through your code. For more information on debug actions, see the debug actions guide.
Now that your application is running in the debugger, try making a prediction to debug your scoring script.
Use the invoke method on your MLClient object to make a request to your local endpoint.
The scoring URI is the address where your endpoint listens for requests. The as_dict method of the endpoint object returns the same information that the show command displays in the Azure CLI. You can obtain the endpoint object through online_endpoints.get.
The scoring URI can be found in the scoring_uri key.
As you debug and troubleshoot your application, there are scenarios where you need to update your scoring script and configurations.
To apply changes to your code:
Update your code.
Restart your debug session using the Developer: Reload Window command in the command palette. For more information, see the command palette documentation.
Note
Since the directory containing your code and endpoint assets is mounted onto the dev container, any changes you make in the dev container are synced with your local file system.
For more extensive changes involving updates to your environment and endpoint configuration, use the az ml online-deployment update command. Doing so triggers a full image rebuild with your changes.
az ml online-deployment update --file <DEPLOYMENT-YAML-SPECIFICATION-FILE> --local --vscode-debug
Once the updated image is built and your development container launches, use the Visual Studio Code debugger to test and troubleshoot your updated endpoint.
For more extensive changes involving updates to your environment and endpoint configuration, use your MLClient's online_deployments.update method. Doing so triggers a full image rebuild with your changes.