What is an Azure Machine Learning compute instance?

An Azure Machine Learning compute instance is a managed cloud-based workstation for data scientists. Each compute instance has only one owner, although you can share files between multiple compute instances.

Compute instances make it easy to get started with Azure Machine Learning development and provide management and enterprise readiness capabilities for IT administrators.

Use a compute instance as your fully configured and managed development environment in the cloud for machine learning. A compute instance can also be used as a compute target for training and inferencing during development and testing.

For compute instance Jupyter functionality to work, make sure that web socket communication isn't disabled and that your network allows websocket connections to *.instances.azureml.net and *.instances.azureml.ms.

Important

Items marked (preview) in this article are currently in public preview. The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

Why use a compute instance?

A compute instance is a fully managed cloud-based workstation optimized for your machine learning development environment. It provides the following benefits:

Key benefits:

Productivity: You can build and deploy models using integrated notebooks and the following tools in Azure Machine Learning studio:

- Jupyter
- JupyterLab
- VS Code (preview)

A compute instance is fully integrated with your Azure Machine Learning workspace and studio. You can share notebooks and data with other data scientists in the workspace.

Managed & secure: Reduce your security footprint and add compliance with enterprise security requirements. Compute instances provide robust management policies and secure networking configurations such as:

- Autoprovisioning from Resource Manager templates or the Azure Machine Learning SDK
- Azure role-based access control (Azure RBAC)
- Virtual network support
- Azure Policy to disable SSH access
- Azure Policy to enforce creation in a virtual network
- Auto-shutdown/auto-start based on schedule
- TLS 1.2 enabled

Preconfigured for ML: Save time on setup tasks with preconfigured and up-to-date ML packages, deep learning frameworks, and GPU drivers.

Fully customizable: Broad support for Azure VM types, including GPUs, and persisted low-level customization, such as installing packages and drivers, make advanced scenarios a breeze. You can also use setup scripts to automate customization.

Tools and environments

Azure Machine Learning compute instance enables you to author, train, and deploy models in a fully integrated notebook experience in your workspace.

You can run notebooks from your Azure Machine Learning workspace, Jupyter, JupyterLab, or Visual Studio Code. VS Code Desktop can be configured to access your compute instance. Or use VS Code for the Web, directly from the browser, and without any required installations or dependencies.

We recommend you try VS Code for the Web to take advantage of the easy integration and rich development environment it provides. VS Code for the Web gives you many of the features of VS Code Desktop that you love, including search and syntax highlighting while browsing and editing. For more information about using VS Code Desktop and VS Code for the Web, see Launch Visual Studio Code integrated with Azure Machine Learning (preview) and Work in VS Code remotely connected to a compute instance (preview).

You can install packages and add kernels to your compute instance.

The following tools and environments are already installed on the compute instance:

General tools & environments:

- Drivers: CUDA, cuDNN, NVIDIA, Blob FUSE
- Intel MPI library
- Azure CLI
- Azure Machine Learning samples
- Docker
- Nginx
- NCCL 2.0
- Protobuf
R tools & environments:

- R kernel

You can add RStudio or Posit Workbench (formerly RStudio Workbench) when you create the instance.

Python tools & environments:

- Anaconda Python
- Jupyter and extensions
- JupyterLab and extensions
- Azure Machine Learning SDK for Python (from PyPI): includes azure-ai-ml and many common Azure extra packages. To see the full list, open a terminal window on your compute instance and run: conda list -n azureml_py310_sdkv2 ^azure
- Other PyPI packages: jupytext, tensorboard, nbconvert, notebook, Pillow
- Conda packages: cython, numpy, ipykernel, scikit-learn, matplotlib, tqdm, joblib, nodejs
- Deep learning packages: PyTorch, TensorFlow, Keras, Horovod, MLflow, pandas-ml, scrapbook
- ONNX packages: keras2onnx, onnx, onnxconverter-common, skl2onnx, onnxmltools
- Azure Machine Learning Python samples

The compute instance has Ubuntu as the base OS.
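With the Azure Machine Learning SDK for Python preinstalled, you can work with your workspace directly from a notebook or terminal on the instance. The following is a minimal sketch; the subscription, resource group, and workspace values are placeholders you replace with your own:

```python
# Minimal sketch: connect to the workspace from the preinstalled
# azureml_py310_sdkv2 environment. The subscription ID, resource group, and
# workspace name are placeholders.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# Quick connectivity check: list the compute targets in the workspace.
for compute in ml_client.compute.list():
    print(compute.name, compute.type)
```

If a workspace config.json is available on the instance, MLClient.from_config(credential=DefaultAzureCredential()) works as well, without passing the workspace details explicitly.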

Accessing files

Notebooks and Python scripts are stored in the default storage account of your workspace, in an Azure file share. These files are located under your "User files" directory. This storage makes it easy to share notebooks between compute instances. The storage account also keeps your notebooks safely preserved when you stop or delete a compute instance.

The Azure file share of your workspace is mounted as a drive on the compute instance. This drive is the default working directory for Jupyter, JupyterLab, RStudio, and Posit Workbench. This means that the notebooks and other files you create in Jupyter, JupyterLab, VS Code for the Web, RStudio, or Posit Workbench are automatically stored on the file share and available to use in other compute instances as well.

The files in the file share are accessible from all compute instances in the same workspace. Any changes to these files on the compute instance are reliably persisted back to the file share.

You can also clone the latest Azure Machine Learning samples to your folder under the user files directory in the workspace file share.

Writing small files can be slower on network drives than writing to the compute instance's local disk itself. If you're writing many small files, try using a directory directly on the compute instance, such as /tmp. Note that files on the compute instance's local disk aren't accessible from other compute instances.
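As an illustration, here's a minimal sketch of that pattern. The shared destination path is a placeholder; the user files share is typically mounted under ~/cloudfiles/code/Users/<your-alias>, but adjust it to your own folder:

```python
# Minimal sketch: write many small intermediate files to fast local /tmp, then
# copy only the final artifact back to the mounted workspace file share.
import shutil
import tempfile
from pathlib import Path

work_dir = Path(tempfile.mkdtemp(dir="/tmp"))  # local disk on the compute instance

for i in range(1000):
    (work_dir / f"chunk_{i:04d}.txt").write_text(f"partial result {i}\n")

# Combine the chunks into one artifact.
artifact = work_dir / "results.txt"
artifact.write_text("".join(p.read_text() for p in sorted(work_dir.glob("chunk_*.txt"))))

# Persist only the final artifact to the shared file share so it survives
# stop/delete and is visible from other compute instances. Placeholder path:
shared_dir = Path.home() / "cloudfiles" / "code" / "Users" / "<your-alias>"
shutil.copy(artifact, shared_dir / "results.txt")
```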

Don't store training data on the notebooks file share. For information on the various options to store data, see Access data in a job.

You can use the /tmp directory on the compute instance for your temporary data. However, don't write large data files to the OS disk of the compute instance; the OS disk has 120 GB of capacity. You can also store temporary training data on the temporary disk mounted on /mnt. The temporary disk size is based on the VM size you choose, so a larger VM can hold larger amounts of data. Any software packages you install are saved on the OS disk of the compute instance. Note that customer-managed key encryption is currently not supported for the OS disk; the OS disk for the compute instance is encrypted with Microsoft-managed keys.
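For a quick check of free space before writing temporary data, a minimal sketch using only the Python standard library:

```python
# Minimal sketch: report free space on the OS disk (/) and the temporary disk (/mnt).
import shutil

for mount in ("/", "/mnt"):
    usage = shutil.disk_usage(mount)
    print(f"{mount}: {usage.free / 2**30:.1f} GiB free of {usage.total / 2**30:.1f} GiB")
```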

You can also mount datastores and datasets.
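Rather than mounting, a common pattern with the preinstalled v2 SDK (azure-ai-ml) is to browse the workspace datastores and resolve a registered data asset's URI. The data asset name and version below are placeholders:

```python
# Minimal sketch: list workspace datastores and resolve a registered data asset's
# URI with the v2 SDK. "my-data" and version "1" are placeholders.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Assumes a workspace config.json is available on the compute instance; otherwise
# pass subscription_id, resource_group_name, and workspace_name explicitly.
ml_client = MLClient.from_config(credential=DefaultAzureCredential())

for datastore in ml_client.datastores.list():
    print(datastore.name, datastore.type)

data_asset = ml_client.data.get(name="my-data", version="1")
print(data_asset.path)  # a URI you can read from the compute instance
```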

Create

Follow the steps in Create resources you need to get started to create a basic compute instance.

For more options, see create a new compute instance.

As an administrator, you can create a compute instance for others in the workspace. Single sign-on (SSO) has to be disabled for such a compute instance.

You can also use a setup script for an automated way to customize and configure the compute instance.

You can also create a compute instance programmatically, for example with the Azure Machine Learning Python SDK or the Azure CLI, or provision one from a Resource Manager template.
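For illustration, a minimal sketch using the azure-ai-ml SDK; the compute name and VM size are placeholders, and it assumes a workspace config.json is available for MLClient.from_config:

```python
# Minimal sketch: create a compute instance with the azure-ai-ml SDK.
# The name and VM size are placeholders.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ComputeInstance
from azure.identity import DefaultAzureCredential

# Assumes a workspace config.json is available; otherwise pass the workspace
# details to MLClient explicitly.
ml_client = MLClient.from_config(credential=DefaultAzureCredential())

ci = ComputeInstance(name="my-compute-instance", size="STANDARD_DS3_V2")
ml_client.compute.begin_create_or_update(ci).result()
```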

The dedicated cores per region per VM family quota and total regional quota, which apply to compute instance creation, are unified and shared with the Azure Machine Learning training compute cluster quota. Stopping the compute instance doesn't release quota, which ensures you're able to restart the compute instance. Don't stop the compute instance through the OS terminal by doing a sudo shutdown.

The compute instance comes with a P10 OS disk. The temp disk type depends on the VM size you choose. Currently, it isn't possible to change the OS disk type.

Compute target

Compute instances can be used as a training compute target, similar to Azure Machine Learning compute training clusters. However, a compute instance has only a single node, while a compute cluster can have many nodes.

A compute instance:

  • Has a job queue.
  • Runs jobs securely in a virtual network environment, without requiring enterprises to open up the SSH port. The job executes in a containerized environment and packages your model dependencies in a Docker container.
  • Can run multiple small jobs in parallel. One job per vCPU can run in parallel while the rest of the jobs are queued.
  • Supports single-node multi-GPU distributed training jobs.
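For example, a minimal sketch that submits a command job to a compute instance by name; the source folder, command, environment, and compute name are placeholders, and it assumes a workspace config.json is available for MLClient.from_config:

```python
# Minimal sketch: use a compute instance as the compute target for a command job.
# The code folder, script, environment, and compute name are placeholders.
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

# Assumes a workspace config.json is available; otherwise pass the workspace
# details to MLClient explicitly.
ml_client = MLClient.from_config(credential=DefaultAzureCredential())

job = command(
    code="./src",                              # folder containing train.py
    command="python train.py",
    environment="azureml:my-training-env:1",   # placeholder registered environment
    compute="my-compute-instance",             # name of the compute instance
    display_name="train-on-compute-instance",
)
returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.studio_url)
```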

You can use a compute instance as a local inferencing deployment target for test and debug scenarios.

Tip

The compute instance has a 120-GB OS disk. If you run out of disk space and get into an unusable state, clear at least 5 GB of disk space on the OS disk (mounted on /) through the compute instance terminal by removing files or folders, and then run sudo reboot. To access the terminal, go to the compute list page or the compute instance details page and select the Terminal link. You can check available disk space by running df -h in the terminal. Clear at least 5 GB of space before you run sudo reboot, and don't stop or restart the compute instance through the studio until that space is cleared. The temporary disk is freed after the restart, so you don't need to clear space on the temp disk manually. Auto-shutdowns, including scheduled start or stop and idle shutdowns, don't work while the compute instance disk is full.