Track ML experiments and models with MLflow

Tracking refers to the process of saving all experiment-related information that you might find relevant for every experiment you run. Such metadata varies based on your project, but it can include:

  • Code
  • Environment details (OS version, Python packages)
  • Input data
  • Parameter configurations
  • Models
  • Evaluation metrics
  • Evaluation visualizations (confusion matrix, importance plots)
  • Evaluation results (including some evaluation predictions)

Some of these elements are automatically tracked by Azure Machine Learning when you work with jobs (including code, environment, and input and output data). However, others, such as models, parameters, and metrics, need to be instrumented by the model builder because they're specific to each scenario.
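
For illustration, the following minimal sketch shows how such manual instrumentation might look with the MLflow SDK. The parameter, metric, and artifact names are hypothetical examples, and the artifact file is assumed to exist locally:

import mlflow

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)        # parameter configuration (hypothetical)
    mlflow.log_metric("accuracy", 0.91)            # evaluation metric (hypothetical)
    mlflow.log_artifact("confusion_matrix.png")    # evaluation visualization (assumed local file)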

In this article, you'll learn how to use MLflow for tracking your experiments and runs in Azure Machine Learning workspaces.

Note

If you want to track experiments running on Azure Databricks or Azure Synapse Analytics, see the dedicated articles Track Azure Databricks ML experiments with MLflow and Azure Machine Learning or Track Azure Synapse Analytics ML experiments with MLflow and Azure Machine Learning.

Benefits of tracking experiments

We highly encourage machine learning practitioners to instrument their experimentation by tracking it, regardless of whether they're training with jobs in Azure Machine Learning or interactively in notebooks. Benefits include:

  • All of your ML experiments are organized in a single place, so you can search and filter experiments to find the information you need and drill down to see exactly what you tried before.
  • Compare experiments, analyze results, and debug model training with little extra work.
  • Reproduce or re-run experiments to validate results.
  • Improve collaboration by seeing what everyone is doing, sharing experiment results, and accessing experiment data programmatically.

Why MLflow

Azure Machine Learning workspaces are MLflow-compatible, which means you can use MLflow to track runs, metrics, parameters, and artifacts with your Azure Machine Learning workspaces. By using MLflow for tracking, you don't need to change your training routines to work with Azure Machine Learning or inject any cloud-specific syntax, which is one of the main advantages of the approach.

See MLflow and Azure Machine Learning for all supported MLflow and Azure Machine Learning functionality including MLflow Project support (preview) and model deployment.

Prerequisites

  • Install the MLflow SDK package mlflow and the Azure Machine Learning plug-in for MLflow, azureml-mlflow.

    pip install mlflow azureml-mlflow
    

    Tip

    You can use the package mlflow-skinny, which is a lightweight MLflow package without SQL storage, server, UI, or data science dependencies. It is recommended for users who primarily need the tracking and logging capabilities without importing the full suite of MLflow features including deployments.

  • You need an Azure Machine Learning workspace. You can create one by following this tutorial.

  • If you're doing remote tracking (tracking experiments running outside Azure Machine Learning), configure MLflow to point to your Azure Machine Learning workspace's tracking URI as explained at Configure MLflow for Azure Machine Learning.
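
    For example, the following sketch shows one way to retrieve the workspace's tracking URI with the Azure Machine Learning SDK v2 and point MLflow to it. It assumes the azure-ai-ml and azure-identity packages are installed; the subscription, resource group, and workspace names are placeholders:

    import mlflow
    from azure.ai.ml import MLClient
    from azure.identity import DefaultAzureCredential

    # Placeholders: replace with your own subscription, resource group, and workspace.
    ml_client = MLClient(
        credential=DefaultAzureCredential(),
        subscription_id="<SUBSCRIPTION_ID>",
        resource_group_name="<RESOURCE_GROUP>",
        workspace_name="<WORKSPACE_NAME>",
    )

    # Get the MLflow tracking URI associated with the workspace and configure MLflow to use it.
    azureml_tracking_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
    mlflow.set_tracking_uri(azureml_tracking_uri)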

Configure the experiment

MLflow organizes information in experiments and runs (in Azure Machine Learning, runs are called jobs). By default, runs are logged to an experiment named Default that's automatically created for you. You can configure the experiment where tracking happens.

When training interactively, such as in a Jupyter notebook, use the MLflow command mlflow.set_experiment(). For example, the following code snippet configures the experiment by name:

import mlflow

experiment_name = 'hello-world-example'
mlflow.set_experiment(experiment_name)
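
Any information you log afterward is recorded under that experiment. As a minimal, hypothetical sketch (the metric name and value are placeholders), logging during a run could look like this:

mlflow.log_metric("sample_metric", 1.0)  # MLflow starts a run automatically if none is active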

Configure the run

Azure Machine Learning tracks any training job in what MLflow calls a run. Use runs to capture all the processing that your job performs.

When working interactively, MLflow starts tracking your training routine as soon as you try to log information that requires an active run; for instance, when you log a metric, log a parameter, or start a training cycle while MLflow's autologging functionality is enabled. However, it's usually helpful to start the run explicitly, especially if you want to capture the total time of your experiment in the Duration field. To start the run explicitly, use mlflow.start_run().

Regardless of whether you started the run manually or not, you eventually need to stop the run to inform MLflow that your experiment run has finished and to mark its status as Completed. To do that, call mlflow.end_run(). We strongly recommend starting runs manually so you don't forget to end them when working in notebooks.

mlflow.start_run()

# Your code

mlflow.end_run()

To help you avoid forgetting to end the run, it's usually helpful to use the context manager paradigm:

with mlflow.start_run() as run:
    # Your code

When you start a new run with mlflow.start_run(), it can be useful to specify the run_name parameter, which translates to the name of the run in the Azure Machine Learning user interface and helps you identify the run more quickly:

with mlflow.start_run(run_name="hello-world-example") as run:
    # Your code

Autologging

You can log metrics, parameters, and files with MLflow manually. However, you can also rely on MLflow's automatic logging capability. Each machine learning framework supported by MLflow decides what to track automatically for you.

To enable automatic logging, insert the following code before your training code:

mlflow.autolog()
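
As an illustrative sketch, assuming scikit-learn is installed, autologging captures the parameters and training metrics of a standard fit call like the one below. The run name is a hypothetical example:

import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

mlflow.autolog()  # enable automatic logging before any training code runs

X, y = load_iris(return_X_y=True)

with mlflow.start_run(run_name="autolog-sketch"):
    # Parameters, metrics, and the fitted model are logged automatically by MLflow.
    model = LogisticRegression(max_iter=200).fit(X, y)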

View metrics and artifacts in your workspace

The metrics and artifacts from MLflow logging are tracked in your workspace. To view them at any time, navigate to your workspace in Azure Machine Learning studio and find the experiment by name.

Screenshot of the metrics view.

Select the logged metrics to render charts on the right side. You can customize the charts by applying smoothing, changing the color, or plotting multiple metrics on a single graph. You can also resize and rearrange the layout as you wish. Once you've created your desired view, you can save it for future use and share it with your teammates using a direct link.

You can also access or query metrics, parameters, and artifacts programmatically by using the MLflow SDK. Use mlflow.get_run() as explained below:

import mlflow

run = mlflow.get_run("<RUN_ID>")

metrics = run.data.metrics
params = run.data.params
tags = run.data.tags

print(metrics, params, tags)

Tip

For metrics, the previous example only returns the last value of a given metric. If you want to retrieve all the values of a given metric, use the MlflowClient.get_metric_history method as explained at Getting params and metrics from a run.
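
For instance, a minimal sketch using MlflowClient (the run ID and metric name are placeholders):

from mlflow.tracking import MlflowClient

client = MlflowClient()

# Returns every recorded value of the metric, not only the last one.
history = client.get_metric_history("<RUN_ID>", "<METRIC_NAME>")

for measurement in history:
    print(measurement.step, measurement.value, measurement.timestamp)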

To download artifacts you've logged, like files and models, use mlflow.artifacts.download_artifacts():

mlflow.artifacts.download_artifacts(run_id="<RUN_ID>", artifact_path="helloworld.txt")

For more details about how to retrieve or compare information from experiments and runs in Azure Machine Learning by using MLflow, see Query & compare experiments and runs with MLflow.

Example notebooks

If you're looking for examples of how to use MLflow in Jupyter notebooks, see our examples repository Using MLflow (Jupyter Notebooks).

Limitations

Some methods available in the MLflow API might not be available when connected to Azure Machine Learning. For details about supported and unsupported operations, see Support matrix for querying runs and experiments.

Next steps