Track model development using MLflow

This article contains examples of tracking model development in Azure Databricks. You can log and track ML and deep learning models automatically with MLflow autologging or manually with the MLflow API.

Model tracking & MLflow

The model development process is iterative, and it can be challenging to keep track of your work as you develop and optimize a model. In Azure Databricks, you can use MLflow tracking to record the model development process, including the parameter settings or combinations you have tried and how they affected the model's performance.

MLflow tracking uses experiments and runs to log and track your ML and deep learning model development. A run is a single execution of model code. During an MLflow run, you can log model parameters and results. An experiment is a collection of related runs. In an experiment, you can compare and filter runs to understand how your model performs and how its performance depends on the parameter settings, input data, and so on.
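
As a minimal sketch of these concepts with the MLflow Python API (the experiment path, parameter name, and metric value here are illustrative, not from the example notebooks):

```python
import mlflow

# An experiment groups related runs; the workspace path is a placeholder.
mlflow.set_experiment("/Users/someone@example.com/my-experiment")

# A run is a single execution of model code.
with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("n_estimators", 100)  # a parameter setting you tried
    mlflow.log_metric("rmse", 0.72)        # how that setting performed
```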

The notebooks in this article provide simple examples that can help you quickly get started using MLflow to track your model development. For more details on using MLflow tracking in Azure Databricks, see Track ML and deep learning training runs.

Note

MLflow tracking does not support jobs submitted with spark_submit_task in the Jobs API. Instead, you can use MLflow Projects to run Spark code.
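
As a hedged sketch of that alternative, you can launch an MLflow Project on Databricks with mlflow.projects.run; the project URI and cluster spec filename below are placeholders:

```python
import mlflow

# Run an MLflow Project on Databricks instead of submitting Spark code
# with spark_submit_task. URI and cluster spec are placeholders.
mlflow.projects.run(
    uri="https://github.com/<org>/<project>",  # or a local project directory
    backend="databricks",
    backend_config="cluster-spec.json",  # JSON file describing the cluster
)
```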

Use autologging to track model development

MLflow can automatically log training code written in many ML and deep learning frameworks. This is the easiest way to get started using MLflow tracking.

This example notebook shows how to use autologging with scikit-learn. For information about autologging with other Python libraries, see Automatically log training runs to MLflow.
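
As a rough illustration of the pattern the notebook demonstrates (not the notebook itself), you enable autologging once and then train as usual; the dataset and estimator here are arbitrary choices:

```python
import mlflow
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Enable autologging for supported libraries; parameters, metrics,
# and the fitted model are logged without explicit logging calls.
mlflow.autolog()

X, y = load_diabetes(return_X_y=True)

with mlflow.start_run():
    # Autologging captures the estimator's parameters and training metrics.
    RandomForestRegressor(n_estimators=100, max_depth=5).fit(X, y)
```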

MLflow autologging Python notebook

Get notebook

Use the logging API to track model development

This notebook illustrates how to use the MLflow logging API. The logging API gives you finer control over which metrics are recorded and lets you log additional artifacts, such as tables or plots.

This example notebook shows how to use the Python logging API. MLflow also has REST, R, and Java APIs.
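
As an illustrative sketch of explicit logging (the parameter, metric, and figure below are placeholders, not the notebook's contents):

```python
import matplotlib.pyplot as plt
import mlflow

with mlflow.start_run():
    # Log parameters and metrics explicitly.
    mlflow.log_param("alpha", 0.5)
    mlflow.log_metric("mae", 0.81)

    # Log an additional artifact, such as a plot.
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, 1])
    mlflow.log_figure(fig, "line.png")
```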

MLflow logging API Python notebook

Get notebook

End-to-end example

This tutorial notebook presents an end-to-end example of training a model in Azure Databricks. It covers loading data, visualizing the data, setting up parallel hyperparameter optimization, and using MLflow to review the results, register the model, and perform inference on new data with the registered model in a Spark UDF.
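
As one hedged example of that final inference step, scoring new data with a registered model as a Spark UDF might look like the following sketch; the model name, version, and table name are placeholders:

```python
import mlflow
from pyspark.sql.functions import struct

# Load a registered model as a Spark UDF ("spark" is predefined in
# Databricks notebooks); the model name and version are placeholders.
predict_udf = mlflow.pyfunc.spark_udf(spark, model_uri="models:/my-model/1")

# Apply the model to new data in parallel across the cluster.
new_data = spark.table("my_catalog.my_schema.new_data")  # placeholder table
scored = new_data.withColumn("prediction", predict_udf(struct(*new_data.columns)))
```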

Requirements

Databricks Runtime ML

Example notebook

If your workspace is enabled for Unity Catalog, use this version of the notebook:

Use scikit-learn with MLflow integration on Databricks (Unity Catalog)

Get notebook

If your workspace is not enabled for Unity Catalog, use this version of the notebook:

Use scikit-learn with MLflow integration on Databricks

Get notebook