Train and track models with MLflow and experiments
Anytime you train a model, you want the results to be reproducible. By tracking and logging your work, you can review your work at any time and decide what the best approach is to train a model.
MLflow is an open-source library for tracking and managing your machine learning experiments. In particular, MLflow Tracking is a component of MLflow that logs everything about the model you're training, such as parameters, metrics, and artifacts.
MLflow is already installed when you open a notebook in Microsoft Fabric. To use MLflow to track your models, you only need to import the library (with import mlflow) and start logging.
Create an experiment
Whenever you want to track your work in Microsoft Fabric, you first need to create an experiment. Each time you train a model, it's tracked as an experiment run in your workspace. You can create an experiment using the user interface (UI), or by running the following code:
mlflow.set_experiment("<EXPERIMENT_NAME>")
When you run set_experiment(), you set the given experiment as the active experiment. If an experiment with the provided name doesn't exist, a new experiment is created.
After the experiment is set, you can start tracking your work with MLflow by using:
- Autologging: Automatically logs metrics, parameters, and models without the need for explicit log statements (see the sketch after this list).
- Custom logging: Explicitly log any metrics, parameters, models, or artifacts you create during model training.
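For example, autologging is enabled with a single call before you train. The following is a minimal sketch with scikit-learn; the LogisticRegression estimator and the X_train and y_train variables are illustrative and assumed to be prepared earlier in your notebook:
import mlflow
from sklearn.linear_model import LogisticRegression

# Enable autologging: MLflow captures parameters, metrics, and the model
# for supported frameworks without explicit log statements
mlflow.autolog()

with mlflow.start_run():
    # Training a supported estimator is enough; its hyperparameters and
    # training metrics are logged to the run automatically
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)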
When you want to track any custom parameters, metrics, or artifacts, you can use logging functions like:
- mlflow.log_param(): Logs a single key-value parameter. Use this function for an input parameter you want to log.
- mlflow.log_metric(): Logs a single key-value metric. The value must be a number. Use this function for any output you want to store with the run.
- mlflow.log_artifact(): Logs a file. Use this function for any plot you want to log; save the plot as an image file first.
- mlflow.log_model(): Logs a model. Use this function to create an MLflow model, which may include a custom signature, environment, and input examples.
Tip
Learn more about how to track models with MLflow by exploring the official MLflow documentation.
To use the logging functions in a notebook, start a run with mlflow.start_run() and log any metric you want:
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score
import mlflow
with mlflow.start_run():
    # Train an XGBoost classifier and evaluate it on the test set
    model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
    model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
    y_pred = model.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)

    # Log the accuracy so it's stored with this run
    mlflow.log_metric("accuracy", accuracy)
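The run above logs only a single metric. As a sketch of the other logging functions, the following continuation logs an input parameter, a confusion-matrix plot saved as an image file, and the trained model with the XGBoost flavor; the parameter name, file name, and artifact path are illustrative choices, not required names:
import mlflow
import mlflow.xgboost
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay
from xgboost import XGBClassifier

with mlflow.start_run():
    model = XGBClassifier(eval_metric="logloss")
    model.fit(X_train, y_train)

    # log_param: store an input setting of this run
    mlflow.log_param("eval_metric", "logloss")

    # log_artifact: save the plot as an image file first, then log the file
    ConfusionMatrixDisplay.from_estimator(model, X_test, y_test)
    plt.savefig("confusion_matrix.png")
    mlflow.log_artifact("confusion_matrix.png")

    # log the trained model using the XGBoost flavor
    mlflow.xgboost.log_model(model, "model")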
Whenever you train and track a model, a new experiment run is created within an experiment. When you train multiple models that you want to compare, it's recommended to group them under the same experiment name.
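For example, here's a sketch that trains two variants of the same model under one experiment so their runs end up side by side; the experiment name and the max_depth values are illustrative:
import mlflow
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score

mlflow.set_experiment("experiment-churn-xgboost")  # illustrative name

for max_depth in [3, 6]:
    with mlflow.start_run():
        model = XGBClassifier(max_depth=max_depth, eval_metric="logloss")
        model.fit(X_train, y_train)
        accuracy = accuracy_score(y_test, model.predict(X_test))
        mlflow.log_param("max_depth", max_depth)
        mlflow.log_metric("accuracy", accuracy)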
Retrieve metrics with MLflow in a notebook
To review experiment runs and compare models, you can use the UI or use the MLflow library in a notebook.
You can get the active experiments in the workspace using MLflow (limited here to two results with max_results):
experiments = mlflow.search_experiments(max_results=2)
for exp in experiments:
    print(exp.name)
To retrieve a specific experiment, you can run:
exp = mlflow.get_experiment_by_name(experiment_name)
print(exp)
Tip
Explore the documentation on how to search experiments with MLflow.
Retrieve runs
MLflow allows you to search for runs inside of any experiment. You need either the experiment ID or the experiment name.
For example, to retrieve the runs of an experiment, together with their logged metrics:
mlflow.search_runs(exp.experiment_id)
By default, runs are ordered descending by start_time, which is the time the run was queued in Microsoft Fabric. However, you can change this default by using the parameter order_by.
For example, if you want to sort by start time and only show the two most recent runs:
mlflow.search_runs(exp.experiment_id, order_by=["start_time DESC"], max_results=2)
You can also look for runs with a specific combination of hyperparameters:
mlflow.search_runs(
    exp.experiment_id, filter_string="params.num_boost_round='100'", max_results=2
)
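Because search_runs returns a pandas DataFrame by default, you can also order by a logged metric to find the best run. Here's a sketch, assuming your runs logged an accuracy metric as in the training example earlier:
# Return the run with the highest logged accuracy in this experiment
best_run = mlflow.search_runs(
    exp.experiment_id, order_by=["metrics.accuracy DESC"], max_results=1
)
print(best_run[["run_id", "metrics.accuracy"]])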
Tip
Explore the documentation on how to search runs with MLflow.