Enable logging in Azure Machine Learning designer pipelines

In this article, you learn how to add logging code to designer pipelines. You also learn how to view those logs using the Azure Machine Learning studio web portal.

For more information on logging metrics using the SDK authoring experience, see Monitor Azure Machine Learning experiment runs and metrics.

Enable logging with Execute Python Script

Use the Execute Python Script component to enable logging in designer pipelines. Although you can log any value with this workflow, it's especially useful to log metrics from the Evaluate Model component to track model performance across runs.

The following example shows you how to log the mean absolute error of two trained models using the Evaluate Model and Execute Python Script components.

  1. Connect an Execute Python Script component to the output of the Evaluate Model component.

    Connect Execute Python Script component to Evaluate Model component

  2. Paste the following code into the Execute Python Script code editor to log the mean absolute error for your trained model. You can use a similar pattern to log any other value in the designer:

    APPLIES TO: Python SDK azureml v1

    # dataframe1 contains the values from Evaluate Model
    def azureml_main(dataframe1=None, dataframe2=None):
        print(f'Input pandas.DataFrame #1: {dataframe1}')
    
        from azureml.core import Run
    
        run = Run.get_context()
    
        # Log the mean absolute error to the parent run to see the metric in the run details page.
        # Note: 'run.parent.log()' should not be called multiple times because of performance issues.
        # If repeated calls are necessary, cache 'run.parent' as a local variable and call 'log()' on that variable.
        parent_run = run.parent
    
        # Log the left output port result of Evaluate Model. This also works when evaluating only one model.
        parent_run.log(name='Mean_Absolute_Error (left port)', value=dataframe1['Mean_Absolute_Error'][0])
        # Log the right output port result of Evaluate Model. Delete the following line if you connect only one Score Model component to the left port of the Evaluate Model component.
        parent_run.log(name='Mean_Absolute_Error (right port)', value=dataframe1['Mean_Absolute_Error'][1])
    
        return dataframe1,
    

This code uses the Azure Machine Learning Python SDK (v1) to log values. It calls Run.get_context() to get the context of the current component run, then logs values through that run's parent with parent_run.log(). Logging to the parent sends the metrics to the parent pipeline run rather than the component run, so they appear on the pipeline job's details page.
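Beyond log(), the v1 Run object also exposes methods such as log_list() and log_row(). As a minimal sketch (the metric names and values below are illustrative, not part of the example above), you could log lists or rows of related values to the parent run in the same way:

    from azureml.core import Run

    run = Run.get_context()
    parent_run = run.parent  # cache the parent run before making repeated calls

    # Log a single scalar metric to the parent pipeline run.
    parent_run.log(name='Root_Mean_Squared_Error', value=0.42)

    # Log a list of values; the studio renders it as a chart.
    parent_run.log_list(name='Residuals', value=[0.1, -0.3, 0.2])

    # Log a row of related values; the studio renders it as a table.
    parent_run.log_row('Error summary', mae=0.25, rmse=0.42)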

For more information on how to use the Python SDK to log values, see Enable logging in Azure Machine Learning training runs.

View logs

After the pipeline job completes, you can view the Mean_Absolute_Error metric in the studio. You can also retrieve it programmatically, as shown in the sketch after these steps.

  1. Navigate to the Jobs section.

  2. Select your experiment.

  3. Select the job in your experiment you want to view.

  4. Select Metrics.

    View job metrics in the studio
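If you prefer to fetch the metrics programmatically instead of through the studio, the v1 SDK can retrieve them from the completed run. The following is a minimal sketch; the experiment name and run ID are placeholders for your own values, and it assumes a workspace config.json is available locally:

    from azureml.core import Workspace, Experiment, Run

    # Load the workspace from a local config.json (assumption: one exists).
    ws = Workspace.from_config()

    # 'my-designer-experiment' and the run ID are placeholders.
    experiment = Experiment(workspace=ws, name='my-designer-experiment')
    run = Run(experiment=experiment, run_id='<pipeline-run-id>')

    # get_metrics() returns a dict of all metrics logged to this run.
    metrics = run.get_metrics()
    print(metrics.get('Mean_Absolute_Error (left port)'))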

Next steps

In this article, you learned how to add logging to designer pipelines and view the logged metrics in the studio. For next steps, see these related articles: