CANNOT LOG METRICS AND ARTIFACT TO AZUREML

Nguyễn Thanh Tú 170 Reputation points
2024-08-07T04:41:23.68+00:00

Hi,

I am building an AML pipeline for training models, and I want to log the evaluation metrics using the mlflow.log_metrics() method, but no metrics data is logged for some reason.
My code is below:
1. Get the evaluate component
def get_evaluator(env_name: str, env_version: str) -> Component:
    evaluation_component = command(
        name="uci_heart_evaluate",
        version="1",
        display_name="Evaluate XGBoost classifier",
        description="Evaluates the XGBoost classifier using evaluation results",
        type="command",
        inputs={
            "evaluation_results": Input(type=AssetTypes.URI_FOLDER),
        },
        code="src/sdk/classification_conmponents/evaluate",
        command="""python evaluate.py \
            --evaluation_results ${{inputs.evaluation_results}} \
            """,
        environment=f"azureml:{env_name}:{env_version}",
    )
    return evaluation_component

2. Create pipeline
@pipeline()  # type: ignore[call-overload,misc]
def uci_heart_classifier_trainer_scorer(input_data: Input, score_mode: str) -> dict[str, Any]:
    """The pipeline demonstrates how to make batch inference using a model from the Heart Disease Data Set problem, where pre- and post-processing are required as steps. The pre- and post-processing steps can be components reused from the training pipeline."""
    prepared_data = prepare_data(
        data=input_data,
        transformations=Input(type=AssetTypes.CUSTOM_MODEL, path=transformation_model.id),
    )
    trained_model = trainer(
        data=prepared_data.outputs.prepared_data,
        target_column="target",
        register_best_model=True,
        registered_model_name=get_dotenv().model_name,
        eval_size=0.3,
    )
    evaluate_metrics = evaluator(
        evaluation_results=trained_model.outputs.evaluation_results,
    )
    scored_data = scorer(
        data=prepared_data.outputs.prepared_data,
        model=trained_model.outputs.model,
        score_mode=score_mode,
    )
    return {
        "scores": scored_data.outputs.scores,
        "trained_model": trained_model.outputs.model,
    }

3. Metric-logging code in the evaluate component
true_labels = training_data_df['target']
predictions = training_data_df['Labels']

# Calculate metrics
accuracy = accuracy_score(true_labels, predictions)
precision = precision_score(true_labels, predictions, average='weighted')
recall = recall_score(true_labels, predictions, average='weighted')
f1 = f1_score(true_labels, predictions, average='weighted')

# Save metrics
metrics = {
    "accuracy": accuracy,
    "precision": precision,
    "recall": recall,
    "f1_score": f1,
}

mlflow.log_metrics(metrics)

No metrics are logged in Azure ML.

Are there any possible reasons why this error occurs, and how to fix it?

Azure Machine Learning
An Azure machine learning service for building and deploying models.

Accepted answer
  1. dupammi 8,615 Reputation points Microsoft External Staff
    2024-08-08T03:13:31.9766667+00:00

    Hi @Tú Nguyễn

    I'm glad you were able to resolve your issue with the insights I provided earlier. Let me reiterate the solution here so that others experiencing the same problem can easily reference it!

    Question: CANNOT LOG METRICS AND ARTIFACT TO AZUREML

    Solution: To log metrics to Azure ML correctly, ensure there is an active MLflow run context by wrapping the logging code in a with mlflow.start_run(): block in evaluate.py. Also enable MLflow autologging by adding mlflow.autolog() at the beginning of the script.
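
    A minimal sketch of evaluate.py with that fix applied. The metric values below are placeholders standing in for the sklearn scores computed in the question's script; the mlflow calls are the point:

    ```python
    import mlflow

    # Enable autologging before any evaluation work, as suggested above.
    mlflow.autolog()

    # Wrap the logging call in an explicit run context so MLflow has an
    # active run to attach the metrics to. Inside an Azure ML job this
    # picks up (or creates) the job's MLflow run.
    with mlflow.start_run():
        metrics = {
            "accuracy": 0.91,   # placeholders; compute with sklearn in practice
            "precision": 0.90,
            "recall": 0.89,
            "f1_score": 0.90,
        }
        mlflow.log_metrics(metrics)
    ```

    Without the start_run() context (or a run started for you by the job), mlflow.log_metrics() has no run to write to, which matches the symptom of metrics silently not appearing.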

    I hope this helps!

    Thank you again for your time and patience throughout this issue.


    Please don't forget to click Accept Answer and Yes for "Was this answer helpful?" wherever the information provided helps you, as this can benefit other community members.

    1 person found this answer helpful.

0 additional answers
