Azure Automated ML (interface): how do models created from an Automated ML experiment handle imbalanced data?

J. Jeong 61 Reputation points
2020-06-29T05:15:18.737+00:00

I have run automated ML experiments with imbalanced data (10:1, 20:1, sometimes 30:1) and deployed the best models which all showed fantastic results.

When I looked up the link https://learn.microsoft.com/en-us/azure/machine-learning/concept-manage-ml-pitfalls#identify-models-with-imbalanced-data, it says Azure Automated ML can properly handle imbalance of up to 20:1.
I started to wonder where the ratio 20:1 came from.

As far as I understand, Azure Automated ML doesn't use upsampling, downsampling or resampling; instead it relies on a column of weights to make a class more or less important, and on a performance metric that deals better with imbalanced data.
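For context, this is roughly how I supply that weight column today. A minimal sketch with the v1 Python SDK; the column name, data file, and weight values are my own choices, not an official recommendation:

```python
# Minimal sketch (Azure ML Python SDK v1): pass a per-sample weight column to
# Automated ML instead of resampling, and pick an imbalance-aware primary metric.
import pandas as pd
from azureml.train.automl import AutoMLConfig

train_df = pd.read_csv("train.csv")            # hypothetical training data
counts = train_df["label"].value_counts()
# Weight each row by the inverse frequency of its class, so the minority counts more.
train_df["sample_weight"] = train_df["label"].map(lambda c: len(train_df) / counts[c])

automl_config = AutoMLConfig(
    task="classification",
    training_data=train_df,                    # in practice usually a registered TabularDataset
    label_column_name="label",
    weight_column_name="sample_weight",        # the "column of weights" mentioned above
    primary_metric="AUC_weighted",             # metric that copes better with imbalance
)
```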

  • Does this 20:1 come from some theory, or from tons of experiments already conducted?

Azure Automated ML shows the result with a warning when I use 30:1 (or more) imbalanced data, but I still wonder why the threshold is 20:1.


Accepted answer
  1. Ramr-msft 17,631 Reputation points
    2020-07-06T10:22:39.317+00:00

    In AutoML we use a 5% minority class as the threshold to classify a dataset as imbalanced or not. This is a heuristic, and is one guideline produced in the Guardrails to answer the question "At x% threshold level, is the dataset balanced?". It is not possible to classify imbalance absolutely in all cases: depending on a dataset's size and distribution, 5%, 10%, or an even higher minority share may still mean imbalance, whereas for very large datasets the minority class may have enough training samples for the model to learn and reach a reasonable imbalance-appropriate metric such as weighted AUC or balanced accuracy. The current Guardrails therefore serve the goal of surfacing "substantial" imbalance to the user, so the user can take measures such as the following:

    • When the user knows (either from their own knowledge of the data or from the Guardrails) that there is imbalance, Automated ML provides an option in the AutoML config to supply sample weights: a user-specified array in which each sample is given its own weight. That way the user can weight the minority class more heavily when submitting the data to the AutoML config (see the sketch below). We will soon provide a class-weighting option within AutoML that is activated automatically when imbalance is detected.
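    To make the detection heuristic and the weight array concrete, here is a rough sketch; the 5% threshold check and the scikit-learn helper are shown only for illustration and are not AutoML's internal code:

```python
import numpy as np
from sklearn.utils.class_weight import compute_sample_weight

def is_imbalanced(y, minority_threshold=0.05):
    """Flag a dataset when the rarest class holds 5% or less of all samples."""
    _, counts = np.unique(y, return_counts=True)
    return counts.min() / counts.sum() <= minority_threshold

y_train = np.array([0] * 95 + [1] * 5)              # 95:5 split, i.e. a 19:1 ratio
print(is_imbalanced(y_train))                        # True

# A user-supplied weight array: inversely proportional to class frequency,
# so minority-class rows count more during training.
sample_weight = compute_sample_weight(class_weight="balanced", y=y_train)
print(sample_weight[:3], sample_weight[-3:])         # ~0.53 for class 0, 10.0 for class 1
```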


1 additional answer

  1. Ramr-msft 17,631 Reputation points
    2020-07-01T06:12:08.183+00:00

    @JiinJeong-9636 The following is the road-map for this. The ratio for detecting imbalance has been updated to 1:5 rather than 1:20, meaning that AutoML will identify a dataset as imbalanced when the number of samples in the least common class is equal to or fewer than one fifth of the number of samples in the most common class. This should be available within a week. The reasoning is as follows:
    The ratio of 1:20 only detects very severe imbalance, whereas we've noticed, both in our experiments and in the literature and industry practice, that even treating mild imbalance (something like 1:5) can produce better results.
    The ratio compares the least common class to the most common class, as opposed to the least common class to all the samples, because the former gives more consistent results empirically.
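    To illustrate the difference between the two definitions, a small sketch with made-up class counts:

```python
import numpy as np

counts = np.array([500, 400, 100])            # made-up class counts for three classes

ratio_vs_most = counts.min() / counts.max()   # 100 / 500  = 0.20 -> right at the 1:5 threshold
ratio_vs_all  = counts.min() / counts.sum()   # 100 / 1000 = 0.10 -> looks twice as imbalanced
print(ratio_vs_most, ratio_vs_all)
```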
    The solution to tackle imbalanced data is to apply weights internally to the dataset, in inverse proportion to the number of samples belonging to each class. Here's how we do it:
    If the 1:5 threshold is not crossed, we trigger a message via the Guardrails saying "PASSED: No Class Imbalance". If the threshold is crossed, i.e. imbalance is detected, we run an experiment with sub-sampled data and check whether the above solution of applying weights for class balancing leads to better results.
    If the experiment does not lead to better results, we don't apply the weights and trigger a Guardrails message saying "ALERT: Class Imbalance is present". If the experiment does lead to better results, we apply the weights, fixing the imbalance, and trigger a Guardrails message saying "DONE: Class Imbalance was fixed". A rough sketch of this decision flow follows below.
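    The sketch below strings these steps together; the function names, the sub-sampling hook, and the score comparison are illustrative assumptions, not AutoML's internal implementation:

```python
import numpy as np
from sklearn.utils.class_weight import compute_sample_weight

IMBALANCE_RATIO = 1 / 5   # the 1:5 threshold described above

def class_imbalance_guardrail(y_train, train_and_score):
    """train_and_score(sample_weight) is a hypothetical hook that trains on
    sub-sampled data and returns a validation score (e.g. weighted AUC)."""
    _, counts = np.unique(y_train, return_counts=True)
    if counts.min() / counts.max() > IMBALANCE_RATIO:
        return None, "PASSED: No Class Imbalance"

    # Imbalance detected: try weights in inverse proportion to class frequency.
    weights = compute_sample_weight(class_weight="balanced", y=y_train)
    if train_and_score(sample_weight=weights) > train_and_score(sample_weight=None):
        return weights, "DONE: Class Imbalance was fixed"
    return None, "ALERT: Class Imbalance is present"
```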
    A documentation update on handling imbalanced data is in progress for the document linked in the question.