You are training a binary classification model to support admission decisions for a college degree program. How can you evaluate whether the model is fair and doesn't discriminate based on ethnicity?
Evaluate each trained model with a validation dataset, and use the model with the highest accuracy score. An accurate model is inherently fair.
Remove the ethnicity feature from the training dataset.
Compare disparity between selection rates and performance metrics across ethnicities.
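For context, comparing selection rates and performance metrics across sensitive groups is what Fairlearn's MetricFrame is designed for. The following is a minimal sketch, assuming a recent version of Fairlearn and using tiny made-up predictions purely for illustration:

```python
# Compare selection rate and accuracy across ethnicity groups with Fairlearn.
# The labels, predictions, and "Ethnicity" values below are synthetic examples;
# in practice y_true and y_pred come from your validation split.
import pandas as pd
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

y_true = pd.Series([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = pd.Series([1, 0, 1, 0, 0, 1, 1, 0])
ethnicity = pd.Series(["Group A", "Group A", "Group A", "Group A",
                       "Group B", "Group B", "Group B", "Group B"])

metric_frame = MetricFrame(
    metrics={"selection_rate": selection_rate, "accuracy": accuracy_score},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=ethnicity,
)

print(metric_frame.overall)                              # aggregate metrics
print(metric_frame.by_group)                             # metrics per ethnicity group
print(metric_frame.difference(method="between_groups"))  # disparity: max - min per metric
```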
You have used Fairlearn to evaluate a model in a notebook. You register the model in your Azure Machine Learning workspace. You want to be able to select the model in Azure Machine Learning studio and, from there, view its fairness dashboard to compare disparity in performance metrics. What should you do?
Run an experiment in which you upload the dashboard metrics for the model.
Save the notebook in your Azure Machine Learning workspace.
Use the selection_rate_group_summary function to get the fairness data, and save it as a file dataset in your Azure Machine Learning workspace.
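For context, the approach documented for the classic Azure Machine Learning SDK uses the azureml-contrib-fairness package to upload a dashboard dictionary from within an experiment run. The sketch below assumes that package plus a compatible Fairlearn version; the workspace config, the registered model name, and the validation variables (X_test, y_test, y_pred) are hypothetical placeholders:

```python
# Upload a fairness dashboard dictionary as part of an experiment run so the
# dashboard can be opened from the registered model in Azure ML studio.
from fairlearn.metrics._group_metric_set import _create_group_metric_set
from azureml.contrib.fairness import upload_dashboard_dictionary
from azureml.core import Workspace, Experiment, Model

ws = Workspace.from_config()
registered_model = Model(ws, name="admission-model")  # hypothetical model name

# Build the dashboard dictionary from validation labels, the model's predictions,
# and the sensitive feature; X_test, y_test, and y_pred are placeholders here.
dash_dict = _create_group_metric_set(
    y_true=y_test,
    predictions={registered_model.id: y_pred},
    sensitive_features={"Ethnicity": X_test["Ethnicity"]},
    prediction_type="binary_classification",
)

# Run an experiment that uploads the dashboard metrics for the model.
exp = Experiment(ws, "fairness-dashboard-upload")
run = exp.start_logging()
try:
    upload_dashboard_dictionary(run, dash_dict, dashboard_name="Fairness insights")
finally:
    run.complete()
```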
You plan to use the Grid Search mitigation technique to find an optimal model for a binary classifier that predicts whether a candidate will be successful in an employment role. You want to ensure that the model selects a comparable proportion of candidates from each group in the Gender feature. Which parity constraint should you use?
Demographic parity.
Error rate parity.
Bounded group loss.
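For context, a Grid Search sweep in Fairlearn pairs an estimator with a parity constraint and trains a set of candidate models that trade accuracy against that constraint. The sketch below is illustrative only; the training data, column names, and grid size are made up:

```python
# GridSearch sweep with a DemographicParity constraint, which pushes the
# selection rate to be similar across groups of the sensitive feature.
import pandas as pd
from fairlearn.reductions import GridSearch, DemographicParity
from sklearn.linear_model import LogisticRegression

# Tiny synthetic training set; "Gender" is the (encoded) sensitive feature.
X_train = pd.DataFrame({
    "YearsExperience": [1, 3, 5, 2, 7, 4, 6, 8],
    "Gender":          [0, 1, 0, 1, 0, 1, 0, 1],
})
y_train = pd.Series([0, 1, 1, 0, 1, 0, 1, 1])

sweep = GridSearch(
    estimator=LogisticRegression(solver="liblinear"),
    constraints=DemographicParity(),  # equalize selection rates across Gender
    grid_size=20,
)
sweep.fit(X_train, y_train, sensitive_features=X_train["Gender"])

# Each predictor in the sweep is a candidate model; compare their accuracy and
# selection-rate disparity before choosing one.
candidate_models = sweep.predictors_
print(len(candidate_models))
```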