Mitigating Unfair Bias in ML Models with the MinDiff Framework

By Google AI Blog - 2020-11-29

Description

Posted by Flavien Prost, Senior Software Engineer, and Alex Beutel, Staff Research Scientist, Google Research. The responsible research and...

Summary

  • One broad area where ML must be applied responsibly is classification: systems that sort data into labeled categories.
  • Unfair biases in classifiers: to illustrate how MinDiff can be used, consider an example of a product policy classifier that is tasked with identifying and removing text comments that could be considered toxic.
  • One of the most common metrics is equality of opportunity, which, in our example, means measuring and seeking to minimize the difference in false positive rate (FPR) across groups; a small worked example follows this list.
  • Because any decrease in accuracy caused by the mitigation approach could result in the moderation model allowing more toxic comments, striking the right balance is crucial (see the MinDiff training sketch below).
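
To make the equality-of-opportunity metric concrete, here is a minimal Python sketch that computes the FPR gap between comments referencing a given identity group and all other comments. The arrays and the group split are hypothetical, purely for illustration; they are not data from the original post.

    import numpy as np

    def false_positive_rate(y_true, y_pred):
        # FPR = FP / (FP + TN): the fraction of truly non-toxic comments
        # that the classifier nevertheless flags as toxic.
        negatives = (y_true == 0)
        return float(np.mean(y_pred[negatives] == 1))

    # Hypothetical labels (1 = toxic) and predictions for comments that
    # reference the identity group vs. all remaining comments.
    y_true_group = np.array([0, 0, 1, 0]); y_pred_group = np.array([1, 0, 1, 1])
    y_true_rest  = np.array([0, 0, 1, 0]); y_pred_rest  = np.array([0, 0, 1, 0])

    gap = abs(false_positive_rate(y_true_group, y_pred_group)
              - false_positive_rate(y_true_rest, y_pred_rest))
    print(f"FPR gap: {gap:.2f}")  # equality of opportunity seeks to drive this to 0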
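And here is a minimal training sketch of applying MinDiff itself, assuming the MinDiff API in the tensorflow-model-remediation library (MinDiffModel, MMDLoss, pack_min_diff_data). The toy data, the model architecture, and the loss_weight value are illustrative assumptions. MinDiff adds a penalty on the distance between the model's score distributions on the two comment sets, and loss_weight is the knob for the accuracy/fairness balance discussed above.

    import numpy as np
    import tensorflow as tf
    from tensorflow_model_remediation import min_diff

    def make_ds(n, seed):
        # Toy stand-in for featurized comments: 20-dim features, binary labels.
        rng = np.random.default_rng(seed)
        x = rng.normal(size=(n, 20)).astype("float32")
        y = (rng.random(n) < 0.3).astype("float32")
        return tf.data.Dataset.from_tensor_slices((x, y)).batch(32)

    main_ds = make_ds(1024, 0)                      # ordinary training data
    sensitive_ds = make_ds(256, 1).repeat()         # non-toxic comments mentioning the group
    nonsensitive_ds = make_ds(256, 2).repeat()      # non-toxic comments that do not

    original_model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    # Wrap the model so training also penalizes the MMD distance between
    # its score distributions on the two comment sets.
    md_model = min_diff.keras.MinDiffModel(
        original_model=original_model,
        loss=min_diff.losses.MMDLoss(),
        loss_weight=1.0)  # larger values trade accuracy for a smaller FPR gap

    md_model.compile(optimizer="adam", loss="binary_crossentropy")

    train_ds = min_diff.keras.utils.pack_min_diff_data(
        original_dataset=main_ds,
        sensitive_group_dataset=sensitive_ds,
        nonsensitive_group_dataset=nonsensitive_ds)

    md_model.fit(train_ds, epochs=1)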


Topics

  1. UX (0.2)
  2. NLP (0.17)
  3. Backend (0.16)

Similar Articles

Six Levels of Auto ML. TL;DR

By Medium - 2020-02-24

In this blog post we propose a taxonomy of 6 levels of Auto ML, similar to the taxonomy used for self-driving cars. Here are the 6 levels: ● Level 3: Automatic (technical) feature engineering and…

Time-Series Forecasting with Google BigQuery ML

By Medium - 2021-02-16

If you have worked with any kind of forecasting model, you will know how laborious it can be at times, especially when trying to predict multiple variables. From identifying if a time-series is…