
While industries and academia alike have benefited from the rise of machine learning, one major criticism of this transformative technology has been the unfair biases such systems can produce as a result of the datasets and algorithms they are built on.

According to Google Research’s Catherina Xu and Tulsee Doshi, one of Google’s AI Principles, “Avoid creating or reinforcing unfair bias,” outlines the company’s commitment to reducing unjust biases and minimizing their impacts on people.

As part of this commitment, earlier this month at TensorFlow World, Google released a beta version of Fairness Indicators, a suite of tools that enable regular computation and visualization of fairness metrics for binary and multi-class classification, helping teams take a first step towards identifying unjust impacts.
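
At release, Fairness Indicators is built on top of TensorFlow Model Analysis (TFMA). The sketch below shows roughly how the beta-era API wires the indicators in as a post-export metric; MODEL_PATH, DATA_PATH, and the 'gender' slicing column are placeholder assumptions, and the exact module paths may differ in later TFMA releases.

```python
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.view import widget_view

# Placeholder paths (assumptions): an exported TFMA EvalSavedModel and a
# TFRecord file of evaluation examples.
MODEL_PATH = '/path/to/eval_saved_model'
DATA_PATH = '/path/to/eval_data.tfrecord'

# Attach Fairness Indicators as a post-export metric, evaluated at
# several decision thresholds at once.
eval_shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path=MODEL_PATH,
    add_metrics_callbacks=[
        tfma.post_export_metrics.fairness_indicators(
            thresholds=[0.25, 0.5, 0.75]),
    ])

# Slice the evaluation: an overall baseline plus per-group slices on a
# sensitive feature ('gender' is a placeholder column name).
slice_spec = [
    tfma.slicer.SingleSliceSpec(),
    tfma.slicer.SingleSliceSpec(columns=['gender']),
]

eval_result = tfma.run_model_analysis(
    eval_shared_model=eval_shared_model,
    data_location=DATA_PATH,
    slice_spec=slice_spec)

# Render the interactive Fairness Indicators widget in a notebook.
widget_view.render_fairness_indicator(eval_result)
```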

“Fairness Indicators can be used to generate metrics for transparency reporting, such as those used for model cards, to help developers make better decisions about how to deploy models responsibly. Because fairness concerns and evaluations differ case by case, we also include in this release an interactive case study with Jigsaw’s Unintended Bias in Toxicity dataset to illustrate how Fairness Indicators can be used to detect and remediate bias in a production machine learning (ML) model, depending on the context in which it is deployed,” write Xu and Doshi in a blog post.

“The Fairness Indicators tool suite also enables computation and visualization of commonly-identified fairness metrics for classification models, such as false positive rate and false negative rate, making it easy to compare performance across slices or to a baseline slice. The tool computes confidence intervals, which can surface statistically significant disparities, and performs evaluation over multiple thresholds. In the UI, it is possible to toggle the baseline slice and investigate the performance of various other metrics,” they added. 
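
The two headline metrics in that quote are easy to state: at a fixed threshold, the false positive rate is FP / (FP + TN) and the false negative rate is FN / (FN + TP), each computed per slice and compared against a baseline. The self-contained sketch below illustrates that per-slice comparison in plain Python; the toy labels, scores, group names, and the 0.5 threshold are illustrative assumptions, not outputs of the tool.

```python
import numpy as np

def fpr_fnr(labels, scores, threshold=0.5):
    """False positive rate and false negative rate at a fixed threshold."""
    preds = scores >= threshold
    fp = np.sum(preds & (labels == 0))
    tn = np.sum(~preds & (labels == 0))
    fn = np.sum(~preds & (labels == 1))
    tp = np.sum(preds & (labels == 1))
    return fp / (fp + tn), fn / (fn + tp)

# Toy data (assumed): binary labels, model scores, and a sensitive attribute.
labels = np.array([0, 1, 0, 1, 1, 0, 1, 0])
scores = np.array([0.2, 0.9, 0.6, 0.4, 0.8, 0.1, 0.7, 0.55])
groups = np.array(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'])

# Compare each slice against the overall (baseline) rates.
print('overall', fpr_fnr(labels, scores))
for g in np.unique(groups):
    mask = groups == g
    print('group', g, fpr_fnr(labels[mask], scores[mask]))
```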

Google Research states that Fairness Indicators is only a first step: the team plans to expand vertically, by supporting more metrics, such as ones that evaluate classifiers without thresholds, and horizontally, by creating remediation libraries that use methods such as active learning and min-diff. However, with the exponential growth in data and the rapid advances in ML and RL systems, it remains to be seen how these fairness indicators will address the larger problem of AI bias.
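
For a sense of what a threshold-free metric looks like, ROC AUC summarizes a classifier's ranking quality across all possible thresholds, so per-slice AUC gaps can flag disparities without committing to a single operating point. The sketch below uses scikit-learn's roc_auc_score purely for illustration; it is an assumption that this mirrors the metrics Google intends to add, and the toy data is made up.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy data (assumed): binary labels, model scores, and a sensitive attribute.
labels = np.array([0, 1, 0, 1, 1, 0, 1, 0])
scores = np.array([0.2, 0.9, 0.6, 0.4, 0.8, 0.1, 0.7, 0.55])
groups = np.array(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'])

# AUC needs no decision threshold, so per-slice gaps in AUC flag
# ranking disparities independent of any single cutoff.
print('overall AUC', roc_auc_score(labels, scores))
for g in np.unique(groups):
    mask = groups == g
    print('group', g, 'AUC', roc_auc_score(labels[mask], scores[mask]))
```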
