To ensure that humans can identify and manage the detrimental effects of bias in artificial intelligence (AI) systems, researchers at the U.S. National Institute of Standards and Technology (NIST) (https://www.nist.gov/) recommend looking beyond machine learning processes and data to find the sources of these biases, and studying the larger narrative: the societal factors that influence how technology is developed.
This recommendation is at the heart of a revised NIST publication, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), which incorporates the public comments NIST received after releasing the paper's draft version last summer. The document also feeds into NIST's broader effort to develop a risk management framework for trustworthy and responsible AI, which is still a work in progress.
"Context is everything," said NIST's Reva Schwartz, principal investigator for AI bias and one of the report's authors. "AI systems do not operate in isolation. They help people make decisions that directly affect other people's lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public's trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point."
The revised publication acknowledges that while computational and statistical sources of bias remain important, the picture is much larger. For example, it is well established that biased data and sources produce biased results: machine learning software might be trained on a dataset that underrepresents a particular gender or ethnic group.
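As a minimal sketch of how such underrepresentation might be flagged before training (hypothetical code, not taken from the NIST report), the following Python snippet computes each group's share of a dataset and a simple disparity ratio; the field name "gender" and the sample records are illustrative assumptions:

```python
from collections import Counter

def representation_report(records, group_key):
    """Compute each group's share of the dataset to flag underrepresentation."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def disparity_ratio(shares):
    """Ratio of the smallest to largest group share; values near 0 signal imbalance."""
    return min(shares.values()) / max(shares.values())

# Hypothetical training records; in practice these would come from the real dataset.
training_data = [
    {"gender": "female", "label": 1},
    {"gender": "male", "label": 0},
    {"gender": "male", "label": 1},
    {"gender": "male", "label": 0},
]

shares = representation_report(training_data, "gender")
print(shares)                   # {'female': 0.25, 'male': 0.75}
print(disparity_ratio(shares))  # 0.33 -> the female group is underrepresented
```

A check like this only surfaces imbalance in the data itself; as the report argues, it says nothing about the human and systemic biases that shaped how the data was collected in the first place.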
The researchers propose a "socio-technical" approach to mitigating the many biases in AI, recognizing that computational bias stems from datasets that reflect interactions with and inputs from society. Human biases can also shape how institutions operate in ways that disadvantage certain social groups, whether defined by race, ethnicity, neighbourhood, or name, and these produce systemic biases. When human, systemic and computational biases combine, they can form a pernicious mixture, especially when explicit guidance is lacking for addressing the risks of using AI systems.
"Organizations often default to overly technical solutions for AI bias issues," Schwartz said. "But these approaches do not adequately capture the societal impact of AI systems. The expansion of AI into many aspects of public life requires extending our view to consider AI within the larger social system in which it operates."
"It's important to bring in experts from various fields — not just engineering — and to listen to other organizations and communities about the impact of AI," she said.