AI systems have the potential to make human life easier by helping people make better-informed decisions. Over the past few years, AI has been applied in areas such as job recruitment, healthcare, student admissions, and criminal risk assessment. Yet there is an ongoing debate about how accurate and fair these systems really are.
Amazon stopped using its experimental hiring algorithm after discovering that it favored applicants whose resumes contained words such as 'executed' or 'captured', which appeared mostly on men's resumes. More recently, the Black Lives Matter movement pushed companies such as IBM, Amazon, and Microsoft to stop selling facial recognition software to police, or at least pause its sale. Authorities justify such automated surveillance on security grounds, and it is indeed one of the most powerful surveillance tools ever invented, but it is less accurate at identifying women and minorities, and misidentification can negatively impact people of color.
Removing bias from AI systems is difficult because humans train them, and our own biases can make their way into these systems with harmful results. Bias can creep in through training data that reflects social or cultural inequities; even if we remove variables such as gender or race, a model can remain biased because other features act as proxies for them, as the sketch below illustrates. Sometimes the bias arises long before the data is even collected. Bias typically enters training data in two ways: either the data collected reflects existing prejudices, or historical data is reused without correction. While collecting data, we need to take different people's perspectives into account, and the team collecting or preparing the data should itself be diverse in order to reduce bias. Women and people of color are underrepresented in the field of AI, and thus their opinions and perspectives are often ignored by the majority.
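A minimal sketch of that proxy problem, using synthetic data and hypothetical feature names: even after the protected attribute is dropped from the features, a correlated feature still encodes it.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Synthetic protected attribute (0/1) and a feature that correlates
# with it, e.g. due to historical hiring patterns.
gender = rng.integers(0, 2, n)
years_in_workforce = rng.normal(8, 2, n) + 3 * gender

df = pd.DataFrame({"gender": gender,
                   "years_in_workforce": years_in_workforce})

# "Blind" the model by dropping the protected attribute...
features = df.drop(columns=["gender"])

# ...but the remaining feature still carries the same signal:
print(df["years_in_workforce"].corr(df["gender"]))  # roughly 0.6
```

A model trained on the remaining features can therefore still learn to discriminate by gender, because years_in_workforce stands in for it.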
Another major concern is how to determine how fair an AI system is, or how to measure its level of fairness; the sketch below shows one common metric. A system designed for a particular region based on its demographics might not work effectively in other regions of the same country.
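Fairness can be quantified in several competing ways. As one illustration, here is a sketch of demographic parity, which compares positive-prediction rates across groups; the function name and the data are hypothetical.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between two groups.

    One of many competing fairness definitions; a gap near 0 means
    the model selects both groups at similar rates.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return rate_b - rate_a

# Hypothetical predictions from a hiring model:
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(preds, groups))  # 0.25 - 0.75 = -0.5
```

Note that satisfying this metric can conflict with other definitions of fairness, such as equal error rates across groups, which is part of why no single measure settles the question.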
Researchers all over the world are working on the problem of bias in AI using several approaches: algorithms that detect and mitigate bias in training data (one such technique is sketched below), discussions that hash out different definitions of fairness, ways for humans and machines to work together to remove bias, collecting more training data, and involving more people from diverse backgrounds in AI. A diverse AI team is better able to identify, anticipate, and reduce bias, which means we need to invest in education and be more inclusive of underrepresented communities.
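As an example of the first approach, here is a minimal sketch of reweighing in the spirit of Kamiran and Calders: each (group, label) cell is weighted so that group membership and outcome look statistically independent, and the weights can then be passed to a classifier that accepts per-sample weights. This sketch assumes every (group, label) combination occurs at least once in the data.

```python
import numpy as np

def reweigh(group, label):
    """Compute instance weights so group and label appear independent.

    weight(g, y) = P(group=g) * P(label=y) / P(group=g, label=y)
    Assumes every (group, label) combination occurs at least once.
    """
    group, label = np.asarray(group), np.asarray(label)
    weights = np.empty(len(label))
    for g in np.unique(group):
        for y in np.unique(label):
            mask = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            weights[mask] = expected / mask.mean()
    return weights

# Hypothetical usage with a scikit-learn classifier:
# clf = LogisticRegression()
# clf.fit(X, y, sample_weight=reweigh(group, y))
```

Underrepresented (group, label) cells receive weights above 1 and overrepresented ones below 1, which counteracts the historical skew without altering the data itself.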