
Scientists worldwide have been attempting to imitate the human brain. However, just as humans are exposed to systemic injustices, machines learn human-like stereotypes and cultural norms from sociocultural data, acquiring biases and associations in the process. According to research by Brookings, bias is reflected not only in patterns of language but also in the image datasets used to train computer vision models.

Computer vision models have wide-ranging applications: they are used for security, surveillance, job candidate assessment, border control, information retrieval and much more. Implicit biases manifest in the decision-making processes of these machines, creating lasting impacts on people’s dignity and opportunities.

Nefarious actors might use readily available pre-trained models to impersonate public figures, blackmail, deceive, plagiarize, cause cognitive distortion and sway public opinion. Such machine-generated data poses a significant threat to information integrity in the public sphere.

Even though these machines have been advancing rapidly and can offer opportunities for public-interest use, applying them in a societal context without proper regulation, scientific understanding and public awareness of their safety and societal implications raises serious ethical concerns.

Gender Bias 

In the Brookings research, to understand how gender associations manifest in downstream tasks, the researchers prompted iGPT to complete an image given a woman’s face. iGPT is a self-supervised model trained on large image sets to predict the next pixel value, which allows it to generate images. Fifty-two per cent of the autocompleted images featured bikinis or low-cut tops. In comparison, the faces of men were autocompleted with suits or other career-related attire 42 per cent of the time; only 7 per cent of male autocompleted images featured revealing clothing.
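To make the autocompletion setup concrete, the sketch below shows how pixel-by-pixel image completion works in principle. It is a minimal illustration, not the iGPT implementation: the `next_pixel_distribution` function is a hypothetical stand-in for a trained model, which in reality would return a learned distribution conditioned on all previous pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

def next_pixel_distribution(pixels_so_far):
    """Hypothetical stand-in for a trained autoregressive model.

    A real model such as iGPT would condition on all previously seen
    pixels and return a learned probability distribution over the next
    pixel value. Here we simply return a uniform distribution over the
    256 greyscale intensity levels.
    """
    return np.full(256, 1.0 / 256)

def complete_image(prompt_pixels, height=32, width=32):
    """Autocomplete an image pixel by pixel in raster-scan order.

    `prompt_pixels` is a 1-D array of known pixel values (e.g. the top
    half of a face image); the remaining pixels are sampled one at a
    time, each conditioned on everything generated so far.
    """
    pixels = list(prompt_pixels)
    total = height * width
    while len(pixels) < total:
        dist = next_pixel_distribution(np.array(pixels))
        pixels.append(rng.choice(256, p=dist))
    return np.array(pixels, dtype=np.uint8).reshape(height, width)

# Prompt with the top half of a dummy 32x32 greyscale image.
prompt = rng.integers(0, 256, size=16 * 32)
completed = complete_image(prompt)
print(completed.shape)  # (32, 32)
```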

They also developed the image embedding association test to quantify the implicit associations of the model that might lead to biased outcomes. Their findings revealed that the model contained innocuous associations, such as flowers and musical instruments being more pleasant than insects and weapons.
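As an illustration of how such an embedding association test can be quantified, the sketch below computes a WEAT-style effect size over image embeddings (e.g. flowers versus insects against pleasant versus unpleasant attributes). It is a hedged example, not the Brookings code: the random arrays stand in for features extracted from a pre-trained model, and the helper functions are assumptions for illustration.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """Mean similarity of embedding w to attribute set A minus to set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    """WEAT-style effect size over embeddings.

    X, Y: target embeddings (e.g. flower images vs insect images).
    A, B: attribute embeddings (e.g. pleasant vs unpleasant images).
    Positive values mean X is more associated with A than Y is.
    """
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

# Toy random embeddings standing in for features from a pre-trained model.
rng = np.random.default_rng(0)
X = list(rng.normal(size=(8, 64)))   # e.g. flower images
Y = list(rng.normal(size=(8, 64)))   # e.g. insect images
A = list(rng.normal(size=(8, 64)))   # e.g. pleasant images
B = list(rng.normal(size=(8, 64)))   # e.g. unpleasant images
print(round(effect_size(X, Y, A, B), 3))
```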

However, the model also embeds biased and potentially harmful social group associations related to age, gender, body weight, and race or ethnicity.

Impact of bias 

When these models perpetuate biases that have been maintained through structural and historical inequalities, the social implications are significant. For example, biased job candidate assessment tools perpetuate discrimination against members of historically disadvantaged groups and predetermine applicants’ economic opportunities.

When the administration of justice and policing relies on models that associate certain skin tones, races or ethnicities with negative valence, people of colour wrongfully suffer life-changing consequences.  

State-of-the-art pre-trained computer vision models like iGPT are incorporated into consequential decision-making in complex AI systems. Recent advances in multi-modal AI effectively combine language and vision models. The integration of various modalities in an AI system further complicates the safety implications of cutting-edge technology.  

Although pre-trained AI models are highly costly to build and operate, those made available to the public are freely deployed in commercial and critical decision-making settings, facilitating decisions in well-regulated domains such as the administration of justice, education, the workforce and healthcare.

Steps to tackle bias 

There are primarily three steps to tackle the issue of bias in these models. First, establish which uses of AI are unacceptable. Second, require extra checks and safeguards for high-risk products. Finally, standardize the model improvement process for each modality and multi-modal combination so that safety updates can be issued.

 
