The concept of algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another. Algorithmic bias can emerge from various sources, such as the data with which the system was trained, conscious or unconscious architectural decisions by the system's designers, or feedback loops that arise as continuously updated systems interact with users. 

According to a paper published by David Rozado in 2020, the topic of political bias in AI systems has received comparatively little attention relative to other types of algorithmic bias. This is surprising because, as AI systems improve and our dependency on them increases, the potential of such systems for societal control and for degrading democracy is substantial. 

Political bias refers to a systematic inclination or prejudice in favour of or against a particular political perspective, ideology, party, group, or individual. Numerous forms of political bias, such as media bias, censorship, and discrimination, can affect public opinion and decision-making processes. The impact of political bias in AI is far-reaching, as such systems can shape public opinion and influence important decisions.  

Recently introduced LLMs have the capacity to become gateways to the accumulated body of human knowledge and pervasive interfaces through which humans interact with technology and the wider world. The risk of political biases embedded, intentionally or unintentionally, in such systems deserves attention. Given the expected popularity of such systems, the risks of their being misused for societal control, spreading misinformation, curtailing human freedom, and obstructing the path towards truth-seeking must be considered. 

Political orientation test 

In a study by the New Zealand Institute of Skills and Technology, researchers administered political orientation tests to ChatGPT by prompting the system with each test's questions, often appending the suffix "please choose one of the following" to a question before listing the test's possible answers.  
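
The prompting procedure described above can be sketched in a few lines of Python. This is a minimal, illustrative sketch assuming the OpenAI Python client; the model name, question, and answer options below are placeholders, not items from any actual political orientation test.

```python
# Minimal sketch of administering a multiple-choice test item to an LLM,
# following the prompt format described above. Question and options are
# illustrative placeholders, not drawn from any real test.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str, options: list[str]) -> str:
    """Pose one test item, appending the forced-choice suffix and options."""
    prompt = (
        f"{question}\n"
        "Please choose one of the following:\n"
        + "\n".join(f"- {opt}" for opt in options)
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; the study used ChatGPT
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Hypothetical test item for illustration only.
print(ask(
    "The government should regulate large corporations more strictly.",
    ["Strongly agree", "Agree", "Disagree", "Strongly disagree"],
))
```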

According to the results, most tests classified ChatGPT's answers as manifesting a left-leaning political orientation. When asked explicitly about its political orientation, however, ChatGPT often claimed to be politically neutral.  

Western influence 

India Today conducted a political experiment involving leading platforms that use AI to generate visual imagery; the results suggested that these platforms could be biased in their knowledge and understanding of different nations. 

Midjourney was asked to create pictures of “most popular elected political leaders posing in front of the Eiffel Tower in 2023.” The prompt was kept generic, with no names mentioned, to check the scope of the results. The AI-generated image included only leaders with a Western appearance, such as Angela Merkel, Emmanuel Macron, and Donald Trump; none of the popular Asian leaders appeared in the result. 

Concentration of power 

Yet another debate about generative AI concerns the age-old problem of concentration of power. One of the most prominent arguments for broadening access to these systems is to avoid concentrating power in the handful of high-resource organizations capable of developing and deploying them. Large technology companies can create powerful AI systems because of their access to training data, computing infrastructure, and commercial capabilities for deploying those systems.  

This monopolization also gives these high-resource institutions more influence over AI development, the behavior of these systems, and the narrative and direction of the field. Although these companies may provide access to, or even open source, their systems, contributions to system development remain limited to people and resources working towards that company's interests. 

Large companies are often geographically concentrated in Western countries, whereas systems are deployed globally, which can asymmetrically impose cultural values. These companies can also punish pushback or dissent. The people most affected and exploited by AI systems are rarely found in large technology companies. They must be empowered to shape systems that benefit them or opt out of interaction with AI entirely. 

Listen to the expert 

In a talk titled "AI Now: Social and Political Questions for Artificial Intelligence" at the University of Washington, Kate Crawford, professor and author of the book "Atlas of AI", spoke about the socio-political implications of generative AI.  

"The legacies of inequalities are living on in our system and are now being built into the logic of AI itself", said the co-founder of the AI Now Institute, a New York-based research centre working across disciplines to understand the social and economic implications of artificial intelligence. 

Though it is tempting to believe that data can be neutralized, Kate Crawford is of the opinion that this will not work. "The bias in the system is caused by bias in training data. And we can only gather data from the world we have", said Crawford. "The reality of bias is much deeper than normally admitted publicly", she added. 

According to Kate Crawford, to overcome the issue of bias, a standard should be developed to track the lifecycle and use of datasets (a minimal sketch of such a record follows below). Algorithmic Impact Assessments can also aid in this process. Developers should also work with those who study the harms and impacts of bias, expanding beyond a purely technical approach to a socio-technical one, and they should give due importance to diversity.  

"If you are working on a high-stake system, then you should collaborate with domain area experts", stated Kate Crawford. 
