While people like to believe they are rational and logical, the fact is that people are continually under the influence of cognitive biases. Some of these biases can be readily identified, while others operate at a subconscious or unconscious level.

Social media is one place where these biases propagate, and sadly, given the reach and influence social media has, these biases get amplified.

While AI is celebrated as an autonomous technology that can develop without human intervention, it is inherently biased. Here, we highlight algorithmic biases in the context of Instagram, Facebook, TikTok and similar platforms:

Linguistic Algorithm Bias

Many Black Lives Matter (BLM) activists were left frustrated when Facebook flagged or even blocked their accounts as policy violations but didn't do enough to stop posts that were racist against the Black community.

Was this just a technical glitch or a result of the platforms' discriminatory and biased policies and practices? Surprisingly, the answer lies somewhere in between.

Most of the natural language processing (NLP) algorithms running in the backend of social media platforms are trained on datasets of standard English, or of English as spoken by one particular group or community. It is a known problem that dialects and language variation can affect NLP accuracy in deciding what gets marked offensive and what does not.
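To make this failure mode concrete, below is a minimal, hypothetical sketch in Python (using scikit-learn, with entirely invented toy data; this is not any platform's actual moderation pipeline). A classifier trained only on one variety of English has no evidence about words from a dialect it never saw, so its scores on that text are driven by model defaults rather than meaning.

```python
# Toy illustration (not any real platform's system): a toxicity
# classifier trained only on one variety of English misfires on
# dialect text absent from its training data. All data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set written in "standard" English only.
train_texts = [
    "you are a wonderful person",   # benign
    "have a great day friend",      # benign
    "you are trash and worthless",  # offensive
    "get lost you awful idiot",     # offensive
]
train_labels = [0, 0, 1, 1]  # 0 = benign, 1 = offensive

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

# Dialect or in-group phrasing unseen in training maps to an
# all-zero feature vector, so the score reflects only the model's
# learned default, not the meaning of the text. At scale, gaps
# like this surface as false positives for particular groups.
for text in ["finna head out, y'all be good", "you are wonderful"]:
    prob = clf.predict_proba([text])[0][1]
    print(f"{text!r} -> P(offensive) = {prob:.2f}")
```

In real systems the gap is usually subtler: dialect words do appear in training data, but often with skewed human labels, which the model then reproduces at scale, as the studies below suggest.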

In two 2019 computational linguistics studies, researchers discovered that AI systems intended to identify hate speech might actually end up amplifying racial bias.

The tricky part of dialects and language fluidity is that what is considered offensive and what is not is bound to social context. A slur can be offensive in one social setting while the same term is entirely acceptable in another. One study found that tweets written in African-American English, commonly spoken by Black Americans, were up to twice as likely to be flagged as offensive compared to others. Another study of 155,800 tweets found a similarly widespread racial bias against Black Americans' speech.

Further, in mid-2020, Facebook's algorithm deleted accounts of Syrian journalists and activists on the pretext of terrorism when, in reality, they were campaigning against violence and terrorism.

These studies and incidents show how potentially dangerous algorithmic bias can be. False positives may jeopardize people who are already at risk by wrongly categorizing them as offensive, criminal or even terrorist.

Shadow banning

Shadow banning refers to removing or obscuring content from some areas of an online community, without warning, in ways that are not apparent to users.

All social media apps use content moderation and recommendation algorithms. These algorithms essentially learn user preferences by monitoring engagement, so that users are mostly shown posts or brands they are expected to engage with. Additionally, the algorithms are trained on historical data to look out for particular feature sets. However, the goal of these platforms is to make money, and the algorithms are profit-driven. On the downside, users who do not generate engagement or profit will eventually become less visible, as the sketch below illustrates.
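Here is a simplified, hypothetical illustration (not any platform's real ranking code) of that dynamic in Python. The sketch ranks a feed purely by each creator's historical engagement rate: creators who start with fewer impressions score lower, sink in the feed, and then collect even fewer impressions on the next round, producing a feedback loop. All names and numbers are invented.

```python
# Hypothetical sketch of engagement-driven feed ranking.
# Posts are scored by the creator's historical engagement rate,
# so low-engagement creators sink and get even fewer chances
# to be seen -- a self-reinforcing visibility loop.
from dataclasses import dataclass

@dataclass
class Post:
    creator: str
    impressions: int   # times the creator's past posts were shown
    interactions: int  # likes/comments/shares on those posts

def engagement_score(post: Post) -> float:
    """Historical engagement rate as a proxy for future engagement."""
    if post.impressions == 0:
        return 0.0
    return post.interactions / post.impressions

def rank_feed(posts: list[Post]) -> list[Post]:
    # Profit-driven ordering: highest expected engagement first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("mainstream_brand", impressions=10_000, interactions=1_200),
    Post("niche_creator", impressions=500, interactions=40),
    Post("underrepresented_voice", impressions=200, interactions=10),
])
for rank, post in enumerate(feed, start=1):
    print(rank, post.creator, f"{engagement_score(post):.3f}")
# Low-scoring creators land at the bottom; fewer impressions next
# round means even lower measured engagement, so invisibility compounds.
```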

On the surface, this content moderation may appear benign, or even a good recommendation system, but these algorithms have come under scrutiny for suppressing under-represented groups, including Black, Hispanic and plus-sized women, LGBTQ people and other minorities. Because these groups have fewer people amplifying their voices, they generate less engagement. Suppressing the voices of under-represented groups leads to systematic discrimination and the polarization of ideas.

In 2018, after extensive media coverage, Facebook, as part of a legal settlement with civil rights groups, had to disable a tool that allowed advertisers to filter out and exclude multiple ethnic groups, religious groups and other protected classes from seeing housing and other ads.

In the summer of 2020, Instagram removed pictures of the Black plus-size model Nyome Nicholas-Williams as 'explicit images', even though Nyome's body was largely covered, certainly more so than in typical photos of the Kardashians or other thin influencers. Nyome herself argues that the "algorithms on Instagram are biased against women, more so Black women and minorities, especially fat Black women. When a body type is not understood, I think the algorithm goes against what it's taught as the norm, which in the media is white, slim women as the ideal." Similarly, accusations against Instagram for banning the accounts of plus-size women and other minority groups have been on the rise.

Potential fixes to these problems  

Include more people from diverse backgrounds across the entire development process, from algorithm design to model training. Diversity is an issue many organizations still struggle with; as a result, these platforms are built by predominantly homogeneous groups (white, male, American), and potential bias issues are rarely considered during the development or training stage. Algorithms are trained on data that mostly reflects the history and experience of developers drawn from a narrow range of demographic profiles. Thus, the unconscious bias of developers remains embedded in the systems they create.

Stricter government policies could push social media giants to systematically identify and research existing and potentially dangerous biases, and to develop more robust AI models.

Governments could also push for more transparency and public oversight.

Sources of Article

https://toronto.citynews.ca/2021/04/05/the-growing-criticism-over-instagrams-algorithm-bias/
