Social media companies decide what counts as "problematic" content and how to remove it. Some choose to moderate hashtags, blocking the results of specific tag searches and issuing public service announcements (PSAs) when users search for troubling terms. The hashtag has thus become an indicator of where problematic content can be found, but this has produced only a limited understanding of how such content circulates. Using pro-eating disorder (pro-ED) communities as a case study, this article explores how users circumvent hashtag moderation.

At the GPAI Summit 2023, in a session titled "Is there a way to democratize harmful content moderation in social media?," the panelists discussed the role of AI in today's social media infrastructure and ways to moderate potentially harmful content.

Social media is an instructive place to look. By one estimate cited in the session, 59% of the world's population uses social media, and the average user spends over 2.5 hours a day on it. Recommender systems decide what appears in each user's feed: they learn what a user likes and then serve more of the same. Harmful content classifiers are used to moderate content on platforms; some content moderation is essential, and human moderators need help from automated tools.
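As a rough illustration of the "more of the same" loop described above, the sketch below ranks candidate posts by their similarity to a user's past engagements. The embeddings, scoring scheme, and function names are illustrative assumptions, not any platform's actual algorithm.

```python
import numpy as np

def rank_feed(history_vecs, candidate_vecs, top_k=3):
    """Rank candidate posts by cosine similarity to the user's
    average engagement history -- i.e. serve more of the same."""
    profile = history_vecs.mean(axis=0)              # the user's "taste" vector
    profile /= np.linalg.norm(profile)
    scores = candidate_vecs @ profile / np.linalg.norm(candidate_vecs, axis=1)
    return np.argsort(scores)[::-1][:top_k]          # highest similarity first

# Toy 3-dimensional content embeddings (hypothetical).
history = np.array([[0.9, 0.1, 0.0],
                    [0.8, 0.2, 0.1]])                # posts the user engaged with
candidates = np.array([[0.85, 0.10, 0.05],           # similar to history
                       [0.10, 0.90, 0.20],
                       [0.05, 0.10, 0.95]])          # very different
print(rank_feed(history, candidates))                # [0 1 2]: similar post wins
```

A loop like this is what makes moderation decisions consequential: whatever a classifier lets through, the recommender can amplify.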

Social media platforms are also going to be a key mechanism for the dissemination of AI-generated content. AI content generators are fast becoming infrastructure in their own right.

Public involvement

The public arguably needs to be more involved in the emerging AI infrastructure. For other types of infrastructure, this is quite normal. Citizens need to know more about what AI systems are doing, and they need to participate in the design of some of them.

Democracy is a valuable institution, but we use it in only a few parts of public life, such as local and national elections. AI offers an opportunity to introduce democratic participation into other areas of public life, with the creation of training datasets as one possible use case. The project presented at the summit considers the case of harmful content classifiers in social media.

Harmful content moderation

Social media platforms need some form of content moderation to keep them safe. Minimally, there are laws with which content must comply. Beyond that, there is 'lawful but awful' content, where companies have discretion over what to do.

Content moderation processes need to involve AI content classifiers: there is too much content for human moderators to handle unaided, so classifiers identify items for moderators to consider.
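A minimal sketch of this triage pattern, using scikit-learn (an assumed toolchain; the session did not name specific libraries): a text classifier scores incoming posts, and only those above a review threshold are queued for human moderators.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative training set; real systems use large labeled corpora.
texts = ["have a great day", "thanks for sharing this",
         "I will hurt you", "people like you should disappear"]
labels = [0, 0, 1, 1]                      # 1 = harmful

vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

def triage(posts, threshold=0.5):
    """Queue for human review only the posts the classifier flags."""
    probs = classifier.predict_proba(vectorizer.transform(posts))[:, 1]
    return [post for post, p in zip(posts, probs) if p >= threshold]

print(triage(["have a great day everyone", "I will hurt you badly"]))
# flags only the threatening post for human review
```

The threshold is the key design choice: lowering it sends more borderline items to humans, trading moderator workload for recall.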

Compared with traditional media, the range of content moderation options on social media is much broader. As in traditional media, there is the basic decision to remove or leave a content item. For items that are left up, there are new options: the platform can downrank a given item in its recommender algorithm to control how widely it is disseminated.
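The remove/downrank/leave decision can be pictured as a small policy layer sitting between a harm classifier and the recommender. The thresholds and penalty multiplier below are invented for illustration; real platform policies are not public in this detail.

```python
def moderation_action(harm_score, remove_at=0.9, downrank_at=0.6):
    """Map a classifier's harm score (0-1) to one of three options:
    remove, downrank, or leave. Thresholds are purely illustrative."""
    if harm_score >= remove_at:
        return "remove", 0.0        # never shown in feeds
    if harm_score >= downrank_at:
        return "downrank", 0.25     # heavy ranking penalty, still accessible
    return "leave", 1.0             # circulates normally

def feed_score(relevance, harm_score):
    action, multiplier = moderation_action(harm_score)
    return action, relevance * multiplier

print(feed_score(relevance=0.8, harm_score=0.7))    # ('downrank', 0.2)
```

Downranking is the genuinely new option relative to traditional media: the item stays up, but the recommender quietly limits its reach.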

Project on harmful content classification

This project curated 1,000 tweets: 600 political tweets related to the 2019 general election in India and 400 tweets related to a 2022 state election, selected using appropriate sampling strategies. Ten annotators labeled each of these tweets under discrete categories. In 2024, the team plans to scale up the dataset and annotator pool across India and to develop AI classifiers for content moderation.

They will expand the current dataset with tweets from the 2022/23 local elections and the upcoming 2024 general election in India. Unlike the first round, in which every annotator labeled every tweet, these new tweets will be sparsely annotated, with only a subset of annotators labeling each one, and the team will analyze how the two approaches differ.
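One simple way to work with sparse labels is to aggregate them by majority vote and track per-tweet agreement, so that low-agreement items can be revisited. The label set and annotator counts below are hypothetical; this is not the project's published methodology.

```python
from collections import Counter

# Sparse annotation: each tweet is labeled by only a subset of annotators.
annotations = {
    "tweet_001": ["hateful", "hateful", "neutral"],
    "tweet_002": ["neutral", "neutral"],
    "tweet_003": ["misinfo", "hateful", "misinfo", "misinfo"],
}

def aggregate(labels):
    """Return the majority-vote label and the fraction of annotators agreeing."""
    label, votes = Counter(labels).most_common(1)[0]
    return label, votes / len(labels)

for tweet_id, labels in annotations.items():
    label, agreement = aggregate(labels)
    print(tweet_id, label, f"agreement={agreement:.2f}")
# tweet_001 hateful agreement=0.67  <- low agreement: candidate for re-annotation
```

Comparing agreement statistics like these between the densely and sparsely annotated rounds is one way the team could quantify what is lost by collecting fewer labels per tweet.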

Policy 4.0

Policy 4.0 is a new-age policy think tank focused on addressing the systemic disruption created by emerging technology. By combining deep expertise in engineering, innovation, law and geopolitics, they develop unique and unbiased insights to support cutting-edge policy approaches. 

In their study, they analyzed the AI regulations of the USA, UK, Europe, UAE, China and Singapore. Regulating AI will differ from regulating other technologies. Aspects that make it different include:

  • AI is self-learning
  • Unknown aspects of AI
  • Democratization of AI and the arms race
  • Civilizational risk

Based on this study, they presented an agile operational framework for AI policy.
