A study published by researchers at the Indian Institute of Technology Madras (IIT Madras) and the Vidhi Centre for Legal Policy, Delhi, has called for participatory approaches to the development and governance of Artificial Intelligence in India and abroad.

The study identified the primary reasons why a participatory approach to AI development can improve algorithmic outcomes and enhance the fairness of the process. Through an interdisciplinary collaboration, it sought to establish the need for and importance of a participatory approach to AI governance while grounding it in real-world use cases.

As AI systems increasingly automate operations across multiple domains, the choices and decisions that go into their design and execution can become opaque and obscure accountability. The proposed model highlights the importance of involving relevant stakeholders in shaping AI systems' design, implementation, and oversight.

Researchers from the Centre for Responsible AI (CeRAI) under the Wadhwani School of Data Science and AI at IIT Madras and the Vidhi Centre for Legal Policy, a leading think-tank on legal and tech policy, conducted this study in two parts as a collaboration between technologists, lawyers, and policy researchers. Their findings were published as a pre-print paper on 'arXiv', an open-access archive of nearly 2.4 million scholarly articles in fields including physics, mathematics, and computer science.

Highlighting the need for such studies, Prof. B. Ravindran, Head of Wadhwani School of Data Science and Artificial Intelligence (WSAI), IIT Madras, said, "The widespread adoption of AI technologies in the public and private sectors has resulted in them significantly impacting the lives of people in new and unexpected ways. In this context, it becomes important to inquire how their design, development and deployment take place. This study found that persons whom the deployment of these systems will impact have little to no say in how they are developed. Seeing this as a major gap, this research study advances the premise that a participatory approach is beneficial to building and using more responsible, safe, and human-centric AI systems."

Key recommendations

The study recommends adopting a participatory approach to AI governance by engaging stakeholders throughout the entire AI lifecycle. Clear, robust mechanisms should be established for identifying relevant stakeholders, guided by criteria such as power, legitimacy, urgency, and potential for harm.
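As an illustration only (this code is not from the study), the identification criteria named above could be operationalized as a simple salience score that ranks candidate stakeholders. All class names, fields, and weights here are hypothetical assumptions; a real process would set them through deliberation, not code.

```python
from dataclasses import dataclass


@dataclass
class Stakeholder:
    name: str
    power: float       # 0-1: ability to influence the AI system
    legitimacy: float  # 0-1: legitimacy of the stakeholder's claim
    urgency: float     # 0-1: time-sensitivity of the claim
    harm_risk: float   # 0-1: potential for harm from the system


def salience(s: Stakeholder, weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Weighted combination of the four criteria named in the study."""
    attrs = (s.power, s.legitimacy, s.urgency, s.harm_risk)
    return sum(w * a for w, a in zip(weights, attrs))


def prioritise(stakeholders: list[Stakeholder], top_k: int = 3) -> list[Stakeholder]:
    """Rank stakeholders by salience, highest first."""
    return sorted(stakeholders, key=salience, reverse=True)[:top_k]
```

Note that a purely power-weighted score would reproduce exactly the gap the study criticizes; weighting harm and legitimacy alongside power is what keeps affected-but-powerless groups in the process.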

Effective methods for collating and translating stakeholder input must be developed to create clear procedures for collecting, analyzing, and turning stakeholder feedback into actionable steps. Techniques like voting and consensus-building can be used, but it is important to be aware of their limitations and potential biases.
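A minimal sketch of one such aggregation technique, majority voting, is shown below; it is an illustrative assumption, not a procedure from the study. The strict-majority check reflects one of the limitations mentioned above: a mere plurality should not be mistaken for consensus.

```python
from collections import Counter


def majority_vote(feedback: list[str]) -> tuple[str, bool]:
    """Return the most common option and whether it won a strict majority.

    The boolean flags plurality-only outcomes, which should trigger
    further deliberation rather than be treated as consensus.
    """
    counts = Counter(feedback)
    option, votes = counts.most_common(1)[0]
    return option, votes > len(feedback) / 2
```

For example, three stakeholders split across three options yields a plurality winner but no majority, signalling that consensus-building is still needed.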

Furthermore, ethical considerations must be addressed throughout the AI lifecycle by involving ethicists and social scientists from the beginning of AI development, ensuring that fairness, bias mitigation, and accountability are prioritized at every stage. Even as AI systems become more advanced, it is essential to keep humans in control, especially in sensitive areas like law enforcement and healthcare.

In the first paper, the authors investigated recent issues in AI governance and explored viable solutions. By analyzing how beneficial a participatory approach has been in other domains, they proposed a framework that integrates these aspects.

The second paper analyzed two use cases of AI solutions and their governance: one a widely deployed solution, Facial Recognition Technology, which has been extensively discussed and well documented; the other a possible future application of a relatively newer AI solution in a critical domain.
