The recent boom in generative AI has raised vital questions about ensuring the ethical and responsible use of generative AI models. The rights, attribution and intellectual property of AI-generated works that mimic specific human creators raise further ethical questions.

With these questions in mind, INDIAai recently organized the second edition of its generative AI roundtable. The panel comprised AI experts from various fields. The roundtable shed light on topics such as managing the ethics of deepfakes and other harmful outcomes of generative AI. In addition, it addressed bias in generative AI models and the content they create, and proposed regulatory frameworks needed for generative AI models.

Anna Danes, AI Ethics Expert, Ethical Intelligence Associate Limited; Divya Dwivedi, Advocate, Supreme Court of India; Prateek Sibal, Programme Specialist, UNESCO; Vibhav Mithal, Managing Associate, ANAND AND ANAND; and Shashank Ramireddy were the panelists. Jibu Elias, Content & Research Lead, INDIAai, moderated the event.

It was challenging to analyze the impact of the previous AI revolution, driven by algorithms and shared information, and it is just as complicated to foresee the implications of deepfakes. According to Anna Danes, deepfakes are a threat to democracies worldwide that is hard to tackle.

Bias in AI  

"Generative AI is interesting for people who are lazy," said Divya Dwivedi, speaking about the advent of generative AI and its impact on the Indian judiciary. 

In her opinion, Indian citizens' blind belief in social media news is an example that can help us anticipate the future impact of generative AI.

Bias in generative AI is a significant concern for its use in law and regulation. The human brain tends to rely on what it sees; therefore, when generative AI produces content, the humans involved may ignore the need for cross-verification of facts.

Apart from bias and data quality, the use of generative AI also raises questions of AI ethics. Prateek opined that some publications have stopped accepting research papers co-authored by AI models due to these concerns, and added that dealing with the present issue of information pollution is significant.

Therefore, guidelines for each profession on how to use AI in its field are a necessity. Such guidelines would also spread awareness of AI ethics by enlightening the public on its do's and don'ts.

Algorithmic auditing 

Generative AI is definitely a new phase in terms of what AI can achieve as a tool. Identifying generative AI output is one of the basic steps in fighting its fundamental problems. "AI output may not have human intervention, but a human involvement is present in the AI creation process," says Vibhav Mithal.

An audit is a means of checking that an AI system complies with a legal obligation. The obligation could be a hard-law obligation under an existing law, or a company aware of the AI Act or other regulations may be seeking to comply with them voluntarily.

Audit schemes would thus serve as a tool for governing generative AI. Different types of generative AI will have different impacts, so risk identification for generative AI will also be vital.

Regulating AI 

Shashank opines that the regulation of AI tends to focus on formal equality rather than substantive equality, even though the end goal is often equity. Regulating technology and regulating the impact of technology are two different ideas.

"Where do you position the points of responsibility," asks Shashank. 

The world is still debating whether generative AI is a low-risk or high-risk tool. Hence, we will have to wait before regulations on this tool can be implemented.

From what we have learned so far, bias in AI is a critical challenge that needs to be addressed. That is where we should start.
