Introduction

The research brief published in February 2022 by the Institute for Ethics in Artificial Intelligence, in collaboration with the Technical University of Munich, investigates the current challenges of holding AI systems accountable from legal and ethical perspectives. As AI applications make their way into multiple domains, it becomes essential to understand who is responsible for the predictions or decisions these systems make.

The report introduces the challenges of accountability from a practical perspective through the example of autonomous vehicles. Autonomous driving is an ideal example of a complex AI application with questions of ethics and responsibility embedded in it. Driving is a highly complex task that AI systems attempt to master by learning from huge amounts of data. While human decision-making is not always transparent or obvious to outside observers, that same opacity becomes a point of tension in technology, as users expect systems that are directly interpretable, comprehensible, and trustworthy.

The report also discusses the need for a broader approach to accountability and the importance of explainability and social responsibility. Because AI systems depend heavily on data and, in some cases, require the processing of personal data, the report points out that where the General Data Protection Regulation applies, the party processing the data is obliged to provide the data subject with certain information about how their personal data is used. The second component of accountability from a legal perspective is liability, which becomes critical in AI scenarios when a system fails.


Relevance of the Report

The report highlights the most pressing questions regarding the accountability of AI products. It also outlines a broad framework for how accountability should be handled through regulation. Even though the framework does not provide definitive answers to every question, it offers a clear path to understanding AI systems and the stakeholders involved in their creation and deployment.


Key Takeaways

  • As AI applications are increasingly used in complex daily activities, the question of who is responsible for the predictions or decisions made by these systems has become pressing
  • One way to address this concern is to improve the general understandability, or explainability, of AI so as to help define and delineate accountability
  • The explainability component of accountability is particularly relevant as transparency and responsibility are highly intertwined
  • The increasing use of AI in products will create accountability dependencies
  • A clear definition and distribution of responsibilities through frameworks and regulation is absolutely necessary

