Transparency

One of the long-standing concerns about the growing ubiquity of AI systems relates to understanding how AI really “works”, and from this uncertainty emerges a resistance to incorporating AI into various aspects of human life. This has led researchers and regulatory regimes to give careful thought to the “explainability” of AI systems. With recent instances of AI systems acting unprompted or in ways that may be unintelligible to the layperson, stakeholders have begun to question which kinds of decisions should be left to AI systems, and the way in which those decisions are made.


Moreover, informed consent – another foundational principle for ethical AI – is predicated on the user understanding the technology and its impact. Transparency in AI systems therefore enables humans to understand what is happening in AI models and ensures that advanced or AI-powered algorithms are thoroughly tested, explainable and aligned with the principles of ethical conduct. Developing methods and guidelines to ensure that AI is fair, accountable and transparent is now one of the most crucial areas of AI research and has given rise to an almost separate field referred to as “explainable AI” (XAI). The goal of XAI is to avoid black box algorithms, i.e. AI systems that are not explainable due to the complexity of the algorithm’s structure and/or their reliance on geometric relationships that humans cannot visualize, as a prerequisite to holding such systems accountable. The urgency of the issue of transparency varies depending on the nature of the technology. For instance, rule-based AI systems, which function on the basis of rules programmed into the algorithm by humans, or expert-based AI systems, which draw on a knowledge pool created by human experts, are less “autonomous” in their decision making. The scope of their operation and their possible outcomes or decisions are circumscribed by a knowledge base created by humans, which makes them more predictable. Transparency and explainability become more difficult where AI systems “learn” and develop rules for functioning autonomously, as with deep learning AI systems. A minimal, hypothetical sketch of the rule-based end of this spectrum appears below.
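To make the contrast concrete, the following sketch (in Python, using an invented loan-screening scenario with made-up thresholds, not drawn from any real system) shows what “rule-based” means in practice: every rule is authored by a human and can be read directly from the code, so the basis for any individual decision can be traced and explained.

```python
# A minimal, hypothetical sketch of a rule-based (expert-system-style) check.
# The scenario and thresholds are invented for illustration only.

def screen_loan_application(income: float, credit_score: int, existing_debt: float) -> str:
    """Return a decision using explicit, human-authored rules."""
    if credit_score < 580:
        return "reject: credit score below minimum threshold"
    if existing_debt > 0.4 * income:
        return "reject: debt-to-income ratio exceeds 40%"
    if income < 25_000:
        return "refer to human reviewer: income below automated-approval floor"
    return "approve"

# The decision path can be stated in plain language for any applicant:
print(screen_loan_application(income=60_000, credit_score=700, existing_debt=10_000))
# -> approve
```

Because every branch is written and bounded by a human, the system’s possible outcomes are known in advance; the transparency challenge discussed above arises precisely when such rules are instead learned by the system itself.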


The issue of avoiding or solving the black box problem has become even more complex as technologies that use deep learning built on programmable neural networks are deployed in fields as varied as targeted online purchase recommendations and autonomous vehicles. The advantage of deep learning algorithms is the ability of such systems to “learn” by interpreting a continuous stream of data to identify links and connections of value without human guidance; since the decision is made autonomously by the AI system itself, the question of the process and basis for these machine-made decisions has a profound impact on the “value” of such decisions to human beings and society. Another question to consider is whether designing transparency into such AI systems would negatively affect the accuracy of their outcomes. It has been suggested that interpretable AI systems should be considered the standard, especially where the system is used to make high-stakes decisions, rather than assuming that the “black box” problem is a necessary feature of deep learning AI systems; a brief sketch of this “interpretable first” approach follows.
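As a hedged illustration of that suggestion, the sketch below (assuming the scikit-learn library and one of its bundled toy datasets, chosen purely for convenience) fits a shallow decision tree and prints its learned rules. Unlike a deep neural network, the full decision logic of such a model can be inspected and audited line by line.

```python
# A minimal sketch, assuming scikit-learn is installed, of preferring an
# interpretable model: fit a small decision tree and print its learned rules.

from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()                    # a standard toy dataset
model = DecisionTreeClassifier(max_depth=3)    # depth limit keeps the rules short
model.fit(data.data, data.target)

# export_text renders the learned decision rules as human-readable if/else logic
print(export_text(model, feature_names=list(data.feature_names)))
```

The depth limit is a deliberate design choice: a shallower tree may trade some accuracy for rules short enough for a human reviewer to follow, which is exactly the trade-off the debate over transparency versus accuracy concerns.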


In May 2019, the Organisation for Economic Co-operation and Development (OECD) recommended the adoption of certain principles for the responsible stewardship of trustworthy AI. One of these principles, also adopted by the G-20 countries in June 2019, was that of ‘transparency and explainability’. The recommendations state that AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information and appropriate context, and foster a general understanding of AI systems, so that those affected by an AI system’s outcome are aware of it and are able to challenge the outcome if it is adverse.


In light of the above, it becomes imperative for regulatory regimes to devise guidelines to make AI more transparent and hence accountable for the decisions it makes. Even though research has shown that achieving transparency in AI systems depends on the stakeholders interacting with the system, and that the process of achieving it can therefore vary on a case-by-case basis, many countries have incorporated the requirement for transparency as a high-level principle in their national strategies for AI.
