As AI systems are deployed across sectors that shape the social, economic and political structure of society (prompting experts to declare a “Fourth Industrial Revolution”), the question of whether AI is “good” or “bad” for humankind continues to be debated. Some issues that arise for consideration are: will we face mass unemployment as AI systems replace humans? How can AI be prevented from being used for inappropriate or dangerous purposes? What is the impact of AI on human dignity and personhood? What are the implications of private and/or public ownership of AI systems for the structure of society? Issues such as the manipulation of information in the run-up to elections, and potential hacking of the election process itself, are now realities that democracies have to contend with.

According to the World Economic Forum, unemployment, inequality, racism, security and the rights of robots are among the ethical concerns raised by the existence of AI systems. Some of these questions are now being considered by national and international organizations as part of an examination of policy to govern AI systems. The private sector is also putting out its views, along with industry associations and non-profits. For instance, private companies like Microsoft, Google, SAP and IBM have formulated ethics guidelines to be considered while developing AI systems. Given the scale and reach of these companies, and the fact that they are at the forefront of AI development, their perspectives on the ethical principles governing the use of AI systems are valuable.
The calls for ethical principles to guide AI converge around the following principles, although the discussion of each may vary in its exact constituents and in the context in which it is prioritised:
- Transparency – transparency is typically broken down into improving explainability and ensuring disclosure, in areas such as data use, human-AI interaction, automated decision-making and the purpose of AI systems, primarily with a view to increasing trust in AI systems and as an important step towards protecting legal rights when AI systems are used. There is a push for greater disclosure, in a manner understandable by non-experts, although what exactly may be disclosed remains uncertain, given the competing push to protect the intellectual property rights of the developers of AI systems.
- Justice and fairness – the focus in this category is typically fairness, or the prevention of bias and discrimination; in some cases, however, the discussion has extended to the impact of AI on diversity, the labour market, democratic governance, due process rights, and so on. Suggestions to improve AI systems in these areas include incorporating these norms into technical standards and codes; increasing transparency; increasing public awareness and education about the possible influence of AI systems on rights; increasing the auditing or monitoring of AI systems’ performance; and strengthening existing legal systems to account for the issues that arise from AI systems.
- Non-maleficence – the discussion around this principle has largely pertained to the need for security and safety in the deployment of AI systems, i.e. that AI systems should not cause foreseeable or unintentional harm. More specifically, these discussions have considered cybersecurity threats such as hacking, and the risk that technological advancement may outpace the ability to regulate it. The kinds of harm considered range from the erosion of privacy and safety, and negative impacts on social well-being, to physical harm. Proposed solutions include interventions at the design stage (including privacy by design), multidisciplinary cooperation, establishing industry standards, increased oversight, etc.
- Responsibility and accountability – the discussion relating to these principles has been quite varied, including recommendations on integrity, clarification of liability, and the provision of remedies where AI systems could cause harm. There is also a lack of clarity on whether accountability should be considered differently for AI systems than for humans.
- Privacy – in the case of privacy, most jurisdictions connect the discussion to the right to privacy, which must be protected, and the issue is generally framed as one of data protection or data security. As potential solutions, stakeholders have considered privacy by design, differential privacy, data minimization and access control. There have also been calls for privacy laws to adapt to AI.
- Beneficence – this principle relates to the promotion of well-being, peace and happiness, and the creation of socio-economic opportunities and economic prosperity, for all people and for society as a whole.
- Freedom and autonomy – the discussion around freedom and autonomy relates to measures ensuring that users are at the core of the system, protecting freedom of expression, informational self-determination, freedom to use different platforms and other aspects of positive freedom. In some cases, however, freedom and autonomy have been interpreted to cover negative freedoms as well, such as freedom from technological experimentation, manipulation and surveillance. In most cases, freedom is believed to be served by ensuring that individuals have sufficient options and sufficient information about AI and its interactions with the world.
- Trust – discussions around the principle of trust have typically involved ensuring trust in AI systems on the part of users and society in general. Trust is to be built through other aspects mentioned above, such as accountability, explainability and transparency, as a means of fulfilling public expectations.
- Dignity – dignity is discussed purely in the context of human beings: AI systems should be constructed so that they do not destroy, diminish or reduce human dignity in any way, and should, on the contrary, work to preserve and promote it.
- Sustainability – the idea of sustainability is referenced in the context of developing and using AI to protect the environment, contribute to fairer and more equal societies, and create systems that are sustainable and endure over time.
- Solidarity – the principle of solidarity has been discussed largely in relation to the impact of AI systems on the labour market, together with a push for a strong social safety net. The goal of this principle is to secure greater protections for vulnerable groups and to ensure that AI does not destabilize social cohesion.
Among the most prominent AI ethics guidelines are the OECD Principles on AI, adopted in 2019, which formed the basis for the human-centred AI principles adopted at the G20 Summit the same year. Both instruments present a list of five principles adopted by the member nations: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability.
Similarly, the Global Partnership on AI (GPAI) was established in June 2020 to support the responsible and human-centric development and use of AI, in a manner consistent with human rights, fundamental freedoms and shared democratic values, as elaborated in the OECD Recommendation on AI. The GPAI proposes to bring together stakeholders from industry, civil society, government and academia to collaborate across four Working Groups: (a) Responsible AI; (b) Data Governance; (c) the Future of Work; and (d) Innovation & Commercialisation. One of the GPAI’s first priorities is to consider how AI can be used to respond better to the COVID-19 pandemic. The GPAI is to comprise a Secretariat hosted by the OECD, along with two Centres of Expertise. The first GPAI Multistakeholder Experts Group Plenary is proposed to be held in December 2020, hosted by Canada.
As is evident from the above discussion, AI ethics initiatives have largely generated vague, high-level principles and value statements that do not translate into specific recommendations. The next concrete step from a policy perspective would be for international and national bodies to distil these principles into a concrete, actionable form that balances, to the extent possible, the business needs of private parties with the larger social good.
This report maps the discussions and frameworks adopted by the governments of various nations to address the ethical issues surrounding AI systems and technology.