It is customary for almost every media platform of repute to publish a plethora of stories at the beginning of a calendar year, especially on ‘trends to look out for’ in various fields. Artificial Intelligence and related trends notably topped this list for technology and business publications, at least. According to IDC, the worldwide AI market – hardware, software and services – was predicted to total $156.5 bn in 2020, a 12.3% growth over 2019. Despite a pandemic slowing economic growth across the world, AI has emerged strong and looks set to stay that way for the foreseeable future.

While the market for AI is booming, it has also spurred conversations about the nature of AI. The technology is no longer just an addendum – it forms the core of business processes and is key to digital transformation the world over. As datasets become more available and accessible, AI is becoming more intelligent and intuitive. While the dialogue over ethical AI is an ongoing one among academics and researchers worldwide, the conversation is only just beginning in larger corporate circles. The time has come for companies to start listening more keenly to what the research has to say.

With nearly 80% of all AI revenue coming from software, specifically AI applications, it is now crucial to ensure AI platforms do not propagate bias and other unethical tendencies common to human nature. Conventionally, the pace of invention outstrips that of regulatory frameworks, and AI is no different. Google CEO Sundar Pichai has warned governments against rushing into broad AI regulations, noting that rules drawn up in haste would hamper innovation rather than further it. Instead, he suggested existing laws could be repurposed ‘sector by sector’. Nicolas Economou, chief executive of H5, adds that, as in other technological domains, a combination of industry-driven endeavours and regulation will prevail, with the balance determined by societal factors.

In its research, Capgemini highlighted that the use of ethical AI is closely associated with consumer trust.

Yet despite these pressing concerns, nearly 90% of organisations report having encountered ethical issues arising from their use of AI.

So how can this be done? The short answer: build awareness, ensure diversity within the teams tasked with building AI systems, develop governance structures, and educate users and developers about the pitfalls of a biased system.
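In practice, part of ‘developing governance structures’ means auditing model outputs for bias. Below is a minimal sketch of one widely used check – demographic parity, which compares a model’s positive-decision rates across groups. The data, group labels and the 0.1 review threshold are hypothetical illustrations, not drawn from any of the frameworks discussed here:

```python
def approval_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

# Hypothetical model decisions (1 = approved, 0 = rejected) for two groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 = 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 = 37.5% approved

gap = demographic_parity_gap(group_a, group_b)
print(f"Parity gap: {gap:.3f}")       # 0.375

# A hypothetical governance rule: flag the model for human review
# whenever the gap exceeds an agreed threshold (0.1 here, for illustration)
if gap > 0.1:
    print("Model flagged for bias review")
```

A check like this is only a starting point – demographic parity is one of several competing fairness definitions, and which one is appropriate depends on the domain and the governance framework a company adopts.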

Building Ethical AI: Some Global Efforts In Highlighting Best Practices & Guidelines

Globally, five major ethical principles encompass ethical AI – transparency, justice and fairness, non-maleficence, responsibility, and privacy. Over the years, as AI gained popularity, common fears and concerns around it have included job losses, wrongdoing, illegal activities, surveillance and the perpetuation of bias.

A range of think tanks, academic institutions, advocacy groups and others are working tirelessly on developing frameworks and fine-tuning the conversation around ethical AI. In response to these growing fears, several ad hoc committees have been established, such as the European Commission’s High-Level Expert Group on AI, the OECD’s Expert Group on AI in Society, the UK House of Lords’ Select Committee on AI and Singapore’s Advisory Council on the Ethical Use of AI and Data. Similar efforts are being undertaken outside government as well, including by Amnesty International and the Association for Computing Machinery.

The Health Ethics & Policy Lab in Zurich found 84 organisations involved in developing ethical AI guidelines globally. Of these, a majority were from the USA and UK, followed by Japan, Germany, France and Finland. Most of the guidelines were produced by private organisations and government-sanctioned agencies, followed by academic and research institutions, scientific societies and not-for-profit outfits.

In recent times, corporate entities have banded together to float initiatives like the Partnership on AI – a consortium of private-sector companies, startups, researchers, not-for-profit entities and more, including Google, Accenture, AI Now, The Alan Turing Institute, BBC, Carnegie Mellon University, Facebook, OpenAI, Intel, IBM, UNICEF and Wadhwani AI, among many others. Prominent corporates such as Google, IBM and Microsoft – considered highly reputable by the industry – have also struck out to launch their own AI policies.

Other notable examples include:

AI Now: A research institute at New York University exploring the social implications of AI, specifically pertaining to rights and liberties, labour and automation, bias and inclusion, and safety and critical infrastructure.

AI4People: Launched in February 2018 at the European Parliament with the aim of establishing the founding principles and practices of a good AI society, AI4People has set up committees in automotive, banking and finance, energy, healthcare, insurance, legal and media – in a bid to draw up broad international AI frameworks for these sectors.

Ethically Aligned Design: Initiated by the IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems, this document – published in two versions – has the mission of engaging every stakeholder in the design and development of autonomous and intelligent systems, and of making the technology more real, accessible and accountable.

What’s Happening In India

Compared to its western counterparts, India arrived at the AI ethics conversation much later. However, significant steps are being taken to strengthen policy frameworks. One of the earliest documents was NITI Aayog’s National Strategy for Artificial Intelligence (June 2018), with its emphasis on #AIforAll. Contributions were made by Wadhwani AI, NVIDIA, Intel, IBM, NASSCOM, McKinsey, Accenture, MIT Media Lab and VideoKen. Its section on ethics in AI broadly touches upon the commonly discussed tenets, in addition to a note on explainable AI, which goes a step further in building trust in AI.

Another comprehensive effort comes from the Department for Promotion of Industry and Internal Trade (DPIIT), whose Report of the Artificial Intelligence Task Force urges a multi-disciplinary approach to making AI a useful technology for all, and shortlists ten domains to drive this change – manufacturing, healthcare, fintech, agriculture, education, retail, environment, national security, public utilities and accessibility.

The Ministry of Defence constituted a task force headed by Tata Sons chairman N Chandrasekaran to study the application of AI in defence.

AI practitioners and experts understand that making AI ethical is an ongoing process requiring continuous monitoring. Companies need to engage with these frameworks at the early stages of technology development, so that implementation and improvement can happen in tandem. It is no longer about the technology alone, but about the humanisation of technology. And there is never one right answer there.


DISCLAIMER

The information provided on this page has been procured through secondary sources. In case you would like to suggest any update, please write to us at support.ai@mail.nasscom.in