Many organisations are starting to generate significant value with artificial intelligence (AI) and recognise that this technology will shape the future. At the same time, many are discovering that AI can expose them to a fast-changing universe of risks and ethical pitfalls. Just as AI deployment will be core to future success, the leading organisations will be those that identify and manage the associated risks. But where do companies start? Dr Michael Chui, Partner at the McKinsey Global Institute (MGI), explored this subject in detail at the NASSCOM Experience AI Summit.

As a technology, AI is widely seen as beneficial for business, because every sector and function has the potential to use it to increase revenues and improve resilience. On the personal front, it can also improve quality of life. At the same time, it poses risks and challenges to enterprises, whether reputational or performance-based, and, at an individual level, potential challenges to autonomy and privacy.

Quoting Peter Parker, aka Spider-Man, Chui said that "with great power comes great responsibility." At the end of the day, AI comes with great potential, huge responsibility and immense risk. When it comes to operationalising ethical AI, the challenge is greater than most imagine. Chui explains that enterprises are already dealing with a lack of both regulation and holistic frameworks, each of which is critical to operationalising AI within a company.

A case for ethical AI

“We know ethical AI is not just a technology challenge; it cannot simply be addressed by a diagnostic tool. It is important to understand what it means for leaders and leadership in an enterprise to deploy and operationalise ethical AI. Strategy matters the most. Within an organisation, do we know what we mean by ethical AI, and the values and guidelines that are necessary? How do we make sure that the most senior leaders within the organisation are aligned on these things? This is not a case where you can appoint the CEO to solve this problem for you,” explains Chui.

He adds that although there are hundreds of use cases available, how does one ensure that ethical AI is embedded throughout their life cycle? On the people side, what does it mean to embed ethical AI within the culture? Citing a survey, Chui says that many organisations are capturing real value, but is this possible at the enterprise level, and how does one scale the ability to deploy ethical AI?

“Most companies have not taken ethical risks into account as being relevant to their enterprises, and they have barely taken any steps to mitigate these risks,” says Chui.

What’s the solution?

Ethical principles must be operationalised through a set of practices across the organisation. Chui advises assembling a multi-disciplinary team for a holistic and collaborative result. The next step is to articulate organisational values to establish what ethical AI means, and then to put together a framework for how to talk about it.

“Most people are not trained in ethics; it comes from a non-representative sample set. The next step is to focus on capabilities: educating humans and machines, building corporate practices, promoting user autonomy, activating explainability, and scrutinising data pipelines,” adds Chui.

When it comes to adoption and scaling, Chui recommends continuous monitoring and updating, incorporating ethical AI into the culture, enabling collaboration, and communicating both internally and externally.

Truly, with great power comes great responsibility, and AI has the power to change the world!
