Governments worldwide are developing laws and regulations to govern how organizations use AI. The industry is at a point where AI capabilities continually outpace effective rule-making, and there may be more prudent approaches than waiting for government mandates. 

Two AI leaders explored this puzzle at an event organized by the Deloitte AI Institute: Debjani Ghosh, President of NASSCOM, and Kay Firth-Butterfield, Head of AI & Machine Learning and Member of the Executive Committee of the World Economic Forum. Beena Ammanath, Executive Director of the Global Deloitte AI Institute, moderated the discussion. 

Governments worldwide are formulating AI policies in their own ways. Global AI leaders, including China, India and the US, each bring their own perspectives to their AI strategies. What the world lacks is a universal framework. Government policies are still in a phase of trial and error, so it is too early to declare any one approach the best. 

Regulating AI 

Certain qualities of AI are being targeted for regulation. Bias in AI is one parameter that worries many. In Kay’s opinion, fairness, explainability and accountability in AI also belong on the list. 

Debjani Ghosh put forward a different perspective: AI regulation grounded in the cultural values of different countries. “The way Indians think about privacy is different from Chinese. And there are no right or wrong answers here,” she said. In her opinion, AI regulations should be customized to each nation’s needs. 

Moreover, regulating the technology itself is never a solution to the underlying problems. Instead, restrictions should target its usage, as the real issues are created by the humans behind it. “Countries should come together and make a list of use cases based on its risk level and customize based on their individual needs,” stated Debjani. 

Importance of self-regulation 

“Fundamental basis of self-regulation is a must for any company,” stated Kay. Every organization today uses AI in one way or another. Self-regulation matters because it influences brand value: a failure to regulate properly can allow bias into AI systems, degrading product quality and damaging the brand as a whole. 

Completely eradicating bias has its limits, as bias is built into the human thought process. Holding up the mirror to identify unconscious bias, from development through implementation, is vital. NASSCOM’s Responsible AI Resource Kit was developed to accept the reality of bias and inject solutions into the DNA of every organization. Identifying the acceptable level of bias for each use case is equally significant. 

Finding the balance 

The sandbox approach is critical for finding the right balance. It allows companies to take creative risks without fear of penalty. The balance will keep shifting as the technology develops, so organizations and governments should give themselves space to experiment and learn. 

This sectoral approach allows companies to enter the sandbox to experiment with their technologies. There, organizations can interact with one another and identify the risks involved. 

Successful AI governance 

Trust and success go hand in hand, and inclusion should grow alongside a citizen-centric approach. 

According to Debjani Ghosh, four components should be part of any AI governance model: 

  1. Assess where the company currently stands 
  2. Recognize that figuring out strategy is not a parallel process 
  3. Set checkpoints to analyze the results of a strategy once it is put into practice 
  4. Hold leadership accountable 

Companies and governments working with AI should ensure that their technology is trustworthy, responsible and inclusive. They should study established regulations and practices and adopt those that suit their needs. 

 
