Against a background of global competition to seize the opportunities that Artificial Intelligence (AI) promises, many countries and regions are explicitly taking part in a 'race to AI'. Yet the increased visibility of the technology's risks has led to ever-louder calls for regulators to look beyond the benefits and secure appropriate regulation that ensures AI is 'trustworthy' – i.e. legal, ethical and robust. Besides minimizing risks, such regulation could facilitate AI's uptake, boost legal certainty, and advance countries' positions in the race.

The session on International AI Regulations: Catalyzing Innovation while Ensuring Safety, Trust and Accountability shed light on the strategies used by international organizations, governments, corporations and startups to ensure uniform regulations across the globe. Abhishek Singh, President and CEO of NeGD, moderated the panel.

The panelists for the session included H.E. Amandeep Singh Gill, Under-Secretary-General and Envoy on Technology, United Nations; Mr Thomas Schneider, Chair, Council of Europe Committee on AI; Ms Mary Snapp, Vice President, Strategic Initiatives, Microsoft; Ms Baratang Miya, Founder and CEO, GirlHype Women Who Code; Ms Sue Daley, Director, Technology and Innovation; and Mr David Weller, Senior Director, Emerging Technologies, Competitiveness and Sustainable Policy, Google.

Harmonizing AI

In a world where the approach towards AI regulation varies from nation to nation, a degree of harmony in AI policies is a necessity. A global definition of AI is an initial step towards this harmony, giving the AI industry clarity and certainty.

However, this is a daunting task, with AI initiatives pursuing varied objectives across the world. Cultural differences, differing attitudes towards regulation, variation in use cases among nations and differences in legal traditions are further factors that pose a challenge in this process.

In this context, organizations such as the Council of Europe are attempting to develop a 'Binding Treaty on AI', which is meant to be a global treaty. The challenge is to find common ground that countries can share.

Companies ensuring ethical standards

While international organizations attempt to find common ground, multinational corporations such as Microsoft and Google, which operate in almost every country, act as agents for responsible AI. How do they do it in this evolving regulatory landscape?

Microsoft has been thinking about AI for the past eight years and functions with the help of a diverse team. The team takes the following factors into consideration while deploying AI in every nation:

  • Learnings from existing systems
  • Uniform regulatory framework maintained by the organization across the globe
  • Secured data centers with humans in control and AI acting as a co-pilot
  • Information sharing with academic institutions and researchers
  • Using tech for social good

Google ensures ethical standards in the following ways:

  • Measuring ethical standards such as trustworthiness
  • Harmonization of regulatory approach
  • Cooperation around research

Inclusive development

Tackling bias and ensuring the ethical and responsible development of AI requires involving stakeholders from the initial stage. Research, innovation and infrastructure investment are significant elements that enable inclusive growth.

Fairness means different things to different individuals. Women, for instance, face several issues in this field, largely because of men who think women do not deserve what they work for. Overcoming this difference in opinion requires education; in addition, data quality and bias mitigation should be ensured, and every technology should be built for humanness.
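As a concrete illustration of what 'data quality and bias mitigation' can involve in practice, the sketch below computes a simple demographic parity gap – the difference in favourable-outcome rates between groups – on toy data. This is a minimal sketch with hypothetical data, group labels and review threshold, not a method discussed in the session.

```python
# Minimal sketch: quantifying one form of outcome bias via a demographic
# parity gap. All data and thresholds below are hypothetical.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the gap between the highest and lowest positive-outcome rates,
    along with the per-group rates."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs (1 = favourable outcome) and group membership.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"Positive-outcome rates by group: {rates}")
print(f"Demographic parity gap: {gap:.2f}")  # e.g. flag for review above a chosen threshold such as 0.1
```

A check like this is only one input to bias mitigation; improving the quality and representativeness of the underlying data matters at least as much as measuring model outputs.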

Tips for Smaller Companies

It might be easier for multinational companies to maintain uniform compliance with regulation. But how do smaller companies with regional markets achieve it? Here are some tips from the experts:

  • They should follow a risk-based approach (see the sketch after this list)
  • Seek aid from big companies, and corporates should be willing to offer it
  • Ethical principles should be embedded from an initial stage
  • Furthermore, there should be means to operationalize them
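As a rough illustration of the risk-based approach mentioned above, the sketch below shows how a small team might record its AI use cases against risk tiers and the controls each tier requires. The tiers, use cases and controls are illustrative assumptions, not a summary of any specific regulation.

```python
# Minimal sketch of a risk-based register for a small company's AI use cases.
# Tiers, use cases and controls are hypothetical examples.

RISK_TIERS = {
    "minimal": ["basic documentation"],
    "limited": ["transparency notice to users"],
    "high":    ["human oversight", "bias testing", "audit logging"],
}

# Hypothetical mapping of the company's AI use cases to risk tiers.
USE_CASES = {
    "spam filtering": "minimal",
    "customer chatbot": "limited",
    "loan approval scoring": "high",
}

def required_controls(use_case: str) -> list[str]:
    """Look up the controls a use case must satisfy before deployment."""
    return RISK_TIERS[USE_CASES[use_case]]

for case, tier in USE_CASES.items():
    print(f"{case}: tier={tier}, controls={required_controls(case)}")
```

Keeping such a register explicit also gives smaller teams something concrete to share when seeking guidance from larger partners.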

Role of the UN

The United Nations, which brings various stakeholders together, is the ideal platform to address global needs. One advantage is that the principles of AI do not need to be reinvented; what is needed is a place where standardization happens. Standardization does not just mean 'coming together' but rather 'developing things together', and the UN is the best place to have this conversation.

If we rightly educate implementers, work to improve data quality and develop trustworthy AI, AI will be a technology of tremendous promise.
