There is a parallel debate going on across the globe on AI’s potential to revolutionize the world and the risks associated with it. As AI becomes increasingly sophisticated and integrated into our lives, we cannot overlook the harms it could manifest in various ways, such as bias and discrimination, privacy violations and job displacement.

To ensure public trust in AI and its long-term success, two things are going to be pivotal. First is developing safe and trustworthy AI systems that are transparent, accountable, and aligned with human values; and second is development of frameworks and guidelines by governments and regulatory bodies to ensure its safety and ethical development.

This discussion was taken a step forward at the Global Partnership on Artificial Intelligence (GPAI) Summit 2023, held in Delhi. During a fireside chat session themed ‘Government’s vision and role in ensuring safe and entrusted artificial intelligence’, representatives from India, the UK and Japan discussed the importance of forming guardrails and legal frameworks around AI, and how to implement them to harness the power of AI to create a better future for all.

Speaking at the session, India’s Minister of State for Electronics and Information Technology Rajeev Chandrashekhar noted that conversations on formulating regulations will be crucial to ensuring safe and trusted artificial intelligence.

“We are at an absolute inflection point, and for the first time in the history of tech, the governments and the civil society are aware of not just the good parts of technology, but also the harms emanating from it. The governments are beginning to understand and are learning from what we have gone through over the last 15 years – the social media, the toxicity, the harms, the dark internet,” said Mr. Chandrashekhar.

“We will fully get into AI and ensure that AI is exploited for the good of humanity. But, at this very critical stage of its growth, we need to be having these very important conversations of what the guardrails ought to be,” he added.

Viscount Camrose, the UK’s first minister for AI and intellectual property, echoed similar sentiments, stressing the need for safe and trustworthy AI.

Speaking about the AI Safety Summit, held in the UK last month, Camrose said: “We cannot innovate successfully unless we have trustworthy technology. The Bletchley Declaration, signed by many countries and thinkers in this field of AI, is really supporting the goal that says ‘we really want AI to innovate, it is going to solve so many societal problems. It is going to make a more prosperous world, provided that we make it safe and trustworthy’.”

“We, the undersigned, agreed that there are both benefits and risks to AI. There are certain steps for government to take going forward. There were roles for companies to play as well, making sure that they weren’t any longer doing their own testing with AI systems, but allowing governments and safety institutes to do the testing for them,” he added.

AI has the potential to address many global challenges, such as climate change and poverty. However, it is crucial to ensure that AI is used responsibly and ethically to contribute to a sustainable future for generations to come.

Hiroshi Yoshida of Japan’s Ministry of Internal Affairs and Communications made note of this very fact while acknowledging the potential risks involved with the powerful technology. He spoke at length about the Hiroshima AI Process, agreed upon by the G7 under the leadership of Japan, which recognized the importance of inclusive AI governance and a vision of trustworthy AI aligned with shared democratic values.

“There have been discussions that there might be serious risks with AI, but on the other hand, it doesn’t mean that we stop using it. We should use AI for development, for getting over climate change, bridging gaps and many other things, while facing those risks and overcoming them,” he said.

Meanwhile, Mr. Chandrashekhar highlighted that just as the technology keeps evolving with time, the guardrails around it also need to evolve. He said that progress in ensuring safe and trustworthy AI can be made if like-minded countries agree upon mutually accepted frameworks.

“Our approach is slightly different; we see harm as we see it today, and the framework that we are building in terms of legislation is that this universe of harms will keep evolving as we encounter it. The difference between trying to regulate the AI ecosystem versus creating guardrails around the platform that enables or delivers AI is again an approach issue. Our approach so far is that we very clearly build these clearly understandable guardrails around the platform. These conversations have to happen because if, over the next 6-9 months, there can be an agreement among like-minded countries of the world about what the basic principles are, what the basic building blocks and ground rules around AI ought to be, then we will make some progress,” he said.

Globally agreed-upon guardrails will not only ensure safe and ethical use of AI, but will also help mitigate the risks associated with it, promote public trust, and unlock the full potential of this transformative technology.
