The Indian government understands AI-induced social risks while also not being "overtly possessive".
"AI should be used for the benefit of citizens in areas like healthcare and education. The Indian government is serious about 'AI for all'," said Ajay Sood, the Principal Scientific Advisor to the Government of India.
He was speaking on "Mitigating Social Risk For Safe and Resilient Society" during the second day of the three-day Global Partnership on Artificial Intelligence (GPAI) Summit being held at Delhi's Bharat Mandapam.
It is worth mentioning that on the inauguration day yesterday, India's PM Narendra Modi, expressing concern over the misuse of AI, had said, "AI can become the biggest tool to help humanity's development in the 21st century. But it can also play a major role in destroying us. Deepfake, for example, is a challenge for the world…If terrorists get AI weapons, this will have a huge impact on global security. We need to plan how to tackle this."
Sood revealed that India is working on a techno-legal framework which focuses not only on regulation of AI but also regulation for AI. "India has proven its mettle in creating strong, resilient platforms. It should not only be about data sharing but data collaboration. This approach focuses on model training and ensures privacy and security," he explained.
"Let us forge a path where AI amplifies human ability and propels us towards a harmonious world. India is committed to fostering an AI ecosystem that is inclusive, diverse and helps mankind," Sood said in conclusion.
The session, which also featured an address by Gabriela Ramos, the Assistant Director-General (ADG) for Social and Human Sciences at UNESCO, focused on the challenges of a post-AI world and why mechanisms are required to ensure AI benefits mankind.
At the outset of the session, Ramos set the tone for the agenda, and perhaps UNESCO's biggest concern: 'How can tech help address major challenges?' "This is not a discussion of technology but society," Ramos said, adding that not many countries have the privilege to nurture technology. Further, the UNESCO official highlighted that only 22% of the labour force is made up of women.
On UNESCO's approach to the AI shift, Ramos said, "We recommend three things – using tech for human help, ensuring the rule of law and also using it to frame policy." She added that the idea is not to stop technology but to have an ethical framework to address the issues. "To be honest, we felt lonely talking about redressal mechanisms around two years ago, but now countries are also discussing it," Ramos said.
Taking the conversation forward after Ramos, Sood said, "My concern is long-term social risks stemming from the use of AI in non-military contexts. It is about bias – human bias, and biases in algorithms that can aggravate already existing issues." He added that deepfakes are one dangerous outcome that can't be overlooked.
Subsequently, Pandu Nayak, Vice President, Google Search, highlighted how his company is working towards making AI more responsible. "Whenever there's a new tech change, it affects human development. Think about the invention of the wheel, how it changed transportation, or the printing press that helped the dissemination of information. We are at a point in history where AI can do the same thing," he stressed.
Nayak added that there's a lot of excitement about generative AI, but AI is much more than that. He cited the example of AlphaFold, developed by Google DeepMind, which helps generate 3D structures of proteins. This, Nayak said, is contributing towards cancer research.
The Google official also said we should focus on making AI more responsible and remember that AI generates opportunities. "If a decade from now we want to have developed AI responsibly, what are the choices we need to make today to make that happen? If we get this right, we can use tech to make progress in fundamental science and make progress in the economy," Nayak added.
Meanwhile, Karya CEO Manu Chopra claimed that the AI market today is unjust. He underlined that his organisation doesn't take on work that affects workers' mental health. "Higher wages lead to quality data sets," he added.
Further, David Leslie, Director of Ethics and Responsible Innovation Research at the Alan Turing Institute, said, "We live in a discriminatory world…We need to understand that problems of bias in AI are our problems, our culture and our history…We have to acknowledge that we need to look into this ecosystem."