As pioneers in the world of tech face the challenge of making AI more responsible, generative AI experts have cautioned that the technology must not be over-regulated before it reaches those who have little to no access to it. Speaking at the ‘Generative AI: Issues and Challenges’ session on Thursday, the concluding day of the Global Partnership on Artificial Intelligence (GPAI) Summit in Delhi, Arun K. Subramanian, Vice President of Cloud and AI, Strategy and Execution, at Intel, said, “I am optimistic on regulation but policymakers should remember that going overboard may affect innovation…We are still playing the first over of the first innings of a test cricket match and have a long way to go.”
Subramanian also called for making generative AI as accessible as electricity, noting that a large share of data still sits with governments and enterprises. He said tech developers must focus on creating code that has the potential to expand AI's reach. “At this juncture, we should think about how learning can be made more efficient with AI. How can students use generative AI to write better essays and get their questions answered,” he added.
Dr Janine Berg, an International Labour Organization (ILO) expert in the GPAI Future of Work Working Group, highlighted that the biggest challenge of generative AI is the risk it poses to employment, revealing that clerical support workers face the highest risk of being displaced by the technology. Citing research-based data, she noted that women employees face a greater risk than men, and added that richer countries have greater access to the technology than poorer nations, so ensuring equity must be a top priority for policymakers. “It would be better if countries follow a democratic process to decide on regulations. Further, it is also important for tech companies to work with labour unions,” she underlined. Building on Subramanian's point about introducing generative AI to students, she said it is equally crucial to educate teachers.
The panel of experts also agreed that the challenge of bias in generative AI needs to be addressed. On this, Pandu Nayak, Global Vice President of Search at Google, opined that undesirable data needs to be kept out when training generative AI models. “Cross-border data transfer, with trusted guardrails, of course, would make the whole system more inclusive and would address the issue of bias to some extent which exists in areas like finance and health,” he suggested. Nayak, who has worked at Google for nearly two decades, also listed the steps his company has been taking to make generative AI more factually accurate. “We taught Bard to corroborate the information it produces…Bard has been trained on over a trillion words, which is akin to reading millions of books, which is why we believe it has been able to predict things correctly as it does,” he said.
A recent study offered a notable example of how bias surfaces in generative AI. Researchers at the University of Tasmania, Australia, and Massey University, New Zealand, concluded after assessing AI-generated content that male leaders were depicted as strong, courageous, and competent, while women leaders were often portrayed as emotional and ineffective. “Any mention of women leaders was completely omitted in the initial data generated about leadership, with the AI tool providing zero examples of women leaders until it was specifically asked to generate content about women in leadership,” the study said.
Separately, when asked about the steps that should be taken to overcome the challenges posed by generative AI, Microsoft’s Mary Snapp said the time is now for international collaboration on not only physical but also ethical safety. “The possibilities of generative AI are limitless. From telling a small farmer when to sow a seed based on weather prediction and soil moisture to helping a small business owner collaborate with e-commerce giants, it can play a huge role in driving GDP growth,” she added.
The session also delved into the limits of ChatGPT, OpenAI’s much-talked-about project, which boasts millions of dollars of investment from Microsoft. Sarvam AI Co-Founder Dr Pratyush Kumar highlighted that ChatGPT has limits when it comes to Indian languages, a gap his company aims to fill. He said their Llama-based model is more token-efficient and can answer questions in Hindi even when the reference material is in English.
Meanwhile, Vikram Adve, a Computer Science Professor at the University of Illinois, said generative AI holds promise in low-resource domains like agriculture, which also pose fewer risks than arenas like finance, politics, defence, and justice. “We are at a juncture where much can be done to revolutionise agriculture but deep research is needed,” he noted.