We are in an era where advances in AI research are no longer confined to the lab but have real-world applications, from facial recognition to widely deployed machine-learning systems. AI, if deployed appropriately, can provide significant benefits to economies and society, and enable fairer, more inclusive, and better-informed decision-making. However, such promise will not be realised without significant care and effort, which includes thinking about how the technology's creation and use should be governed, and what level of legal and ethical oversight — by whom and when — is required.
Governments, organisations and industries all have a vital role to play in ensuring good outcomes. To that end, listed below are some of the major recent developments in AI governance around the world.
From loan applications and flight bookings to steering autonomous cars, AI is everywhere, and this very fact has raised alarm among international organisations and prompted calls for an agreement to tackle these unprecedented challenges. All 193 Member States of the UN Educational, Scientific and Cultural Organization (UNESCO) adopted a historic agreement that defines the principles and common values needed for the healthy development of AI. According to UNESCO, cases of unreliable use of AI technologies by law enforcement agencies, challenges to the privacy and dignity of individuals, mass surveillance, and gender and ethnic bias prompted the push to set universal standards.
The agreement paves the way for ensuring transparency, privacy and accountability, and is a step towards achieving the Sustainable Development Goals (SDGs).
After taking input from experts working in the AI domain — developers, private-sector players, government agencies and children's rights advocates — UNESCO presented policy guidance to promote the rights of children in the field of AI. The policy calls for ensuring inclusion of and for children, supporting children's development and well-being, prioritising fairness and non-discrimination for children, protecting children's data and privacy, ensuring safety for children, and providing transparency, explainability and accountability for children.
NITI Aayog, the public policy think-tank of the Government of India, published a report titled ‘Towards Responsible AI for All’ in early 2021. Since NITI Aayog released the National Strategy on Artificial Intelligence (NSAI) in 2018, India has seen manifold adoption of AI across government, research institutions and the private sector. The report highlighted some of the most pressing issues, including: understanding the decision-making process of machines; the black-box phenomenon in deep learning; cognitive, human and data biases; evaluating the performance of AI systems; accountability for harm; and privacy. The report notes the absence of an overarching guidance framework for the use of AI systems and presents a commitment to establishing such a framework, which is crucial for guiding various stakeholders in the responsible management of AI in India. One can read the entire report here.
With the aim of turning Europe into the global hub of trustworthy AI, the European Commission has set out a first-ever legal framework on AI and agreed on a coordinated plan with the Member States to uphold the fundamental rights of people and businesses. According to the Commission, the new rules will strengthen people's trust in the emerging technology and boost investment and innovation across Europe. This coordination is intended to bolster Europe's leadership position in human-centric, sustainable, secure, inclusive and trustworthy AI. The Commission made clear that it is dedicated to encouraging AI research and application across all industries and in all Member States in order to stay globally competitive.
The UK government's Centre for Data Ethics and Innovation (CDEI) has published a roadmap setting out the steps required to build a world-leading AI assurance ecosystem. Additionally, the National AI Strategy sets out three objectives: invest and plan for the long-term needs of the AI ecosystem to maintain the UK's leadership as a science and AI superpower; support the transition to an AI-enabled economy, capture the benefits of innovation in the UK, and ensure AI benefits all sectors and regions; and get the national and international governance of AI technologies right, so as to encourage innovation and investment while protecting the public and fundamental values. Further, DeepMind, Graphcore, Darktrace, BenevolentAI and other leading AI companies are all based in the United Kingdom. The country also has a wealth of premier universities, research centres and organisations that it can draw on to lead in the development of empathic AI.
After almost 18 months of deliberation among experts in digital technology, law, ethics, human rights and health ministries, the WHO released its ‘Ethics & Governance of Artificial Intelligence for Health’ report. New AI-based technologies hold great promise for improving diagnosis, treatment, health research and drug development, as well as assisting governments in carrying out public health functions such as surveillance and outbreak response. Such technologies must therefore prioritise human rights and ethics in their development, deployment and use. The report identifies the risks and challenges associated with AI-based technologies and recommends principles for their responsible use in the healthcare sector. One can read the entire report here.