The world has witnessed the impact of AI on our individual lives and businesses. Governments, public authorities, social media platforms, healthcare, defence, security — AI has made its mark everywhere, and there seems to be no looking back now. However, it is also critical to understand that with this much involvement comes a dire need to take every precautionary measure to curb any abuse caused by the technology.
There have been issues in the past where AI algorithms exhibited bias, disparity, or even contributed to human rights abuses. AI is now avidly used by public authorities for tasks such as resource allocation, decisions on job roles, job allotments, personality evaluation, skill scoring, and more. Any malfunctioning of AI algorithms in these areas could lead to serious consequences for individual human rights.
There has to be the right balance between technological development and human rights protection.
Ethical considerations and human rights should be built in early — from the creation of the AI model, or even from the inception of the idea of using AI-aided technology. In addition, we need more regulations, standards, governing bodies, and technical specifications to ensure better, safer, more trustworthy, and more secure AI ecosystems. A considered approach that identifies and assesses risks at each stage of the lifecycle is essential.
UN Global Pulse is the UN Secretary-General's initiative on big data and artificial intelligence for development, humanitarian action, and peace. The initiative works towards assessing the risks, harms, and benefits of an AI or data analytics project. It provides a detailed analysis of privacy risks and of the likelihood, intensity, and severity of potential harms, and it helps stakeholders make decisions by weighing the risks of a project against its benefits.
NITI Aayog’s National Strategy for Artificial Intelligence (NSAI) recommended establishing clear mechanisms to ensure that the technology is used responsibly, instilling trust in its functioning as a critical enabler of large-scale adoption — one that harnesses the best the technology has to offer while protecting citizens’ rights.
The following principles and tools should be considered to make AI ecosystems more aligned with human rights:
- Non-discrimination- It is essential to take a holistic, humanitarian-context-aware approach when conceptualizing an AI system. There should be a thorough analysis of the environment and people among whom the system will operate and of how it will affect people's lives, with special attention to vulnerable groups.
- Transparency, explainability, and accountability- Technical as well as organizational transparency in AI models, algorithms, data sets, and, most importantly, the purpose of the whole project is essential. A human-centric approach requires humans to think through and analyze scenarios, so humans must be involved as decision-makers and observers at every stage of the project. Accountability on all fronts — technical, social, and legal — is much needed. For this, we require strong national, international, or industry-level monitoring to achieve a robust structure in line with human rights.
- Internal AI principles- A clear, well-defined set of AI principles grounded in human rights and ethics can be a game-changer in operationalizing human rights. Such principles guide developers, testers, and all other stakeholders to focus on human rights and needs at every stage of an AI project, and they can also underpin and strengthen compliance tools and audits.
- Explanatory models- The idea of an explanatory model is to ensure that every member of the technical staff is equipped to explain the workings, principles, intent, and progress of the project and the model to non-technical audiences as well. This approach helps everyone involved understand and think deeply about the inherent risks and make informed decisions.
- Well-thought-out partnerships- Another important step in improving AI ecosystems is to apply ethical principles and tools not only across the project lifecycle but also in partnerships. Partner organizations must also be compatible with, and committed to, the humanitarian approach and development goals.
- Knowledge sharing- Organizations should invest in training and knowledge sharing, identifying gaps and filling them. This can be done collaboratively with humanitarian agencies that can educate staff on the human rights dimensions of AI.
- AI human rights and ethics review board- Such bodies can be given the authority to approve new projects, review and audit them, or even halt an ongoing project when discrepancies arise.
- More human engagement and impact assessments- A comprehensive understanding of human rights can be achieved by engaging with people and thoroughly analyzing the rights and needs of vulnerable and affected groups; this leads to more human-rights-friendly technological advancement.
- Audits- Audits are critical to achieving transparency and accountability. There is a dire need for mechanisms by which even private-sector players make their products auditable; government regulators and other institutional mechanisms can enforce this.
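To make the audit idea above concrete, here is a minimal sketch of one check such an audit might include: comparing an AI system's positive-decision rates across demographic groups to flag potential disparate impact. The data, group labels, and the 0.8 review threshold (the common "four-fifths rule" of thumb) are illustrative assumptions, not part of any specific framework named in this article.

```python
# Hypothetical fairness-audit check: compare selection rates across groups.

def selection_rates(decisions, groups):
    """Return the fraction of positive decisions (1s) per group."""
    rates = {}
    for g in set(groups):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Illustrative audit data: 1 = selected, 0 = rejected
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
ratio = disparate_impact_ratio(rates)
print(rates)   # per-group selection rates
print(ratio)
# Flag ratios below 0.8 for human review (four-fifths rule of thumb)
if ratio < 0.8:
    print("Flag for review: selection rates differ substantially across groups")
```

A real audit would of course go much further — examining training data provenance, documentation, and downstream impact — but even a simple, repeatable check like this one makes a system auditable in the sense the bullet above calls for.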
There is no way we can say no to technological advancement, and the pandemic has made that clearer than ever. We are grateful to technological innovation for providing the shields, weapons, and tools to fight the crisis. In the long run, however, it is vital that we develop more inclusive and human-rights-friendly AI systems. The way to minimize risks is to have principles, the right context, ethics, diversity, inclusivity, collaboration, knowledge sharing, and a robust regulatory framework in place.