AI systems can fail, or be shut down, when their behaviour conflicts with societal norms, ethics, morals, and values. Along with its benefits, AI brings challenges, fears, and ethical risks that organizations cannot ignore. For instance, US health providers use an AI algorithm to guide decisions such as which patients need extra care. Obermeyer et al., researchers at UC Berkeley, identified racial bias in this algorithm: it assigned the same risk score to Black patients who were sicker than white patients, so white patients received higher risk scores and were more likely to be selected for extra care. The bias reduced the number of Black patients identified for extra care by more than half compared to white patients.

The main reason is that the algorithm used health costs, rather than illness, as a proxy for health needs. Less money is spent on Black patients than on white patients with the same level of need, so the algorithm falsely concluded that Black patients were healthier than equally sick white patients.

In effect, the system undermines the principle of equal treatment.
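
The mechanism is easy to reproduce on synthetic data. The sketch below uses hypothetical numbers, not the study's data: patients are ranked by cost (the score an accurate cost-prediction model would approximate), and a group that incurs lower costs at the same level of illness ends up under-selected for extra care.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B (hypothetical groups)
illness = rng.normal(5, 2, n)            # true health need (what we *should* rank by)
# Group B incurs ~30% lower cost at the same illness level (hypothetical access gap)
cost = np.clip(illness, 0, None) * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 0.5, n)

# The care programme selects the top 10% by "risk"; here the score is cost itself,
# which is what a well-fit cost-prediction model would approximate.
selected = cost >= np.quantile(cost, 0.90)

sick = illness > 8                       # equally sick patients in both groups
for g, name in [(0, "group A"), (1, "group B")]:
    share = selected[sick & (group == g)].mean()
    print(f"{name}: share of very sick patients selected for extra care = {share:.2f}")
```

Because the label itself is biased, group B's very sick patients clear the selection threshold far less often, even though their underlying need is identical.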

To avoid such issues, enterprises adopting AI need ethics built into their practice. High-level AI ethics principles help set priorities, but operational principles are also needed to spot issues and resolve them. Operationalising Ethical AI means turning its principles into day-to-day practice and company-wide rules.

Human Agency and Oversight

AI systems should work for humans. They should empower people to make informed, insightful decisions while respecting human rights. Appropriate human intervention is also required; human oversight can be achieved by keeping a human in the loop or a human in command, as sketched below.
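
A minimal sketch of one common oversight pattern, assuming a hypothetical decision service: the model decides on its own only when its confidence is high, and everything else is escalated to a human reviewer who stays in command.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str

def decide_with_oversight(features,
                          model_predict: Callable,
                          ask_human: Callable,
                          confidence_threshold: float = 0.9) -> Decision:
    """Return the model's decision only when it is confident enough;
    otherwise escalate to a human reviewer (human-in-the-loop)."""
    label, confidence = model_predict(features)
    if confidence >= confidence_threshold:
        return Decision(label, confidence, decided_by="model")
    # Low confidence: a human makes (or overrides) the call.
    return Decision(ask_human(features), confidence, decided_by="human")

# Hypothetical usage with stub callables standing in for a real model and reviewer:
decision = decide_with_oversight(
    {"age": 54, "priors": 0},
    model_predict=lambda x: ("approve", 0.72),
    ask_human=lambda x: "needs_review",
)
print(decision)   # decided_by="human" because confidence was below the threshold
```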

  • Responsibility and Accountability: Responsible AI is an approach that makes AI systems work responsibly by incorporating transparency, accountability, and explainability, and it ensures the system fits social values. Some AI systems have become notorious for their decisions: recall the Uber self-driving car that ran a red light in San Francisco, or Google Photos labelling Black people as gorillas. Now imagine the same failures in self-driving cars, drones, or health diagnosis; the consequences could be disastrous. The primary causes are biased data and models too inflexible to handle different scenarios, so algorithmic accountability must be increased.
  • Transparency and Explainability: DARPA's XAI program popularised explainable AI; its objective is to create machine-learning techniques that produce more explainable models while maintaining a high level of learning performance, enabling users to understand, appropriately trust, and effectively manage AI systems. The growing need for transparent AI systems demands explainability: it makes the process understandable to others by interpreting the system's internal logic and decisions, so that even a non-technical person can understand how the system works. Many approaches, libraries, and tools are available to interpret model logic and justify decisions; a minimal example follows this list.
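
The sketch below uses scikit-learn's permutation importance as one illustrative interpretation tool (SHAP and LIME are common alternatives); the dataset and model are stand-ins, not a specific production system.

```python
# Minimal model-interpretation sketch: which features does the model actually rely on?
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffling a feature and measuring the drop in accuracy gives a global,
# model-agnostic estimate of how much the model depends on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]:<25} importance = {result.importances_mean[i]:.3f}")
```

Reporting results like these alongside predictions is one practical way to give non-technical stakeholders a view into "how the system works".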


No Harm

The no-harm principle states that AI, and the data collected from users, should be used for societal wellbeing. It should not harm people's thoughts, beliefs, or social lives.

  • Privacy and Data Governance: To build AI systems, organizations collect user data and draw inferences about people from it. Those inferences can steer a person's path in life, affecting decisions such as hiring, firing, credit, and insurance. For example, knowing that a person is a "Viennese, vegetarian lawyer", a system might infer that they like books, coffee, and animals; but the inference may be wrong if, say, the person gave up coffee after moving to the UK, so it is not a reasonable inference to act on. Many tech giants use their customers' personal data and share it with third parties, which use it to infer creditworthiness, insurance risk, and more, without the customer's knowledge. Systems should operate according to regulations, norms, and values, and be acceptable and comprehensible to users and the public.
  • Social and Environmental Sustainability: The results of AI should serve societal and environmental wellbeing. Training a single large language model can emit hundreds of tonnes of CO2 equivalent, roughly five times the lifetime emissions of an average American car, or about 300 round-trip flights between San Francisco and New York, and large-scale Edge AI deployments add further emissions. Evaluating the environmental impact of AI is therefore a significant part of evaluating an investment; a back-of-the-envelope estimate appears after this list. To reduce their carbon footprint, enterprises can shift workloads to energy-efficient cloud services.
  • Robustness and Reliability: The system should detect malfunctions and failures and avoid them, keep the system and user data secure, and identify model vulnerabilities, especially in safety-critical cases such as autonomous driverless vehicles.
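
As a rough way to assess the environmental impact mentioned above, a common approach is to estimate energy use and multiply by the grid's carbon intensity. All figures in the sketch below are illustrative assumptions, not measurements.

```python
def training_emissions_kg(gpu_count: int,
                          gpu_power_kw: float,
                          hours: float,
                          pue: float = 1.5,
                          grid_kg_co2e_per_kwh: float = 0.4) -> float:
    """Back-of-the-envelope CO2-equivalent estimate for a training run.

    pue: data-centre power usage effectiveness (overhead for cooling, etc.)
    grid_kg_co2e_per_kwh: carbon intensity of the local electricity grid
    """
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2e_per_kwh

# Hypothetical run: 64 GPUs at 0.3 kW each, training for two weeks.
kg = training_emissions_kg(gpu_count=64, gpu_power_kw=0.3, hours=14 * 24)
print(f"~{kg / 1000:.1f} tonnes CO2e")
```

Estimates like this make it possible to compare training strategies, hardware choices, and data-centre regions before committing to an investment.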


Justice

Human rights give equal value to every person in the world, so AI systems are also expected to facilitate diversity and equal treatment.

  • Social Non-Discrimination: According to the Universal Declaration of Human Rights, every human being is entitled to equality and freedom without distinction of race, colour, sex, religion, caste, and so on. But growing reports of bias in AI systems are raising anxiety. For example, an AI system used in the United States to assess patients' need for healthcare was biased against Black patients: white patients were about 35% more likely to be chosen for care than Black patients with the same health needs. To resolve such issues, we must spot these biases and mitigate them; algorithmic fairness checks, such as the one sketched below, are one solution.
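
A minimal sketch of one algorithmic-fairness check: the demographic parity (disparate impact) ratio between two groups' selection rates. The data here are hypothetical; in practice the predictions would come from the model under audit.

```python
import numpy as np

def disparate_impact_ratio(selected: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest group selection rate to the highest.
    Values well below 1.0 (e.g. under the common 0.8 rule of thumb) flag possible bias."""
    rates = {g: float(selected[group == g].mean()) for g in np.unique(group)}
    return min(rates.values()) / max(rates.values())

# Hypothetical audit of a care-allocation model's decisions
selected = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])   # 1 = offered extra care
group    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # two demographic groups
print(f"disparate impact ratio = {disparate_impact_ratio(selected, group):.2f}")
```

Metrics like this only detect disparities; mitigation (re-labelling, re-weighting, or constrained training) is a separate step, and the right fairness criterion depends on the application.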


Ethical AI, supported by AI regulation, helps reduce risks such as mass surveillance and human-rights violations. AI needs sensible regulation that balances its benefits against its potential harms, and initiatives to develop AI with ethical standards are growing rapidly. Ethical frameworks minimize AI risks and ensure safe, human-centered, and fair AI.

The following features of Ethical AI show how it makes AI systems safer and fairer:

  • Social Wellbeing: Ethical AI makes the system serve individuals, society, and the environment, working for the benefit of humankind.
  • Avoid Unfair Bias: An ethically designed AI system does not discriminate unfairly against individuals or groups. It provides equitable access and treatment, and it detects and reduces unfair biases based on race, gender, nationality, and other attributes.
  • Privacy and Security: AI systems keep data security at the top of their priorities. Ethically designed AI provides proper data governance and model management, and privacy-preserving AI principles help keep data secure.
  • Reliable and Safe: The AI system works only for its intended purpose, reducing the chance of unintended outcomes.
  • Transparency and Explainability: An ethical system explains each prediction and output, providing transparency into the model's logic so users can see how the data contributed to the result. This disclosure justifies the output and builds trust. Akira AI systems follow the principles of Explainable AI and therefore provide complete transparency and explainability, which builds users' trust.
  • Governable: The system is designed to perform its intended tasks and to detect and avoid unintended consequences.
  • Value Alignment: Humans make decisions by weighing universal values; ethical frameworks help AI systems take those same values into account.
  • Human-centered: An ethical AI system values human diversity, freedom, autonomy, and rights. It serves people while respecting human values, performs no unfair or unjustified actions, and protects individual freedom, autonomy, and the rights of individuals.
