Introduction

In its discussion of AI & society, the report argues that designing trustworthy, better ML requires in-depth research into aligning machine learning with human norms. It also notes that the cost of training machines is becoming a problem; in this context, it cites the recent GPT-3 model, with 175 billion parameters, as not something one could train on a few GPU instances spun up in the cloud. As far as bias and algorithmic injustice are concerned, the reader is made aware that algorithms, unless explicitly instructed to, will not consider individual or group fairness principles when categorizing objects. Left unchecked, algorithms can create highly skewed and homogeneous clusters that do not represent the demographics of the dataset. A new clustering method called Fair K-Means can address these problems by building statistical decision-making procedures that abide by principles of equality, equity, justice, and respect for diversity, the core tenets of democratic societies.
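To make the idea concrete, here is a minimal sketch of fairness-constrained clustering. This is not the exact Fair K-Means algorithm the report refers to; it is a toy illustration of the same principle, where each cluster receives a per-group quota proportional to that group's share of the dataset, so cluster demographics mirror the overall demographics. The function name and quota scheme are our own illustrative assumptions.

```python
import numpy as np

def fair_kmeans(X, groups, k=2, iters=20, seed=0):
    """Toy fairness-constrained k-means (an illustrative sketch, not the
    report's Fair K-Means): each cluster gets a quota of points from each
    protected group proportional to that group's overall share, so no
    cluster can become demographically skewed."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    n = len(X)
    labels = np.empty(n, dtype=int)
    for _ in range(iters):
        # Assign each protected group separately under per-cluster quotas.
        for g in np.unique(groups):
            idx = np.where(groups == g)[0]
            quota = np.full(k, len(idx) // k)
            quota[: len(idx) % k] += 1          # spread the remainder
            d = np.linalg.norm(X[idx, None] - centers[None], axis=2)
            # Greedy: closest (point, cluster) pairs first, respecting quotas.
            assigned = np.zeros(len(idx), dtype=bool)
            for flat in np.argsort(d, axis=None):
                p, c = divmod(flat, k)
                if not assigned[p] and quota[c] > 0:
                    labels[idx[p]] = c
                    assigned[p] = True
                    quota[c] -= 1
        centers = np.array([X[labels == c].mean(axis=0) for c in range(k)])
    return labels, centers
```

On data where the two protected groups occupy separate regions, unconstrained k-means would simply split along group lines; the quota step above forces every cluster to contain each group in proportion to its dataset share, at some cost in within-cluster distance.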

The report also discusses how online disinformation, often disseminated intentionally through social media, has become a powerful and persuasive tool for influencing political views and decisions. It taps into a new line of research by focusing on multimedia disinformation that combines text and images. It outlines two recommendations: a) fact-checkers should be widely adopted as journalistic tools, since they are an effective way to debunk false information online, and b) media literacy should be fostered so that readers can spot misinformation and rely on reliable news sources.

Another interesting area highlighted is humans & AI, where the authors identify potential areas in which AI may impact people with disabilities; computer vision, speech systems, text processing, integrative AI, and other AI techniques can be applied to achieve the desired outcomes. At the same time, the report highlights the risks of full automation and the importance of designing decision pipelines that preserve human autonomy, avoiding the so-called token human problem in human-in-the-loop systems. It also identifies some of the challenges that arise when comparing machine perception performance against that of humans.

Realizing that AI represents one of the greatest challenges and opportunities facing workers and society, the report analyses the labour impact of AI. It points out that AI is fundamentally a prediction technology; as it improves, it will lower the cost of prediction, leading to wider use of the technology in prediction-related tasks. The effect this will have on labor then depends on the relative importance of prediction in a given job and falls into four major possibilities: a) AI threatens jobs that are pure prediction, such as forecasting within operations departments, legal summary work done by paralegals, and email responses drafted by executive assistants; b) a task may have a decision component beyond prediction, but that component matters less once predictions become better and cheaper, as with autonomous vehicles, where driving involves both prediction and judgement and AI-driven cars are nearly here; c) AI may increase the need for labor where expert judgement becomes an important complement to better prediction, for example in emergency medicine, where AI-driven diagnostics are better, faster, and cheaper, giving medical staff a more accurate understanding of patient needs; and d) new tasks may be created with the advent of AI, demanding a different set of skilled labor.


Relevance of the Report

The report touches on a wide variety of relevant topics that have taken the AI world by storm, such as privacy and the impact of AI on labour. It gives the reader a holistic, unbiased view of the latest ethical developments in AI and of what needs to be done to ensure fair and unbiased outcomes. What sets this report apart is that instead of merely restating what we already know about AI and ethics, it describes what is actually being done. The report emphasizes that as AI development continues to expand rapidly across the globe, reaping its full potential and benefits will require international cooperation in the areas of AI ethics and governance. Most importantly, the report strikes an optimistic tone, seeing AI as having the potential to do more good than harm.


Key Takeaways

  1. The report focuses on topics across a wide variety of developments and implementation pathways.
  2. The report shares a glaring challenge with most AI ethics initiatives: most are Western-focused, and if this is not addressed soon it will become a long-run hindrance. For example, while many organizations have moved from principles to action, many are still grappling with what AI ethics actually means.
  3. The report analyses ethics in AI from nine different perspectives with the most interesting being the environmental impacts of AI and how very large compute requirements can have externalities.
  4. The report lays the foundation for thinking beyond the core AI ethics principles and taking into consideration the impact of the technology on the society as a whole. 


DISCLAIMER

The information provided on this page has been procured through secondary sources. In case you would like to suggest any update, please write to us at support.ai@mail.nasscom.in