Speakers: Richa Singh, Professor at IIT Jodhpur and Connor Wright, Researcher, Montreal AI Research Institute
Moderator: Jibu Elias
One of the biggest discoveries of the past decade has been data's immense potential to help businesses expand. As the technology around data has evolved, it has brought with it a new set of complexities and challenges. One of them is bias in data, which has opened a huge debate on the ethical and moral aspects of AI.
If technology is meant to be inclusive, why does bias creep in, and how can it be addressed? This week's webinar on Mitigating Bias in Facial Recognition Systems was led by Richa Singh, Professor at IIT Jodhpur, and Connor Wright, Researcher, Montreal AI Research Institute. Facial recognition is one of the most widely used technologies today, playing a key role in surveillance and law enforcement and finding extensive use in retail.
Work on facial recognition dates back to the 1960s, about a decade after AI emerged as a field. At the time, systems relied on the geometric distances between key points of a person's face, such as the eyes, nose, and mouth. Singh began working in this field in 2000, and while the technology was nearly four decades old, its performance was limited. In the last 20 years, it has come a long way: from the rudimentary task of merely identifying faces, facial recognition is today used to rebuild faces in surgery, to auto-tag people on social media platforms, and more. Singh's own work between 2009 and 2010 focused on facial recognition for surgical requirements. Since the advent of deep learning, there have been great strides in the use of this technology, but those strides have also propagated bias. Building systems to counter this bias is the need of the hour, especially with machines' accuracy levels being very high (mostly in the high 90s). While the West places greater emphasis on this research, it is still early days for India. Singh's current research focuses on fortifying a machine's dependability by ensuring explainability, interpretability, fairness, and accuracy, and by addressing bias, across AI technologies including facial recognition.
Wright offered a contrarian view on managing bias. He believes the initial, surface-level use of the technology is very helpful, but a closer look at how the machine functions reveals many flaws. Citing the examples of IBM stepping back from its plans to design facial recognition systems for the UK police, and the widespread resistance to South Wales Police's use of facial recognition on criminals over the risk of infringing human rights, Wright believes much work remains in addressing the algorithmic bias that makes the machine a liability in some situations instead of fulfilling its very purpose. Regardless of accuracy, bias should be addressed for what it is and what it represents. And addressing this bias goes back to the data itself. For a less biased and more varied experience, millions of data points need to be utilized so that the machine learns broadly rather than being restricted to one specific kind of data, which would reinforce certain biases. For instance, if a facial recognition system is fed images only from a certain geographic region, it will recognize only those kinds of facial features and misclassify the rest. In a country like the USA, there are white Americans, African Americans, Hispanics, Asian Americans, and Native Americans, and these are just broad categories. If the machine is predominantly fed images of white Americans, it will recognize only them and not the rest. The bias a machine propagates is a projection of the data fed into it. Search engines, which many companies use as a source of data or images, are generally not the best places to find a wide range of datasets: they are optimized to deliver results based on geography and cultural preferences. For bias to truly be addressed, the entire ecosystem needs to be involved in the decision-making process. This will also allow engineers and data scientists to better understand the intent and purpose of the algorithm.
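The point about skewed training data can be made concrete with a minimal audit of a dataset's demographic composition. This is an illustrative sketch only; the group names and counts below are invented and do not come from the webinar.

```python
from collections import Counter

# Hypothetical training-set composition (invented for illustration):
# heavily skewed toward one demographic group.
train_groups = ["group_a"] * 9000 + ["group_b"] * 600 + ["group_c"] * 400

def composition(groups):
    """Return each group's share of the dataset."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

shares = composition(train_groups)
for group, share in sorted(shares.items()):
    print(f"{group}: {share:.1%} of training data")
# A model trained on this split sees group_a in 90% of examples,
# so the features it learns will disproportionately reflect that group.
```

Auditing composition like this before training is one of the simplest checks the "entire ecosystem" can run, long before an algorithm ships.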
Singh and her group of researchers at IIT Jodhpur are working on algorithms that can help detect and mitigate bias. She described some interesting cases of bias driven by geography and culture, such as the Aadhaar implementation trials at the beginning of the decade, a project she was involved with. A consistent challenge was the machine being unable to read fingerprints on hands decorated with henna, as is commonly seen in India. Once the team understood why the machine could not read henna-covered prints, the anomaly was fixed; accounting for henna on one's hand is now part of global guidelines for fingerprint recognition. Similarly, while working on a gender prediction algorithm, Singh found that the system consistently classified her as male until she wore a bindi, which is worn by a vast majority of Indian women.
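One simple way to surface the kind of bias such algorithms look for is to compare a model's accuracy across demographic groups, where a large gap is a red flag. This is a generic fairness check, not Singh's specific method, and the evaluation numbers below are invented for illustration.

```python
# Hypothetical per-group evaluation results (invented for illustration).
results = {
    "group_a": {"correct": 970, "total": 1000},
    "group_b": {"correct": 820, "total": 1000},
}

def accuracy_gap(results):
    """Largest difference in per-group accuracy: one simple bias signal."""
    accs = {g: r["correct"] / r["total"] for g, r in results.items()}
    return max(accs.values()) - min(accs.values())

print(f"accuracy gap: {accuracy_gap(results):.2f}")
```

A gap near zero suggests the model treats groups comparably; here the 15-point gap would flag the model for a closer look at its training data.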
Wright believes there is a fair case for resisting facial recognition technology until regulations are framed; Singh, on the other hand, seeks regulation rather than a blanket ban. Given that this is a rapidly emerging and developing field within AI, with significant utility in business and governance, it is important to build its ethical frameworks gradually. Ultimately, human discretion must always reign: a machine cannot completely replicate the human cognitive experience.