Facial recognition is arguably among the more controversial applications of AI today. Experts believe its benefits would far outweigh the criticisms the technology attracts if there were more awareness of its impact and utility, and greater clarity over its terms of use. Dr. Ananthakrishnan, CTO and co-founder of DaveAI, in Conversations with INDIAai, discusses the history of facial recognition, the moral and legal quagmire the technology continues to attract, and how the technology community should rally together to debunk misconceptions and promote its fair use.

Can you give a brief timeline of the progression of Facial Recognition from the time it began to where it stands today? 

The idea of machines detecting and recognising human faces has been around since the 1950s. The first attempts were made by scientists at Stanford and MIT. In a pre-semiconductor era, they were able to recognise a human face within the limitations of the analogue equipment of the time, which was a real breakthrough. However, serious work on facial recognition began in the 1990s, following the resurgence of neural network algorithms. These advances were still restricted to labs and didn't solve any real use cases. It was in the late 1990s that facial recognition started becoming more practical in application, used mostly by governments and private companies for surveillance and security. The technology has since become far more accessible to developers and scientists, and databases are more prevalent; this can be considered among the biggest milestones spurring research into and application of facial recognition technology. It was mostly universities in the USA, China and Europe that began collecting raw data - i.e. pictures - from which computers were taught to recognise human faces. The technology eventually progressed from facial recognition to feature detection and identification, with an understandably higher level of sophistication.

Can you touch upon the most commonly seen problems that Facial Recognition poses and how can tech companies address these challenges?

The biggest problem is privacy. It was, and still remains, the most challenging aspect of this particular branch of AI. A person's face is their most identifiable feature. Using images of a person's face for monetary gain or for surveillance without their consent is technically illegal. If people are not permitted to violate another's privacy, machines certainly cannot be, even when the purpose is security. The most pressing challenge is the mishandling or inappropriate use of data. Personal data is worth a lot to companies and governments today, especially when bundled with other data identifiers. Biometric spoofing is another cause for concern - for instance, at ATMs that capture a person's facial features before disbursing cash. The biggest challenge tech companies face is deciding how personal data should be used and, more importantly, how consumers can be made aware of the terms of use. If my face is being used by another company for any reason, I should be aware of it, be compensated, or at least have the choice to deny consent.

So, how can facial recognition experts develop a more equitable landscape for the technology to thrive?

You may have come across reports of facial recognition technology failing to identify people of colour, women and so on. The simplest explanation is a lack of sufficient and varied data. While the gender biases exhibited by facial recognition technologies are diminishing, issues do crop up from time to time. With more data, these biases can be mitigated and developers can strive for accuracy. This also comes down to increasing the opportunities to capture data - in this case, the installation of more cameras in public. Other hurdles that hamper the pace of development of facial recognition technology include face art, beards, tattoos, unusual hairstyles and so on. Researchers are working on classifying these features and making them part of primary datasets. Ultimately, a machine has limitations in ways that a human brain doesn't. Technologies shouldn't be dismissed as unethical or unfit for social use based on a macro perspective; they have to be gauged and assessed on a daily basis. "Have the algorithms improved since yesterday? Have they reduced their biases over time? Is their learning curve moving in the right direction?" The answers to these questions establish the legitimacy of machines and, more often than not, come back to the richness and integrity of the data. This is how we can hope for some level of equity among machines.

Many policy makers are pushing for regulation of facial recognition, especially in areas like law enforcement and in uses involving minors. How can the tech community work with policy makers and legislators to regulate wisely, without thwarting the progress of the technology?

Technologists should work with lawmakers to identify how the tech can be misused. The technology should not be dismissed altogether, but cases that could lead to gross misuse need to be identified - and this is where the law should come down harshly. Europe is a pioneer in establishing personal agency and privacy with a strong consent-based system, and it reflects the ethos of the region and its inhabitants. I don't think we need to follow the same standards here, but the law should afford some clarity on the terms of use of these technologies. As I mentioned earlier, companies are starved for data, but there should be guardrails within which they can ethically operate to procure it. Companies should not decide for themselves how far they can go; legal and regulatory compliance measures are absolutely essential. Such practices ensure user trust and the longevity of the technology and its applications.

Facial recognition is among the more controversial elements of AI despite the benefits it provides. Is this technology always going to have flaws in it?

No, I don't think so. The tech is improving every day. Let's not forget some of the obvious advantages it affords - for instance, the average human brain can reliably identify 30-40 faces from a group, but machines can recognise millions at a time. That scale of performance enables mass surveillance exercises that are humanly impossible. It also comes down to how the technology is used: China is an example of an extreme police state. This is where policy makers have to step in to present the advantages of the technology while also laying out its drawbacks, and debate sensibly for its judicious use.

About: Dr. G. Ananthakrishnan, Co-founder and CTO of DaveAI, has more than 15 years' experience in Machine Learning. He holds a PhD in Speech Communication Systems from KTH, Stockholm, and an MSc in Signal Processing from IISc. He has over 30 papers published in reputed conferences and journals, and three patents to his name.

