Artificial Intelligence has proved to be one of the most innovative yet polarising technologies of our time. Today, AI is a critical part of our daily lives: from social media to digital assistants, from email to recommendations on video-streaming apps, everything makes use of this technology. One area where AI is widely used is facial recognition, though there is much debate about its pros and cons.
As the term suggests, facial recognition is an AI-based application that uses biometrics to map facial features from an image or a video. Apple introduced Face ID in 2017 as a privacy feature, ensuring only the owner has access to the content on their device. Today, banks follow the same principle, offering facial recognition to their users for enhanced security. Even Facebook uses it to predict photo tags. So, is facial recognition blurring the line between innovation and privacy?
With the widespread use of this application, especially in public spaces, people fear the rise of surveillance. It is akin to being on the ‘Big Brother’ show, and being watched over all the time. While AI and facial recognition provide a host of benefits in the area of safety and wellness, the question of privacy always arises. How can we ensure technology is used in the service of humanity, rather than to control it?
Before that, let’s try and understand what facial recognition is all about.
When facial recognition first emerged, it was considered useful but relied heavily on human involvement. With the rise of AI and machine learning, that has drastically changed. Today, it isn't difficult to capture, aggregate or analyse vast numbers of facial images from cameras, sensors, smartphones, and social media sites. Using the right algorithm, computer scientists can quickly and efficiently build models of what a person looks like, even if the picture is blurred or grainy. This is an important breakthrough, yet it also gives cause for concern.
Technology is evolving at a rapid pace, and when it comes to AI and facial recognition, things are becoming more refined with time. Tech giants like Amazon, Facebook, Google and Microsoft are working to improve their facial detection software for enhanced precision and accuracy. Dell and HP, too, are testing next-generation servers to enable faster, more seamless facial-image sharing.
According to the National Institute of Standards and Technology (NIST), USA, facial recognition algorithms improved more than 20-fold between 2014 and 2018. Error rates fell to roughly one-twentieth of their previous level, that is, by about 95 per cent, a significant achievement.
Experts feel that advances in the technology could lead to more responsible adoption and use. Take an example: the Indian government has started using this technology in a big way to enhance its law enforcement capabilities. Reports suggest that it plans to build a nationwide Automated Facial Recognition System (AFRS) to identify and track criminals across the country.
The Delhi Police has also been using facial recognition as an important tool to enhance its functioning. Developed by India-based INNEFU Labs, this tool uses AI to sift through a massive data set to match individuals, thereby identifying them against their personal data. The technology detects and extracts faces from an image; each face is then converted into a vector of 512 values. The software then calculates the distance between the query vector and each vector in a chosen database; the shortest distances yield the closest matches, which are deemed the final result.
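The matching step described above, finding the database vector with the shortest distance to a query face's 512-value vector, can be sketched in a few lines. This is a minimal illustration, not INNEFU's actual implementation: the embeddings here are randomly generated stand-ins for the vectors a real face-embedding network would produce.

```python
import numpy as np

def find_closest_matches(query, database, top_k=3):
    """Return indices of the top_k database vectors with the smallest
    Euclidean distance to the query vector."""
    distances = np.linalg.norm(database - query, axis=1)
    return np.argsort(distances)[:top_k]

# Hypothetical 512-dimensional embeddings; a real system would derive
# these from a face-detection and embedding model.
rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 512))          # 1,000 enrolled faces
query = database[42] + rng.normal(scale=0.01, size=512)  # noisy view of face 42

matches = find_closest_matches(query, database)
print(matches)  # index 42 should rank first
```

Production systems typically replace this brute-force scan with an approximate nearest-neighbour index so that millions of faces can be searched in milliseconds.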
The benefits of facial recognition are many, from finding missing children and identifying criminals to ensuring safety in public areas and preventing human trafficking. A case in point is the Delhi Police's success in tracing nearly 3,000 missing children within four days of adopting a new facial recognition system. With the help of a database called TrackChild, the system compared previous images of missing children against about 45,000 current images of kids in the city.
According to Gartner, there will be an 80 per cent reduction in missing people in mature markets in 2023, as compared to 2018.
The Ministry of Railways, Government of India, also plans to use facial recognition to tackle crime. The system is under trial in Bengaluru, where about half a million faces are scanned every day and, using AI, matched against a police database of criminals. The Ministry also plans to extend these facial recognition applications onboard trains.
There are several other cases where AI and facial recognition are being used for the right reasons. FishEyeBox is a team of scientists, programmers, designers and automotive enthusiasts working on AI innovation for connected things. Their work ranges from high-level driving-behaviour modelling and robotic scene estimation to low-level driving firmware actuation for universal compatibility.
Pinaki Laskar, Founder and CEO of AI startup FishEyeBox, says that “AI systems are now measuring people’s facial expressions to assess everything from mental health, whether someone should be hired, to whether a person is going to commit a crime.”
“By looking at the images in this data collection, and seeing how people’s personal photographs have been labelled, raises two essential questions - where are the boundaries between science, history, politics, prejudice and ideology in artificial intelligence? And who has the power to build and benefit from these AI systems?,” Mr. Laskar adds.
With the rise of facial recognition technology, there are warnings of misuse from the legal fraternity, privacy watchdogs, and human rights activists. Many cite the example of China, where everything from jaywalking and speeding to border control is monitored through security surveillance. While the country is a leader in AI adoption, it has been heavily criticised for 'spying' on residents.
In the United States, San Francisco has banned the use of facial recognition by law enforcement and other agencies, because people feel they are being 'watched' all the time.
Besides these concerns, other factors have led to this public outcry. People fear that their images and data will be misused, especially in the absence of stringent regulations governing facial recognition technology. Moreover, lower accuracy and higher bias are concerns, especially in law enforcement applications, where they can lead to misidentification and wrongful convictions, particularly if the technology has not been updated. Accuracy and accountability are both essential when technology touches the justice system.
There’s also distrust in the security of data privacy and the potential loss of important information due to a data breach.
Advocates of technology and privacy both agree that a careful balance needs to be maintained. To build trust, technology must be used in an appropriate manner with safeguards in place.
According to Anirudh Rastogi, founder of Ikigai Law, “Valid privacy concerns arise out of the improper use of facial recognition technology by governments and law enforcement agencies- especially in the absence of any law/policy governing its use. That being said, it is important not to discount the value of facial recognition tech for business purposes and also in cases of law enforcement.”
“It’s important to put in place a transparent oversight and guidance framework to balance the benefits of facial recognition with potential risk,” he further stated.
Banning technologies proves to be a counterproductive measure; it stifles public debate rather than encouraging it. As Mr. Laskar rightly says, “AI and face recognition technology’s goal is to use technology to help us see ourselves, but not to use ourselves to see technology.”
Image by Mike MacKenzie via Flickr