Some of you may remember that I originally started Leftovers to share pieces no one else wanted to publish (hence the name). Here’s a script I once wrote for an internship application:
Quick, tell me: who has access to photographs of your face?
Yourself, obviously. Anyone who’s ever taken a photo of you. Facebook, Instagram, Twitter, Snapchat, even LinkedIn. Everyone who follows you (or your friends) on social media. Your phone gallery, so by extension, Apple or Google Photos. Any websites you’ve joined with your Google account. Your school’s ID card photos.
CCTV footage, of course—in educational institutions, your housing complex, airports and railway stations, the pharmacy, the bank, shopping malls, centres for competitive exams. And don’t forget the state—your Aadhaar card, PAN card, Voter ID, passport, driving licence, marriage certificate—you get the idea.
Now imagine if a surveillance camera in a laptop store in Delhi could figure out what movie you watched last weekend in Chennai. Sounds dystopian? Stick around, it’s time to talk about faces.
Facial Recognition Technologies (FRT) have become more mainstream (and more sophisticated) in recent years. In fact, you’ve probably used some kind of FRT already. It’s what makes the filters work on Instagram and Snapchat. Some Apple and Samsung phones use it to secure your device. And Google Photos and Apple Photos use it to automatically tag the people in your pictures.
Here’s how it works: When you input an image, the software first detects all the faces by finding edges and patterns. Then the algorithms measure the distances between distinctive facial features. Together, these measurements form a signature that is distinctive to each face. Finally, the faces are matched against a database of faces and names.
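The matching step can be sketched with toy numbers. Real systems use high-dimensional embeddings learned by neural networks, but the principle is the same: treat each face’s measurements as a vector, and call two faces the same person when the vectors are close enough. Everything below — the names, the three-number vectors, and the threshold — is invented for illustration, not taken from any real system.

```python
import math

# Toy "face signatures": in practice these are long embedding vectors
# produced by a neural network; here, made-up 3-number vectors stand in
# for the measured distances between facial features.
database = {
    "Asha":  [0.42, 1.10, 0.73],
    "Ravi":  [0.91, 0.35, 1.22],
    "Meena": [0.40, 1.08, 0.70],
}

def distance(a, b):
    """Euclidean distance between two measurement vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(probe, db, threshold=0.2):
    """Return the closest name in the database,
    or None if nobody is close enough."""
    name, vector = min(db.items(), key=lambda item: distance(probe, item[1]))
    return name if distance(probe, vector) <= threshold else None

# A new photo whose measurements are very close to Asha's stored ones:
print(match([0.41, 1.09, 0.72], database))  # prints: Asha
```

Note that the threshold does real work here: set it too loose and strangers get matched to the wrong name; set it too tight and the system fails to recognise the right person. This trade-off is exactly where the error rates discussed later come from.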
There are two main ways of using FRT: Verification and Identification.
Verification is when an image is compared to another image from an existing database to verify that the person is who they claim to be. In other words, authentication of identity on a 1:1 basis. When you unlock your iPhone with Face ID, it compares your face with the one stored on the device. If they match, you’re in.
Identification, on the other hand, happens when FRT is used to identify an individual from a pool of many people. This kind of 1:N identification is typically used for security and surveillance purposes, like finding missing persons, catching shoplifters, or hands-free financial transactions, like China’s ‘pay with a smile.’ This is also the principle behind main doors that open automatically for employees or residents but not for strangers.
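The 1:1 versus 1:N distinction can be made concrete with the same toy vector idea: verification compares a face against one stored template, while identification searches a whole gallery for the nearest face. The gallery names, vectors and threshold below are all hypothetical.

```python
import math

def distance(a, b):
    """Euclidean distance between two toy face vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(probe, stored_template, threshold=0.2):
    """Verification (1:1): is this face the SAME person as one stored template?"""
    return distance(probe, stored_template) <= threshold

def identify(probe, gallery, threshold=0.2):
    """Identification (1:N): WHO, out of many enrolled people, is this?"""
    best = min(gallery, key=lambda name: distance(probe, gallery[name]))
    if distance(probe, gallery[best]) <= threshold:
        return best
    return None  # no confident match anywhere in the pool

# Hypothetical enrolled residents for an auto-opening door:
gallery = {"resident_1": [0.1, 0.9], "resident_2": [0.8, 0.2]}

print(verify([0.12, 0.88], gallery["resident_1"]))  # prints: True
print(identify([0.79, 0.21], gallery))              # prints: resident_2
```

The design difference matters: verification answers a yes/no question against one consenting person’s template, while identification silently searches everyone in the pool — which is why 1:N systems are the ones used for surveillance.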
Governments all over the world are particularly excited about the possibility of using FRT to catch criminals and anti-social elements. Wouldn’t it be brilliant, they say, if an amusement park CCTV camera could scan crowds, cross-check faces with a criminal database, and send an alert when it spots a wanted murderer?
But it’s never as simple as that.
To begin with, no FRT system is 100% accurate. Research has shown higher error rates for women and people with darker skin tones. These technologies are also less accurate in crowds or when people are wearing face masks. In 2019, for example, an FRT system used to locate missing children in Delhi had an accuracy rate of less than 1 per cent. The Ministry of Women and Child Development reported that it couldn’t even accurately distinguish between boys and girls.
Even the most advanced algorithms could misidentify an innocent person as a criminal. The police officers’ confidence in the technology, coupled with religious, class and caste biases, could swiftly lead to injustice. Despite these very valid concerns, governments and law enforcement have continued to push for FRT use.
The solution to this is not to make FRT more accurate, because facial recognition itself is a huge danger to privacy and offline anonymity.
Think back to the last time you were in a public space. There must have been hundreds of strangers all around. And yet, you could buy any book, meet anyone, get off at any station, and no one would care to look, except out of idle curiosity, and certainly no one would care to remember. If you tripped, you could still walk away confident that you’d never meet any of them again. This anonymity is crucial. It offers the freedom to participate in public life without compromising your privacy.
Facial recognition allows anyone—people around you, corporations, the state—to know your name. You would no longer be able to choose whether to introduce yourself. Your consent would not be necessary.
As facial recognition apps become more accessible, they could enable abusers and stalkers. A creepy guy on the train could take a picture of you and instantly find your social media profiles (The hugely popular FindFace does just that in Russia).
Your face is also useful to corporations. Imagine a future where CCTV cameras at the gym identify you and sell that data, sending you targeted advertisements for running shoes. And it’s not just for consumers. Imagine you send a job application, and the company could look up the number of times you’ve visited bars or protest sites. Horrifying, right?
Even more troubling is the concentration of power in the hands of states. Increasingly, the goal of greater security has led to privacy violations and potential (or actual) mass surveillance. States could easily collect data about patterns of behaviour or trace your movements, even if you leave your phone at home.
States mostly adopt FRT for the purpose of identifying and catching criminals. But who defines a criminal? Is it illegal or wrong to be a student activist or a protester? Some states seem to think so.
Law enforcement agencies in the US used a facial-recognition database created by Clearview AI (the “Search engine for faces”)—with billions of public photos scraped from social media platforms—to identify and target BLM protesters.
In India, the Delhi police have used photos, videos and (illegal) drone footage to identify more than a thousand anti-government protesters and alleged rioters. This move, clearly exceeding the established scope for FRT, was an attempt to intimidate individuals, target the leaders and clamp down on dissent.
Similar technologies have been adopted by the police departments of other states. Facial recognition has also made inroads into airports to authenticate the identity of passengers, with trials underway for the Digi Yatra programme at Hyderabad, Delhi and Bengaluru.
Astonishingly, the development and use of facial recognition has continued in the absence of any strong individual privacy protections, rules for data capture or a concrete surveillance law regime.
Earlier this year, the National Crime Records Bureau announced the National Automated Facial Recognition System, a centralised facial recognition surveillance system, with the aim of modernising the police force and criminal identification. According to the Request for Proposals, this system will create a national database of photographs, pulling images from passports, prison records, newspapers, CCTV cameras, social media, the National Automated Fingerprint Identification System (AFIS), the Ministry of Women and Child Development’s KhoyaPaya portal, the Crime and Criminal Tracking Network and Systems (CCTNS), the Interoperable Criminal Justice System (ICJS) and so on.
The supposed purpose of this system is to identify criminals and patterns of criminal activity, as well as missing persons and unidentified dead bodies. However, in this legal vacuum, it’s not hard to imagine this technology being used to monitor and target political opponents, civil rights activists, government critics, journalists and citizens. This technology is an obvious danger to our rights to privacy, personal freedom, and political participation.
Ideally, we would have strong data protection legislation, laying out standards of data collection, storage and usage, as well as checks and balances on the use of facial recognition systems by corporations and the state. Further, this technology should be regulated by an independent authority consisting of relevant stakeholders.
So what can we do about Facial Recognition?
There are some basic precautions everyone should take. Be careful what you put up on social media, especially when you’re going to a protest. Don’t post pictures of people without their consent. Opt out of face-recognition features when you can. For example, turn off face recognition on Facebook.
In addition, we need to raise awareness and demand transparency and accountability, to show a firm citizens’ opposition to this bulldozing of our rights.
All this provokes a larger question: Is it too late? Will we live in the spotlight of technology forever? Will facial recognition take over the world?
Fortunately or unfortunately, the answer depends on what we do with it today.