In Your Face

Evgeny Morozov

  • Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance by Kelly Gates
    NYU Press, 261 pp, £15.99, March 2011, ISBN 978 0 8147 3210 6

Until last summer, hi-tech riots – broadcast on YouTube and organised by BlackBerry – were mostly the preserve of enterprising dissidents in Iran and China. But in June hordes of ice hockey fans in Vancouver, outraged by the local team’s loss to a Boston rival, filmed themselves smashing cars and burning shops. Then it happened here. The crackdowns that follow such riots are equally hi-tech. In both Britain and Canada ordinary members of the public set up Facebook groups to share pictures and videos from the riots, using Twitter to name any identified perpetrators and alert the police. This was cyber-vigilantism at its most creative.

The day after the Vancouver riots, the Insurance Corporation of British Columbia – a state-owned insurance company which also handles drivers’ licences and vehicle registration – offered to help the Vancouver police by running its facial-recognition software on photos from the riots, comparing them with its database, a collection of photos of more than three million individuals, normally used in investigations of fraud and identity theft. Not much came of it: there were no reports of any arrests made thanks to the database. Attempts to automate the process of facial recognition after the British riots failed too: most rioters, after all, didn’t already have their mugshots in police records. Since the UK doesn’t (yet?) have a Canada-style photo database and Canada doesn’t (yet?) have a UK-style CCTV surveillance infrastructure, such efforts in both countries were probably doomed. China and Iran – where excessive surveillance goes hand in hand with excessive documentation requirements and weak or non-existent privacy laws – are a different story. And the technology is improving.

In September 2010, satellite photos of Abbottabad showed a man who looked a lot like Osama bin Laden exercising in a yard; the satellite’s facial recognition system confirmed it was him. It’s said that after shooting him, the Navy Seals ran his picture through another facial recognition system, which reported that there was a 95 per cent chance they had got the right man. Given that half of bin Laden’s face was presumably missing, they must be rather proud of their technology. The Navy Seals may have been using gear similar to the Robocop-style glasses the Brazilian police have developed in preparation for the 2014 World Cup. Fitted with a small camera that sees as far as 12 miles, the glasses can capture 400 images a second and compare them with a central computer database of 13 million faces – or so the police claim. The surest sign that facial recognition technology has made it comes from China, where at last year’s Sex Culture Exhibition in Xi’an a firm called the Love Sex Company presented a £3000 sex doll that speaks in a variety of languages and, thanks to onboard software, can recognise its ‘owner’.

It isn’t easy to teach a computer to recognise a face. Definitions don’t help: if you describe a face as ‘a blob-like region with two eyes, two ears, a nose and a mouth’, you still need to define an eye, a nose, an ear and a mouth. Humans can easily locate a face in a picture even if parts of it aren’t clearly visible; for computers this is very hard. What computers can recognise is the similarity between specific regions in two or more pictures. Given enough computational resources, they can be trained to calculate what a particular segment might look like under certain abnormal conditions – e.g. when the lighting is low or when the person in the photo has aged. As the number of potential differences between any two pictures of the same face is infinite, it’s impossible to write an algorithm that can take account of all such variations. However, even imperfect facial recognition technology can be useful.
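For readers curious what ‘recognising the similarity between specific regions’ amounts to in practice, it can be sketched in a few lines of Python. The patches and the noise level below are invented for illustration; one standard measure, normalised cross-correlation, scores two image regions while ignoring overall changes in brightness and contrast:

```python
import numpy as np

# Hypothetical 8x8 grayscale patches; in a real system these would be
# cropped from two photographs. Values here are simulated for illustration.
rng = np.random.default_rng(0)
patch_a = rng.random((8, 8))
patch_b = patch_a + rng.normal(0, 0.05, (8, 8))  # same region, different lighting/noise
patch_c = rng.random((8, 8))                      # an unrelated region

def ncc(p, q):
    """Normalised cross-correlation: a similarity score between two
    patches that is unaffected by uniform brightness/contrast shifts."""
    p = (p - p.mean()) / p.std()
    q = (q - q.mean()) / q.std()
    return float((p * q).mean())

print(ncc(patch_a, patch_b))  # close to 1: likely the same region
print(ncc(patch_a, patch_c))  # near 0: probably different regions
```

This is the easy half of the problem: scoring a comparison. The hard half, which the essay goes on to describe, is deciding which regions to compare and how to correct for pose, age and lighting before the comparison is made.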

Suppose you have just photographed a man who claims to be John Smith. How can a computer establish whether he is the same John Smith who exists in your database? First, it needs to find the man’s face in the picture – by looking for blob-like regions with consistent brightness and colour. Then it has to find facial landmarks – nose, mouth, eyes etc (there are more than a hundred significant features). Then the face must be ‘normalised’ by making it look like other images in the database with regard to size, pose, colour intensity and illumination. Finally, the computer has to produce a numerical representation of the face and compare it with the equivalent representation of the picture associated with the John Smith in the database. There are two ways to generate such representations. One is geometric, relying on the shape and position of facial landmarks; the other is photometric, using statistics to distil an image into values.
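The geometric approach in particular can be made concrete in a short sketch. The landmark positions, the choice of four landmarks and the matching threshold below are all invented for illustration – real systems use over a hundred features – but the principle is the one described above: reduce each face to a vector of scaled distances between landmarks, then compare vectors:

```python
import numpy as np

def geometric_features(landmarks):
    """Geometric representation: pairwise distances between facial
    landmarks, divided by the inter-eye distance so the vector does not
    depend on how large the face appears in the photograph."""
    pts = np.asarray(list(landmarks.values()), dtype=float)
    dists = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    scale = np.linalg.norm(pts[0] - pts[1])  # inter-eye distance
    return dists[np.triu_indices(len(pts), k=1)] / scale

def match(candidate, enrolled, threshold=0.05):
    """Small distance between feature vectors => likely the same person.
    The threshold is an arbitrary illustrative value."""
    return float(np.linalg.norm(candidate - enrolled)) < threshold

# Landmarks for the newly photographed man (pixel coordinates, assumed).
probe = {"left_eye": (30, 40), "right_eye": (70, 40),
         "nose": (50, 60), "mouth": (50, 80)}
# The John Smith already in the database, photographed at twice the scale.
enrolled = {"left_eye": (60, 80), "right_eye": (140, 80),
            "nose": (100, 120), "mouth": (100, 160)}

print(match(geometric_features(probe), geometric_features(enrolled)))
```

Because the enrolled photograph is simply the probe at double the scale, the scaled feature vectors coincide and the match succeeds – which is exactly why the ‘normalisation’ step matters: without it, the same face at two sizes would look like two different people.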
