Last December, someone – hacktivists from the Anonymous movement claimed credit for it – stole the internal emails of the middling intelligence firm Stratfor. The emails eventually made their way to Julian Assange. And now WikiLeaks, just when everyone thought it might be finished, is publishing them in chunks. WikiLeaks wouldn’t be WikiLeaks if everything had gone smoothly: as it was preparing to publish the latest batch of Stratfor emails last week, its website went down, with Assange blaming a series of crippling cyberattacks (a group called AntiLeaks – led by someone known as Diet Pepsi – has claimed responsibility). The emails in question detail Stratfor’s dealings with TrapWire Inc, a security company in Virginia. But it didn’t need WikiLeaks to reveal TrapWire’s activities: the company boasts about them on its website.
Its signature software system reportedly collects and analyses cues about suspicious events and behaviour in order to predict and prevent terrorist acts. The data appear to come from CCTV cameras and from calls by concerned citizens to anti-terrorist and police hotlines. TrapWire claims to provide the technological backbone for the iWATCH programmes in Los Angeles and Washington and See Something Say Something in Las Vegas and New York.
Since TrapWire can pool and analyse data from different locations, the idea is that someone seen behaving suspiciously on the subway in New York could be spotted if they showed up in Washington. The ‘idea’ is, of course, not the same as reality: most facial recognition systems are pretty hopeless. According to the New York Times, the Department of Homeland Security set up a trial of TrapWire on 15 surveillance cameras in Washington and Seattle but abandoned it last year ‘because it did not seem promising’. A representative of the New York Police Department said that the widely reported claim that New York has 500 cameras connected to TrapWire is false, and the NYPD does not use TrapWire.
In 2005, TrapWire’s founder said that ‘it can collect information about people and vehicles that is more accurate than facial recognition’. The claim has been widely quoted in recent days, but such self-aggrandising talk is ubiquitous in the security industry. In reality, few such systems deliver consistent and reliable results. As Shoshana Amielle Magnet shows in her excellent book When Biometrics Fail, the public’s unthinking acceptance of the rhetoric of perfection and seamlessness that accompanies the marketing of such systems is itself something that ought to be explained.
Leaving its effectiveness aside for a moment, the existence of TrapWire is not in itself surprising. Law enforcement agencies in America have long embraced the predictive potential of modern technology. There are two distinct approaches here: one relies on historical trends, the other on real-time surveillance.
PredPol, developed by the LAPD in collaboration with a group of academics, is an example of the former. It’s a piece of software that breaks the city down into 500-by-500-foot zones and, drawing on past crime figures, tells the police which zones need more patrolling when. There is some limited evidence that PredPol is effective – several police divisions using it report decreasing crime rates – but there are also many thorny ethical and legal issues. If white police officers stop a black man for no apparent reason, can they blame the algorithms? Given that in the future such software might be built by private companies rather than academics, how would the algorithms be analysed for any potential biases?
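The grid-based idea can be illustrated with a deliberately naive sketch: map each past incident to a 500-by-500-foot zone, count incidents per zone, and rank the zones. (PredPol’s actual model is proprietary and far more sophisticated – it weights recent and nearby incidents more heavily – so everything here, including the function names and sample data, is hypothetical.)

```python
from collections import Counter

ZONE_FEET = 500  # grid cell size, as in the LAPD scheme

def zone_of(x_feet, y_feet):
    """Map a location (in feet from an arbitrary city origin) to a grid zone."""
    return (x_feet // ZONE_FEET, y_feet // ZONE_FEET)

def rank_zones(past_incidents, top=3):
    """Rank zones by historical incident count, most active first.

    past_incidents: list of (x_feet, y_feet) locations of past crimes.
    A naive stand-in for a real predictive-policing model.
    """
    counts = Counter(zone_of(x, y) for x, y in past_incidents)
    return [zone for zone, _ in counts.most_common(top)]

# Hypothetical incident locations: four fall in zone (0, 0), one in (1, 1).
incidents = [(120, 80), (450, 90), (130, 60), (900, 900), (140, 70)]
print(rank_zones(incidents))  # → [(0, 0), (1, 1)]
```

Even this toy version makes the ethical problem concrete: the ranking simply replays past enforcement patterns, so any bias in the historical data is fed straight back into where officers are sent next.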
Oakland, like many other American cities, is covered with hundreds of hidden microphones and sensors, part of a system known as ShotSpotter, which alerts the police to the sounds of gunshots in real time. It’s been in use since the 1990s but it’s not hard to imagine how such a system could be improved. The decreasing costs of computing, considerable advances in sensor technology, and the ability to tap into vast online databases make it possible to move from the mere identification of crime – which is what ShotSpotter is meant to do – to predicting it. Instead of detecting gunshots, new and smarter systems can detect the sounds that usually precede gunshots.
The historical approach and the real-time one are bound to merge. And they are likely to develop in other ways, too: why bother installing expensive cameras and sound sensors, if almost everyone now carries a mobile phone? It shouldn’t be too hard to convince people to install an app that would passively monitor and report on the world around them. The Metropolitan Police has taken a first step in that direction. And mobile phones, needless to say, are great sources of predictive data in their own right. Researchers from the University of Birmingham have recently developed an algorithm that uses tracking data on our phones to predict where we’ll be in 24 hours (the algorithm is out on average by only 20 metres).
TrapWire is interesting primarily because it uses all three of these methods: it draws on historical data, it relies on various sensors to monitor the environment in real time and it engages citizens through a ‘see something, say something’ hotline. Its executives claim that no identifiable or private information changes hands, but that’s hard to verify. A photo of a suspect’s face may not be immediately identifiable, but it can easily become so when run through an image search engine. Once the photo is linked to the suspect’s social networking profile, a whole new data universe opens up: now the investigators can study the suspect’s online activities. And why wouldn’t they?
After the recent shootings in Aurora, the Wall Street Journal asked: ‘Can Data Mining Stop the Killing?’ The article expressed regret that the much-maligned Total Information Awareness initiative – one of the most emblematic acts of Bush-era surveillance overreach – was never allowed to get off the ground. The hope here is that, perhaps with TrapWire’s help, the police would be able to spot a man with orange hair behaving oddly in a public place; his face would match a photo uploaded to a bizarre profile on an adult site; a close analysis of his online activities would suggest that he is up to some mischief; and a bloody massacre would be avoided as a result.
Aside from questions about privacy and false positives, it’s not obvious just how far this urge for crime prevention would or should go. Would the police use predictive methods only to avoid serious crimes like mass shootings? Or would all sorts of minor crimes – from pot possession to vandalism – be prevented as well? And what would happen to the people prevented from committing these crimes? Would they be treated as criminals before the fact?