A year ago yesterday the European Court of Justice handed down the so-called ‘right to be forgotten’ ruling. It held that, under the EU’s 1995 Data Protection Directive, members of the public could request that search engines remove ‘information relating to a person from the list of results displayed following a search made on the basis of that person’s name’ if the information was ‘inadequate, irrelevant or no longer relevant’. Since then, Google has received about 250,000 requests to remove more than 900,000 links. It has accepted around 40 per cent of them.
One of the flaws in the right to be forgotten ruling is how much power it puts in Google’s hands: the company itself decides which requests should be complied with and which should not. An open letter from 80 academics calls for greater transparency in Google’s decision-making process:
Beyond anecdote, we know very little about what kind and quantity of information is being delisted from search results, what sources are being delisted and on what scale, what kinds of requests fail and in what proportion, and what are Google’s guidelines in striking the balance between individual privacy and freedom of expression interests.
Google has so far successfully represented any attempt to regulate it as a form of censorship, and it has almost completely won the argument about search engines: most people think of them as neutral aggregators of information, rather than carefully modulated systems that take account of a range of interests – searchers’, advertisers’ and, most of all, Google’s. It has also managed, with some justification, to represent the right to be forgotten as an enormous burden on the company.
But the idea that Google can’t or shouldn’t comply with requests to remove personal data is strange: it already removes huge volumes of links to illegal material, and is far more likely to comply with requests on copyright grounds than because of the right to be forgotten: its published figures (from the second half of 2011) put the rate at 97 per cent. Most copyright enforcement is in the service of powerful economic interests such as film and music production companies, not to mention Google itself; giving citizens more control over their personal data challenges those economic interests.
The anecdotal examples of its process that Google has released seem quite reasonable. A request to remove a patient’s medical details was successful. A request by a politician to remove links to a scandal he was involved in was not. But without more data, this tells us little about the general pattern of Google’s decision-making process. Releasing the data would at least give a sense of the way Google is balancing public and private interests in its implementation of the ruling.