This Week In The Digital Panopticon: Google And The Right To Be Forgotten

The ongoing tug of war between data capture and individual privacy in the digital sphere involves myriad threads, usually moving in different directions. Getting an overview of and a handle on developments can therefore be almost impossible. It’s as if — ironically enough — this issue itself needs to be observed within a Panopticon.

This week has seen a particularly interesting development that embodies some of the nuances at play. Google has started removing certain types of information from search results in Europe — granting requests from private individuals for the removal of outdated or irrelevant information returned when a search is made for their name.

This follows a landmark ruling by the European Court of Justice last month that has been loosely termed a ‘right to be forgotten’, but is in fact rooted in European data protection legislation dating back to 1995. The ruling stipulates that Google must accept and process requests by private individuals to remove links to outdated information about them. So really it is a classic case of technology developments outstripping and circumventing legal structures that were fashioned in an earlier era.

The basic problem remains that technology develops faster than legislators legislate. And also evidently faster than consumers’, politicians’ and even the judiciary’s ability to grasp how the implementation of a new technology might (or might not) be infringing on existing laws.

Technology by its nature inevitably overspills the neat categories prior law was founded on. It creates new categories and processes whose fit within the legal status quo becomes questionable — allowing for wiggle room and disruption of an extant system of order. In many ways that’s how technologically driven progress happens.

But it can also lead to problems of overreaching behaviour and a lack of accountability that tips the scales and disturbs existing checks and balances.

The complaint that led to the ECJ requirement dates back to 2010. A Spanish citizen lodged a complaint with a local data protection agency against a local newspaper and against Google, requesting the removal of information about him that dated back more than a decade.

The data protection agency rejected the complaint against the newspaper, but upheld it against Google. Google attempted to have the decision quashed, and the Spanish high court then referred the matter to the ECJ — which handed down its landmark ruling in May. And the rest, as they say, is European digital history.

The ECJ specifically ruled that search engines like Google are data controllers, and are indeed required to adhere to European law if they have a branch or subsidiary in a European Union member state. It also crucially confirmed that individuals, under certain conditions, have a right to ask search engines to remove links to personal data about them.

The conditions apply when the information is “inaccurate, inadequate, irrelevant or excessive for the purposes of the data processing”. So it’s by no means a right for anyone to have anything overwritten. The data in question has to be prejudicial to an individual and ‘beyond its sell-by-date’. Or just plain wrong.

The court also ruled that the right to ask for something to be removed from Google’s search engine should be balanced against other fundamental rights, such as the freedom of expression and of the media. So again it’s not carte blanche for individuals to whitewash their personal accounts.

In essence, the ruling requires an assessment of each individual request that weighs up the sensitivity relating to the individual’s private life vs the interest the public might have in knowing whatever it is. It requires a nuanced, case-by-case judgement that an algorithm-loving business like Google is clearly going to object to. Building an algorithm that can fairly process that sort of nuanced consideration is obviously going to be a tough call.

Now the ECJ ruling is undoubtedly controversial. It has been vocally attacked as “censorship of knowledge” by the likes of Wikipedia’s Jimmy Wales, who has since become a member of a Google advisory committee on privacy created following the ECJ ruling to help the company weigh the issues at hand.

But even people you might expect to support a pro-privacy ruling have had misgivings. At times the hand-wringing has been almost audible.

In one attack on the ruling, the privacy commissioner for Ontario, Canada co-authored an article comparing Google to a librarian in charge of a catalogue of books — and describing the ECJ ruling as creating a “swiss-cheese of gaps and holes” in the library’s card index. So again, an accusation of censorship.

What is evident is that Google has been very successful at arguing it’s an impartial middleman in the data delivery pipeline. It’s not the publisher — it just points you to the published stuff, is what it says. But that’s something of a disingenuous argument when you consider how much power the Google pointer wields.

And that pointer is directed by an undisclosed algorithm that works in the background to rank the information you are most likely to encounter.

It’s also worth pointing out that Google has a massively dominant market share in Europe — circa 90 percent of the search engine market. If it’s a library, it’s a chain of commercial libraries with a branch in every European town. So combine hugely dominant market share with the reordering that Google performs on the information it indexes and its position, vis-a-vis controlling access to personal data, looks far more influential than mere middleman.

Google’s algorithms are designed to process information and foreground certain bits in response to a specific query. Its processes structure others’ data and the resulting Google-created order becomes a hierarchy — meaning that certain pieces of information become more visible and accessible than other bits.
The point is: ranking information is always going to be a subjective endeavour, whether it’s a human doing it — or an algorithm. This is not an equal index. It’s designed to be far more useful to the end user than that — or else they’d have to wade through every virtual index card starting with the same letter to find whatever it is they are looking for.

Google’s hierarchy of search results means the ‘shelves’ in its ‘library’ are not alphabetically ordered and equally spaced. Rather the books on these shelves are ranked and positioned for relative prominence based on factors such as popularity or timeliness or the number of other books in the library that reference a particular piece of information.
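To make that ranking logic concrete, here is a minimal, purely illustrative sketch — all names, signals and weights are hypothetical, not Google’s actual (undisclosed) algorithm. It shows how combining a few weighted signals, such as popularity, freshness and inbound links, inevitably encodes editorial judgements about which ‘books’ end up at eye level:

```python
from dataclasses import dataclass

@dataclass
class Page:
    title: str
    popularity: float   # hypothetical click-through signal, normalised 0-1
    freshness: float    # hypothetical recency signal, normalised 0-1
    inbound_links: int  # how many other pages reference this one

def rank(pages, w_pop=0.5, w_fresh=0.2, w_links=0.3):
    """Order pages by a weighted score.

    The weights themselves are a subjective choice: changing them
    reshuffles which information gets foregrounded.
    """
    max_links = max(p.inbound_links for p in pages) or 1
    def score(p):
        return (w_pop * p.popularity
                + w_fresh * p.freshness
                + w_links * p.inbound_links / max_links)
    return sorted(pages, key=score, reverse=True)

pages = [
    Page("Decade-old legal notice", popularity=0.2, freshness=0.1, inbound_links=40),
    Page("Recent news story", popularity=0.9, freshness=0.95, inbound_links=5),
]
for p in rank(pages):
    print(p.title)
```

With these particular weights the recent, popular story outranks the old notice; weight the link count more heavily and the old notice could float back to the top. The subjectivity lives in the weights, not just the data.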

The algorithms Google uses to order and create its hierarchy of information are of course proprietary and undisclosed. We don’t know exactly how Google determines what to foreground and what to let fade away. But however those algorithms work, Google’s ordering of search results effectively changes our relationship to the information we are looking for — by pushing some of it right at us, making it more likely that’s where our search will end.

That’s not a library; that’s something very different.

The reader walking into Google’s ‘library’ is directed to a shelf containing a single massive volume — located right at their eye level. If they don’t reach for the book being proffered, the volume on the shelf below is the next closest tome their eyes will fall on, and so on. It’s Google-curated hierarchy all the way down.

That ordering of information does not mean Google is a publisher. But nor is it just a bystander. It has a crucial role to play in shaping what we click on and what we therefore discover. That’s the nuance that the ECJ ruling nails.

Its judgement specifies that Google is “processing” data by organizing it and making it available to users “in the form of lists of results.” And that processing and ranking of data absolutely makes Google a data controller — meaning the company’s search results have to comply with existing data protection legislation:

The Court further holds that the operator of the search engine is the ‘controller’ in respect of that processing, within the meaning of the directive, given that it is the operator which determines the purposes and means of the processing. The Court observes in this regard that, inasmuch as the activity of a search engine is additional to that of publishers of websites and is liable to affect significantly the fundamental rights to privacy and to the protection of personal data, the operator of the search engine must ensure, within the framework of its responsibilities, powers and capabilities, that its activity complies with the directive’s requirements. This is the only way that the guarantees laid down by the directive will be able to have full effect and that effective and complete protection of data subjects (in particular of their privacy) may actually be achieved.

Google clearly has a massive impact on the things that are drawn to people’s attention. To the point where the ECJ is basically saying that if Google’s processes are allowed to be exempt from the data protection directive there is no way for individual privacy to be effectively protected.

What’s very clear is that the concept of data protection needs to evolve to avoid trailing way, way behind the blistering pace of information technology evolution.

That means that a court ruling which is sensitive to the nuance of Google’s position as both a data indexer and an information foregrounder makes sense, however much complexity it introduces by putting a requirement on a data indexer to balance considerations of individual privacy — in specific requested instances — with a possible public interest to know.

(To be clear, the ECJ ruling refers to private individuals. Public figures aren’t going to be able to use the ruling to personally edit search results in a bid to present a better public face.)

In any case, to argue this is “knowledge censorship” also falsely equates Google with the sum total of knowledge on the Internet. Yes, it’s a hugely dominant gateway to digital information in Europe, but it’s not the same as the sum total of all the information out there.

There are alternative avenues to unearth information, so removing a link from Google’s index does not mean the information itself is gone forever (indeed, the Spanish data protection agency specifically said the newspaper should not be required to remove the requested data — only that Google should stop flagging it up).

The idea of personal data having a sell-by date online — the place offering the path of least resistance to discovering it — frankly feels like a far more human implementation of technology than having your every action recorded and recoverable forevermore.

Just because technology enables infinite capture and storage of granular data does not mean that that perpetual total recall is helpful to individuals or desirable for societies. Or that a corporate entity putting a public emphasis on whatever bit of your past personal history their algorithm determines is the most clickable is some kind of inalienable right.

There are alternative ways of operating in the digital sphere. Let’s not forget these tools we have built, and are continuing to build, are highly capable and highly flexible. The technology can support nuance. And, unsurprisingly, people have an appetite for it. Just witness the interest in consumer products that allow information to be ephemeral, or quasi-anonymous, or constrained and contextual. Yes, there are startup opportunities aplenty to be had here.

In earlier human societies knowledge was often mutable. History was oral. Information was handed down verbally, frequently in song, with ballad singers amending and adapting stories for the present age or even the present moment. The ‘social historical record’ evolved with its people.

So while the ECJ ruling may put a burden on information indexers to respond to individual requests to update the relative position of their pointers, that’s perhaps as it should be. Because with great power comes great responsibility.

The key paragraph of the ECJ’s ruling follows below:

Article 12(b) and subparagraph (a) of the first paragraph of Article 14 of Directive 95/46 are to be interpreted as meaning that when appraising the conditions for the application of those provisions, it should inter alia be examined whether the data subject has a right that the information in question relating to him personally should, at this point in time, no longer be linked to his name by a list of results displayed following a search made on the basis of his name, without it being necessary in order to find such a right that the inclusion of the information in question in that list causes prejudice to the data subject. As the data subject may, in the light of his fundamental rights under Articles 7 and 8 of the Charter, request that the information in question no longer be made available to the general public on account of its inclusion in such a list of results, those rights override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in having access to that information upon a search relating to the data subject’s name. However, that would not be the case if it appeared, for particular reasons, such as the role played by the data subject in public life, that the interference with his fundamental rights is justified by the preponderant interest of the general public in having, on account of its inclusion in the list of results, access to the information in question.

[Image by Paolo Trabattoni via Flickr]