As it stands, successful requests from private individuals under the ruling — to have information de-indexed from Google searches associated with their name — are implemented by Google only on its European domains, such as Google.co.uk or Google.de, not on Google.com.
And that’s not about to change, according to comments made by Schmidt today — presumably unless Google is compelled to expand de-indexing to .com by the European Court of Justice (ECJ) in the future.
It’s one of several problematic loopholes in Google’s implementation of the ECJ ruling, which was handed down in May. Problematic because it undermines the intended impact of the ruling by allowing a simple workaround (searching on Google.com) to circumvent the de-listing of a search result on a private individual’s name.
The ECJ ruling judged Google and other search engines to be data controllers, and therefore required them to accept and process individual de-listing requests where the information in question is deemed outdated, irrelevant or otherwise erroneous, weighing each request against any public interest in the information remaining associated with a search on the individual’s name.
Schmidt was fielding questions about Google’s implementation of the ruling during a public meeting of Google’s so-called ‘right to be forgotten’ advisory council taking place in London today.
The council is made up of Google employees and outside experts, drawn from fields including data protection law, philosophy and media ethics, who were asked by Google to join its panel. The panel in turn asked the public for ‘evidence submissions’ to feed its analysis of the process, and has been holding public meetings since early September, with various individuals invited to give evidence in person.
Asked by the audience at the four-hour London session today whether Google would be expanding the de-listing process to the .com domain, Schmidt suggested Google intends to continue to avoid doing this.
“As we read the law it applies to the EU which is the jurisdiction of the court. And of course we have now done the 150,000 reviews and so forth and so on. The Google.com domain is actually U.S. targeted, and what happens is when you come to Europe your default access is to the .uk or .de or .fr or what have you. And since the court focused on European users we’re going to focus on those domains,” said Schmidt.
“A very small percentage — less than 5 per cent — of European traffic goes to .com so 95 per cent or more are to these sites, I don’t know the exact number, and that’s where the action is,” he added.
Another questioner later in the session asked Schmidt whether the public should move to searching Google.com rather than country domains like Google.co.uk in order “to remove edited or removed information”.
“I am not recommending that, however we have reported that some people do it,” Schmidt answered.
Today’s was the penultimate public meeting of Google’s right to be forgotten advisory council (the sixth of seven), at which the panel has taken statements from people invited to testify, and asked and accepted questions.
During the Q&A session, Schmidt was also asked about the financial costs to Google’s business of implementing the ruling.
“There’s a perception that this is an economic issue for Google and I can assure you that it’s not. We make very little money from name searches and advertising on names. It’s minuscule. So this is about service quality and answering questions and so forth. It has nothing to do with actual revenue or ads or anything like that,” he said. “So the primary cost to us is the cost of what appears to me to be a permanent staff of people who will be doing the removals.”
Schmidt went on to suggest there may never be a way for Google to automate the process of judging individual requests. “It’s not at all obvious to me that this can be ever automated,” he said. “And therefore as the burden of reviewing them grows the costs will grow. We won’t go into the specific numbers but that’s the specific cost. There’s obviously some cost of this event, and the legal reviews and so forth.”
He was also subsequently asked specifically how Google “automates the ethics” of requests. This sparked broader comments from Schmidt about how the decision-making process around request claims may evolve, if individuals start filing legal action against denied requests.
“The answer is we don’t know how to [automate the ethics of requests],” he said. “We would if we could, because we like to automate things — it makes everything more efficient. But we have real people reviewing every request and every URL.
“And what happens is, again, without giving the specific numbers, there’s things which are no brainers and there are things which are in the grey area and then we have policy experts in each of those areas who attempt to make these decisions. I don’t know this but I would suspect where there will be hard cases where the person is unhappy that they couldn’t get it off of Google, they’re unhappy to not get it off the original publisher and then they will file legal action.
“My guess, and the lawyers here will probably concur, that over time a body of law emerges… from the court filings that help define these things. I think that’s the way I suspect how it works for this.”
Another questioner asked Schmidt about the journalistic exception, under which the publisher of the original newspaper article (the article that triggered the legal action resulting in the ECJ ruling that search engines are data controllers) avoided being required to de-list the article itself. Schmidt confirmed Google did not ask for such an exemption.
“We view ourselves as an indexer and a pointer to those sources, we don’t create content, so we’re fairly clear where we are in that process,” he said.
Schmidt and panel member professor Luciano Floridi were also asked how a ‘Streisand effect’ can be avoided in the implementation of the ruling, i.e. where the de-listing process itself draws attention to the content the individual was intending to obscure. This is another problematic loophole, especially given media attention on the ruling, which has led outlets to publish stories about de-listed content, thereby putting the information back into the public domain.
“It sounds inevitable,” said Floridi, pointing out that the original obscurity-requester in Spain has now become a public figure owing to his role in bringing about the ECJ ruling. “It seems to be a counter effect. Unfortunately this is not just a property of this particular ruling. Sometimes doing the right thing has counter effects that undermine doing the right thing.”
Floridi also noted that Google’s decision to post a generic notice under searches for private individuals’ names, warning that some links may have been removed, itself “triggers some curiosity” — though he stopped short of suggesting that the solution might be for Google not to publish such a notice at all.
“I don’t think that we have a way out,” Floridi added. “At the moment we are stuck with this particular problem. Hopefully some members of the panel will come up with a brilliant solution. I don’t quite see how this paradox can be solved.”
Schmidt, notably, did not answer this question — beyond saying the panel would “have to discuss this in your deliberations”, and spending an unnecessarily lengthy amount of time defining the ‘Streisand effect’. He did not take the opportunity here to comment on Google’s decision to inform publishers when it is de-listing links, or discuss its decision to post a generic warning notice on private name searches.
The advisory council was also asked for its views on why Facebook is the top targeted domain for search de-listing requests. Specifically, the question probed whether links to Facebook content are being de-listed from Google results because Facebook users are “insufficiently protecting their privacy through Facebook’s tools”.
“Whatever we say will be controversial. I think to some extent yes,” said panel member Lidia Kolucka-Zuk. “This is a very complex issue. People do not read agreements they sign with banks and people do not read the policy on privacy provided by Facebook. And this is a huge problem that people do not protect themselves on the Internet… Especially if they use a social network like Facebook is, they should somehow be aware of the consequences as well.”
A final meeting of the advisory council is scheduled to take place in Brussels on November 4 — before it meets privately to compile recommendations for Google on its implementation of the ECJ ruling.
“I for one cannot wait to hear the answers to these questions,” said Schmidt in closing remarks at the end of the day’s session — a theatrical flourish creeping into his tone that drew laughter from the audience.
“Literally call me, or email me, or text me the moment you figure these things out because the sooner you figure them out, the quicker we can implement them,” he added.