Google has made some adjustments to how Search works, aimed at improving the general quality of returned results and, more specifically, at fighting the problem of “fake news,” which Google takes to mean “content on the web [that] has contributed to the spread of blatantly misleading, low quality, offensive or downright false information,” according to a blog post by Google VP of Engineering for Search Ben Gomes.
Making sure such content doesn’t bubble up in response to search queries is a long-term effort, Google notes, but it’s making some basic changes to how Search works today that should help expedite that result. These involve not only adjusting the technical aspects of how Search ranking works, but also giving users tools that make it easy to provide feedback, and making it easier for anyone to see how Search works, so people can better understand why things go wrong when they do, and what happens when they ask Google to fix it.
Google acknowledges that some queries, while innocuous on the surface, are returning “offensive or clearly misleading content” that is not actually what the searcher intended to find. It says this occurs in only around 0.25 percent of its total daily search traffic – but with an active userbase as large as Google Search’s, that’s still a significant number of queries.
Steps Google has taken to fix these negative results include updating the guidelines for Search Quality Raters – the real people who help Google identify potentially offensive material – to spell out with more specificity what counts as a bad result. Google says that while Search Rater feedback doesn’t directly impact Search ranking for individual pages, it does help the company identify spots where the algorithm is falling down and adjust accordingly. The new guidelines specifically call out the kind of misleading info, “unexpected offensive results, hoaxes and unsupported conspiracy theories” that have drawn criticism and anger when returned in response to relatively innocuous queries from Google Search users.
Google is also changing the signals it uses to directly influence rankings, with an eye toward pushing low-quality content down. It spells out exactly what it’s hoping to avoid with these changes: anything like last year’s incident, in which Google’s top Search result for “Did the Holocaust happen?” was a false article from the neo-Nazi website Stormfront.
As for feedback, Google is now making it easier for individual users to flag offensive or inaccurate content appearing in its Autocomplete and Featured Snippets features – the former predicts your query as you type, while the latter provides an excerpt from a highly ranked result. Both have regularly been criticized for surfacing obviously harmful assumptions about queries, and letting users tell Google directly when that happens should help the company more easily identify and fix problems.
Google is also trying to make it easier to find out why bad stuff shows up in Autocomplete when it does. The company says it’s been “asked tough questions about why shocking or offensive predictions were appearing” in the feature – an understatement, given the scrutiny it has received from the public and the media any time Autocomplete returns unsavory responses. Now, Google is making its Autocomplete policy (which has also been updated in light of these changes) public and accessible to everyone, so you can see what steps it takes when Autocomplete goes wrong.