
Google Quality Raters Handbook Emerges Again

Time and time again, rumors of a Google “rater’s manual” take the web by storm. This manual is, in fact, a book of guidelines for a team of people Google assigns to rate the quality and relevancy of webpages indexed in its search engine results. Now, The Register claims it has seen a copy of the book.

In October 2011, Miranda Miller of Search Engine Watch wrote an extensive piece about a 120+ page training manual for new URL raters, initially discovered by PotPieGirl. That book was called the 2011 Google Quality Raters Handbook, and it magically vanished from the Internet a few days after PotPieGirl (Jennifer Ledbetter) made the link public.

Today, The Register revealed the existence of a second manual, perhaps an updated version of the first. At 160+ pages, it too is said to give raters detailed advice on how to label search results.


Google’s April Fools’ Day 2002 joke, PigeonRank, might be truer than the search engine likes to admit.

But The Register reveals even more, telling us who the raters are:

“Google outsources the ratings to contractors Leapforce and Lionbridge, who employ home workers,” the article reveals. “According to one Leapforce job ad there are 1,500 raters. The work is flexible but demanding – raters must pass an examination and are consistently evaluated by Google. For example, a rater is given a “TTR” score – “Time to Rate” measures how quickly they make their decisions.”

Even this is not new information. Lionbridge has been mentioned by several publications, including Search Engine Land, earlier this year.

So if the existence of the handbook is nothing new, and it is already known who these raters are, why is The Register reigniting the conversation? It probably has a lot to do with the fact that the US Federal Trade Commission (FTC) could be dropping its antitrust case against Google. Andrew Orlowski, who wrote the exclusive piece for The Register, does not mention the FTC deal, but he ends his piece with a valid observation:

It’s amazing how the image Google likes to promote – and politicians believe – one of high tech boffinry and magical algorithms, contrasts with the reality. Outsourced home workers are keeping the machine running. Glamorous, it isn’t.

How do you feel about having human raters in the equation?

Mihaela Lica Butler is senior partner at Pamil Visions PR and editor at Everything PR. She is a widely cited authority on search engine optimization and public relations issues (BBC News, Reuters, Al Jazeera and others), with over 10 years of experience in online PR.




17 thoughts on “Google Quality Raters Handbook Emerges Again”

  1. Disclosure: I was a Google quality rater some 5+ years ago, for about a year.

    These conspiracy theories amuse me. As quality raters, we were never told “why” we were rating results, but that didn’t stop many from guessing.

    It’s my guess, though, that quality raters aren’t used to set rankings directly; rather, they perform a variety of tasks. One task might be to build a “test set” of rated results to help train a neural network.

    A more likely interpretation is that raters are simply measuring different algorithm changes, and the aggregate of their ratings is used by Google, as one of several signals, to decide whether to implement a change (a minimal sketch of that idea appears after these comments).

    As a rater, it was never even insinuated to me that I had the power to demote/ban/penalize/promote/improve any site’s ranking.

    Using humans to actually determine rankings wouldn’t be scalable or robust, and that doesn’t fit with any of Google’s philosophies.

  2. Sorry for the double post. I used the term “conspiracy” because of the way the author from the Reg ends his article; his closing statement feels like he’s taking a shot at Google. I actually think using remote workers to do quality assurance is glamorous. Google wouldn’t be able to update and improve its search algorithm so regularly, with end users in mind, if users weren’t adding valuable input into how they perceive and use websites.

    1. Now I see. I found that interesting too… I don’t think he was suggesting a conspiracy, though; he was just trying to insinuate that Google is not very transparent about how it ranks sites. He seems to believe that raters have tremendous power, that they influence the algorithm. That is, of course, a matter of debate. What triggered my interest, however, is the timing of the Reg’s article. Why now? Why again? Since 2011, this issue has kept coming back when you least expect it.

  3. In some respects human input can be seen as reassuring. But I would be more concerned about the ‘Time To Rate’ metric being used to assess performance – some websites require time and attention to assess. TTR would appear to prioritise speed over accuracy.

  4. As Michael said, it makes sense for sites that are viewed by people to be reviewed by people. It is only Google after all, we aren’t talking about Skynet here ;-)

  5. “How do you feel about having human raters in the equation?”
    As long as I get good results for my queries, I don’t care what’s behind the scenes. Also, having a human element in the equation makes the whole system more bulletproof, as algorithms are easier to fool.

  6. Google’s algorithm has been fundamentally based on human input from the start. The backlinks that Google uses to evaluate your site’s value are placed by humans, and Google interprets them as “votes.” Only humans can determine what is valuable to humans. There will always be human input in search.
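To make the idea in the first comment concrete (rater judgments aggregated as one signal among several when evaluating an algorithm change), here is a minimal, hypothetical sketch in Python. The scoring scale, function names, and launch threshold are illustrative assumptions for the sake of the example, not Google’s actual process:

# Hypothetical sketch: aggregating human rater judgments as ONE signal
# when deciding whether an experimental ranking change is an improvement.
# The names, the 1-5 scale, and the launch threshold are assumptions;
# this is not Google's actual process.

from statistics import mean

# Rater scores for the same queries, once for the current algorithm's
# results and once for the experimental algorithm's results.
ratings = [
    # (query, current_score, experimental_score)
    ("cheap flights", 3, 4),
    ("python tutorial", 4, 4),
    ("local pizza", 2, 4),
    ("seo news", 4, 3),
]

def rating_delta(rows):
    """Mean improvement of the experimental side over the current side."""
    return mean(exp - cur for _, cur, exp in rows)

LAUNCH_THRESHOLD = 0.25  # assumed minimum mean gain before launching

delta = rating_delta(ratings)
print(f"Mean rating delta: {delta:+.2f}")
if delta >= LAUNCH_THRESHOLD:
    print("Signal favors the experimental change (one input among many).")
else:
    print("Signal does not favor launching the change.")

In this picture, no individual rater touches any site’s ranking; the aggregate simply tells engineers whether an experiment looks like an improvement, which matches what the first commenter describes.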