Google Quality Raters Handbook Emerges Again


Time and time again, rumors of a Google “rater’s manual” take the web by storm. This manual is, in fact, a book of guidelines for a team of people Google assigns to rate the quality and relevancy of webpages indexed in its search engine results. Now, The Register claims to have seen a copy of the book.

In October 2011, Miranda Miller, at Search Engine Watch, wrote an extensive piece about a 120+ page training manual for new URL raters, initially discovered by PotPieGirl. That book was called the 2011 Google Quality Raters Handbook, and it magically vanished from the Internet a few days after PotPieGirl (Jennifer Ledbetter) made the link public.

Today, The Register revealed the existence of a second manual, perhaps an updated version of the first. At 160+ pages, this one too is said to give raters detailed advice on how to label search results.

A Google April Fool’s Day 2002 joke, billed as “the technology behind Google’s great results,” might be truer than the search engine likes to admit.

But The Register reveals even more: it tells us who the raters are.

“Google outsources the ratings to contractors Leapforce and Lionbridge, who employ home workers,” the article reveals. “According to one Leapforce job ad there are 1,500 raters. The work is flexible but demanding – raters must pass an examination and are consistently evaluated by Google. For example, a rater is given a “TTR” score – “Time to Rate” measures how quickly they make their decisions.”

Even this is not new information. Lionbridge has been mentioned by several publications, including Search Engine Land, earlier this year.

So if the existence of the handbook is nothing new, and it is already known who these raters are, why is The Register reigniting the conversation? It probably has a lot to do with the fact that the US Federal Trade Commission (FTC) could be dropping its antitrust case against Google. Andrew Orlowski, who wrote the exclusive piece for The Register, does not mention the FTC deal, but ends his piece with a valid observation:

It’s amazing how the image Google likes to promote – and politicians believe – one of high tech boffinry and magical algorithms, contrasts with the reality. Outsourced home workers are keeping the machine running. Glamorous, it isn’t.

How do you feel about having human raters in the equation?

Mihaela Lica Butler
Mihaela Lica Butler is senior partner at Pamil Visions PR and editor at Everything PR. She is a widely cited authority on search engine optimization.
  • Ryan Jones

    Disclosure: I was a Google quality rater some 5+ years ago, for about a year.

    These conspiracy theories amuse me. As a quality rater, they never told us “why” we were rating results, but that didn’t stop many from guessing.

    My guess, though, is that quality raters aren’t used for ranking purposes. In my opinion, the raters perform a variety of tasks. One task might be to determine a “test set” of results to help train a neural network.

    A more likely interpretation would be that raters are simply measuring different algorithm changes, and the sum of their ratings is used (as one of several signals) by Google to decide whether to implement the change.

    As a rater, it was never even insinuated to me that I had the power to demote/ban/penalize/promote/improve any site’s ranking.

    Using humans to actually determine rankings wouldn’t be scalable or robust, and that doesn’t fit with any of Google’s philosophies.

    • Mihaela Lica Butler

      That’s fine, Ryan, I was not even insinuating that raters have the power to demote/ban/penalize/promote/improve any site’s ranking. Just reporting the news. No conspiracy theory there. 🙂 Besides, Google openly admits to using humans to rate the quality of its results, and may (or may not) make the guidelines public.

      • Ryan Jones

        Yeah, I wasn’t so much addressing your article as the general fears that always follow mentions of this program.

      • Mihaela Lica Butler

        It’s good that you did, Ryan. Thank you very much.

  • Michael Sheridan

    Websites are for human consumption, so it makes sense that humans would be the ones checking sites for quality. It’s like a focus group, not a conspiracy.

    • Mihaela Lica Butler

      Why are we using that “conspiracy” term again? Who said anything about a conspiracy?

  • Miranda Miller

    Good to see the topic come up again; I still have my copy lol. It really doesn’t say anything SEOs shouldn’t already know as best practices, and when you consider these people have 30 seconds to a minute to review a page, their impact is minimal.

    Just as an aside, we wrote about Lionbridge and Leapforce in Nov 2011, a month after my original post on it.

    • Mihaela Lica Butler

      Thanks for pointing that out (the Lionbridge link), Miranda. I am sure I saw it at the time, but forgot about it now, in the rush of the moment. 🙂

      • Miranda Miller

        No worries!

  • Michael Sheridan

    Sorry for the double post. I used the term “conspiracy” because of the way the author from the Reg ends his article. His ending statement feels like he’s taking a shot at Google. I think using remote workers to do quality assurance is glamorous. Google wouldn’t be able to update and improve its search algorithm so regularly with end users in mind if users weren’t adding valuable input into how they perceive and use websites.

    • Mihaela Lica Butler

      Now I see. I found that interesting too… I don’t think he was considering a conspiracy though… just trying to insinuate that Google is not very transparent about how it ranks sites. He seems to believe that raters have tremendous power… that they influence the algorithm. That is, of course, a matter of debate. What triggered my interest, however, is the timing of the Reg’s article. Why now? Why again? Since 2011, this issue has come back when you least expect it.

      • Miranda Miller

        I think it’s pretty simple… they found it and didn’t do much research to see if anyone else had ever written about it :s

        Can’t imagine why else they would have marked it “Exclusive” when my colleague Danny Goodwin wrote about this updated version two months ago.

      • Mihaela Lica Butler

        That puzzled me too. That “exclusive” thing is still there. And what’s even stranger is that the big guys (Forbes, The Next Web) are paying attention.

  • Ian Williams

    In some respects human input can be seen as reassuring. But I would be more concerned about the ‘Time To Rate’ metric being used to assess performance – some websites require time and attention to assess. TTR would appear to prioritise speed over accuracy.

  • Peter Winter

    As Michael said, it makes sense for sites that are viewed by people to be reviewed by people. It is only Google after all, we aren’t talking about Skynet here 😉

  • Aron Baczoni

    “How do you feel about having human raters in the equation?”
    As long as I get good results for my queries, I don’t care what’s behind the scenes. Also, having a human element in the equation makes the whole system more bulletproof, as algorithms are easier to fool.

  • Anthony Martello

    Google’s algorithm has been fundamentally based on human input from the start. The backlinks Google uses to evaluate a site’s value are placed by humans, and Google interprets them as “votes” from humans. Only humans can determine what is valuable to humans. There will always be human input in search.