
Google's Manual For Its Unseen Human Raters 67

Posted by timothy
from the do-this-stuff dept.
concealment writes "It's widely believed that Google search results are produced entirely by computer algorithms — in large part because Google would like this to be widely believed. But in fact a little-known group of home-worker humans plays a large part in the Google process. The way these raters go about their work has always been a mystery. Now, The Register has seen a copy of the guidelines Google issues to them."
This discussion has been archived. No new comments can be posted.
  • Could it be... (Score:5, Interesting)

    by Baba Ram Dass (1033456) on Tuesday November 27, 2012 @09:55AM (#42105411)

First off, I didn't read the article. Yeah, I said it. So if the article dispels this, just ignore me.

    What if Google actively uses the human ratings as a comparison/benchmark against which they measure those fancy algorithms? In other words, the raters are rating the algorithms more than they are the websites. It makes sense that they would improve the search-ranking algorithms — a highly technical and scientific method of ranking sites, which is of little use to a human in and of itself — by constantly measuring them against an unscientific, untechnical yardstick ("quality" as judged by humans)... which, after all, is who uses the engine in the first place.

    Amazon probably does the same to improve their suggestions model.
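The benchmarking idea the parent describes is usually measured with a ranking-agreement metric such as NDCG: human raters score each result, and the algorithm is graded on how close its ordering comes to the ideal human ordering. The sketch below is illustrative only — the function names and the sample scores are made up, not anything from Google:

```python
import math

def dcg(ratings):
    # Discounted cumulative gain: relevant results near the top count more,
    # discounted by log2 of their (1-based) position + 1.
    return sum(r / math.log2(i + 2) for i, r in enumerate(ratings))

def ndcg(ratings_in_algo_order):
    # Normalize against the ideal ordering: the same human ratings
    # sorted best-first. 1.0 means the algorithm agrees with the raters.
    best = dcg(sorted(ratings_in_algo_order, reverse=True))
    return dcg(ratings_in_algo_order) / best if best else 0.0

# Hypothetical example: raters scored the algorithm's top five results 0-3.
score = ndcg([3, 2, 3, 0, 1])
```

A score near 1.0 would mean the algorithm's ordering matches the raters'; a drop after an algorithm update would flag a regression, which is exactly the kind of A/B comparison the parent is speculating about.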

  • by poofmeisterp (650750) on Tuesday November 27, 2012 @10:13AM (#42105557) Journal

    Apparently, if this is the case (which it probably is, because Google's algorithms aren't AI), the tech sector needs a lot better rating.

    For instance, do a search for a particular model of laptop. The results you get are of course mostly online retail shops, but you also get a BUNCH of sites that have nothing to do with the product you searched. They put the names / models in META tags and in hidden or font-size-reduced areas of the page, but the actual page content itself is just a bunch of crap that has nothing to do with laptops or laptop parts.

    Point being, these aren't weeded out very well. Unfortunately, I don't have an example right now, but I know of one that has been in existence for years and still ranks in the top 5.

    Oh, and the above is dwarfed 10-to-1 by searches for software names / functionality!
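The keyword-stuffing trick the parent describes — product names in META tags and hidden text that never appear in the visible page — is detectable with a crude visible-vs-metadata comparison. This is a toy sketch, not how any real search engine does it; the HTML snippets and function name are invented for illustration:

```python
import re

def keyword_hits(html, query_terms):
    """Count query terms in the meta keywords vs. the visible text.

    Terms that appear only in metadata (high meta hits, zero visible hits)
    suggest the stuffing the parent comment describes.
    """
    # Drop script/style bodies, then strip tags to approximate visible text.
    no_scripts = re.sub(r"<(script|style)[^>]*>.*?</\1>", " ", html,
                        flags=re.S | re.I)
    visible = re.sub(r"<[^>]+>", " ", no_scripts).lower()

    # Pull the keywords meta tag, which visitors never see.
    meta = re.search(r'<meta\s+name="keywords"\s+content="([^"]*)"', html, re.I)
    meta_text = meta.group(1).lower() if meta else ""

    hits_meta = sum(1 for t in query_terms if t.lower() in meta_text)
    hits_visible = sum(1 for t in query_terms if t.lower() in visible)
    return hits_meta, hits_visible

# Invented example of the spam pattern: laptop model in metadata only.
spam = ('<meta name="keywords" content="thinkpad x220, laptop parts">'
        '<p>Buy cheap pills here</p>')
```

Running `keyword_hits(spam, ["thinkpad", "laptop"])` gives `(2, 0)` — both query terms stuffed into metadata, neither visible on the page — whereas a legitimate retail page would score visible hits too.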

  • Raters gonna rate (Score:5, Interesting)

    by Aeonym (1115135) on Tuesday November 27, 2012 @11:09AM (#42106103)

    I've actually been a Google rater. I spent about 2 years total doing it--long enough to become a 'moderator' who ensures quality feedback from other raters--in between, and supplemental to, "real" jobs. Raters give feedback on lots of Google services but it falls into two buckets: ranking the quality of legitimate results, and learning to spot the "spam".

    Legit results are easy. Spam is more interesting. For one thing, I didn't entirely agree with their definition of what spam was--that's part of the reason you still see spammy results in some searches. The other part of course is that the spammers are constantly changing tactics. But it was actually kind of fun learning to spot the various methods spammers can use, and know that I was helping to improve search results by getting them off the front page (and hopefully off the top 100 pages).

    But I always assumed that rater feedback was used to judge and adjust The Algorithm rather than individual page results. The Algorithm has always been king at Google.

  • Re:Raters gonna rate (Score:5, Interesting)

    by Kreplock (1088483) on Tuesday November 27, 2012 @11:51AM (#42106521)
    I was a rater for 1 year some time ago. My impression was that we were rating results from updates Google was considering for the production algorithm — testing at the QA level. I found it boring and soulless, but a wide knowledge of obscure, otherwise-useless facts really facilitated the work. Sometimes a little-known double meaning for a concept would cause disagreements among raters, and once a moderator hated my opinion so much he had my home phone called several times to demand I change my rating.
