Google hires thousands of humans to rank search results manually

Rick

Staff

Anyone who thinks Google's results are solely the product of fancy-pants algorithms and clever engineering -- think again. The Register has taken a look at Google's 160-page guidebook, essentially a reference manual for human "raters" -- yep, that's right: humans -- revealing a definite human component behind the results dished out by Google search.

As it turns out, Google outsources the work to (i.e. entrusts it to) a couple of different crowdsourcing agencies -- Lionbridge and Leapforce -- which produce the warm bodies and squeeze those human beings for their valuable opinions. The Register also points out that, according to one Leapforce job ad, the company employs about 1,500 search assessors in what is decidedly a work-from-home gig.

Before these contractors can judge the results doled out by Google search queries, though, they must first pass an initial examination. Afterwards, search assessors continue to receive periodic evaluations from Google to ensure they're doing an upstanding job grading search results.

Google's manual, amongst other things, tells raters how to rank search results based on a variety of metrics: quality, relevance and spamminess. Search assessors judge the results for various queries and choose from a range of grades, including "Not Spam", "Maybe Spam", "Porn", "Off-Topic", "Unratable", "Vital" and others.

Just to name a few esoteric guidelines, raters are asked to consider user intent (e.g. Mountain Lion: Mac OS X or the actual predator?), ignore websites with invalid security certificates and avoid results older than four months.
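To make that a little more concrete, here's a hypothetical sketch -- not taken from Google's manual -- of what a single rater judgment might look like as data. The `Judgment` class, its field names and the 120-day freshness cutoff are assumptions made for illustration; only the grade labels and the two mechanical guidelines come from the article.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical label set drawn from the grades mentioned in the article.
LABELS = {"Vital", "Not Spam", "Maybe Spam", "Porn", "Off-Topic", "Unratable"}

@dataclass
class Judgment:
    """One rater's verdict on a single (query, result) pair. Illustrative only."""
    query: str
    url: str
    label: str
    has_valid_cert: bool      # guideline: ignore sites with invalid certificates
    published: date           # guideline: avoid results older than ~four months

    def is_usable(self, today: date) -> bool:
        """Apply the two mechanical guidelines before the grade even matters."""
        fresh_enough = today - self.published <= timedelta(days=120)
        return self.has_valid_cert and fresh_enough

# Example: grading an ambiguous query ("mountain lion") by likely user intent.
j = Judgment("mountain lion", "https://www.apple.com/osx/", "Vital",
             has_valid_cert=True, published=date(2012, 9, 25))
print(j.label, j.is_usable(date(2012, 11, 28)))  # Vital True
```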

Check out this article for a deeper look into how humans help Google provide the best results possible.


 
Google results over the last three years (2010-2012) have gone totally irrelevant.
 
Have a few buddies that work for Leapforce and Lionbridge. They make killer money around the holidays, as the bonuses on contracts are really high.
 
How about rating new versus outdated info? There is a lot of old/junk content on the web.
 
I actually was a quality evaluator/rater for Google through Workforce Logic, and did this a couple of years back when I was in college. So it's nothing new.
 
Nothing new. They need human judges to produce relevance assessments to evaluate their retrieval models. This is a customary practice dating back to the early 1960s. Whenever a change to an algorithm is proposed, you need to assess it in terms of retrieval effectiveness, and relevance judgements are what make those evaluations possible. A company as large as Google can afford to refresh its training collections by outsourcing relevance assessments on an occasional basis. Humans are not actually sitting behind the scenes rating your results.
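To illustrate how such judgments get used, here is a toy Python sketch -- not Google's actual pipeline -- computing precision@k from a set of human relevance judgments and using it to compare two hypothetical rankings of the same query. The function name, URLs and data are all made up for the example; precision@k is just one standard effectiveness metric among many.

```python
def precision_at_k(ranked_urls, relevant_urls, k=3):
    """Fraction of the top-k results that human judges marked relevant."""
    top_k = ranked_urls[:k]
    hits = sum(1 for url in top_k if url in relevant_urls)
    return hits / k

# Hypothetical judgments collected from raters for the query "mountain lion".
relevant = {"apple.com/osx", "en.wikipedia.org/wiki/Cougar"}

old_algorithm = ["apple.com/osx", "spam-site.example", "old-blog.example",
                 "en.wikipedia.org/wiki/Cougar", "another-spam.example"]
new_algorithm = ["apple.com/osx", "en.wikipedia.org/wiki/Cougar",
                 "news.example/puma", "old-blog.example", "spam-site.example"]

# The same judgments let you compare two rankers on the same query.
print(precision_at_k(old_algorithm, relevant))  # ~0.33
print(precision_at_k(new_algorithm, relevant))  # ~0.67
```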
 