Here’s A Book I Want to Read (And Wish I Could Write)

An Anthropology of Google’s Search Experiments (with all data exposed, of course).

Never will happen, but we get some tantalizing hints in this post on the Google blog:

At any given time, we run anywhere from 50 to 200 experiments on Google sites all over the world. I’ll start by describing experimental changes so small that you can barely tell the difference after staring at the page, and end with a couple of much more visually obvious experiments that we have run. There are a lot of people dedicated to detecting everything Google changes – and occasionally, things imagined that we did not do! – and they do latch on to a lot of our more prominent experiments. But the experiments with smaller changes are almost never noticed.

9 thoughts on “Here’s A Book I Want to Read (And Wish I Could Write)”

  1. Like having a big brother in Hollywood, who can teach you a lot of tricks for influencing viewers. This kind of research has probably been done in each medium for almost as long as the medium has existed.

    Does Google, Inc. want Google users to click on Google ads?

    (Note: it’s a “trick question”, because I presume Google shareholders are listening, too 😉)

  2. Here’s an example of an experiment that lets you comment on search results and move them around on the result page:

    What I would like to know (and perhaps some Google engineer could answer it here, since there is no comments section on the official Google blog for this sort of end-user interaction) is the following:

    Did Google ever do the “relevance feedback” experiment? Did Google ever offer a way, not of allowing users to reorder search results that they’ve already seen (why would I reorder something when I’ve already found it?), but of allowing users to give up/down feedback on the results of a search, feed that feedback to the algorithms on the backend, and come up with a reranking of the current results list (not a new list) that was biased toward the “ups” and biased away from the “downs”?

    Because I have been using Google for 10+ years, and have never seen this experiment run.

    I really don’t understand this obsession with tweaking whitespace, or fiddling with the size of one-boxes, when it could be that the results I am really after are ranked 120th through 150th. How is Google going to help me discover those results, short of me clicking through 12 pages of results?

    If we’ve done our job right, almost without your noticing, things will work just that little bit better for you. The world will seem rosier. Birds will sing.

    This is what I don’t understand. What good is being just that little bit better on 100 queries (saving me 0.37 seconds per query, because the first result is just that little bit bigger), when they don’t try to get any better on those 20 queries that cost me 4 minutes to get through to the 120th result, to find what I really need?

    Let’s see.. (100 queries * 0.37 seconds) = total savings of 37 seconds. And (20 queries * 4 minutes) = total loss of 1.33 hours. I’d say the net total doesn’t add up to be too rosy or singing-bird.

    There’s an old saying.. penny-wise, pound foolish.

    Google works so hard squeezing every little penny of optimization out of their engine.. but on the pound-sized larger problems they are completely unhelpful.

    I don’t get it. I really do not get it.

  3. and come up with a reranking of the current results list (not a new list) that was biased toward the “ups” and biased away from the “downs”

    Let me be a little more clear:

    ..come up with a reranking of the unseen documents (those within the tens of thousands of results returned by my original query, not a new list) that is biased toward the “ups” and biased away from the “downs”.
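    To make that concrete, here is a minimal sketch of the kind of up/down reranking I have in mind. Everything in it is assumed for illustration (the toy documents, the made-up original scores, and plain term overlap as the feedback signal); it is not a description of anything Google actually runs.

    ```python
    from collections import Counter

    def rerank_unseen(results, ups, downs, alpha=1.0, beta=0.5):
        """Re-rank the not-yet-seen results of the ORIGINAL query.

        results: list of (doc_id, original_score, terms) for documents
                 the user has not looked at yet.
        ups / downs: lists of term lists taken from results the user voted on.
        """
        up_profile = Counter(t for doc in ups for t in doc)
        down_profile = Counter(t for doc in downs for t in doc)

        def adjusted(doc):
            _, score, terms = doc
            boost = sum(up_profile[t] for t in set(terms))      # overlap with the "ups"
            penalty = sum(down_profile[t] for t in set(terms))  # overlap with the "downs"
            return score + alpha * boost - beta * penalty

        # Same candidate set, new order: nothing new is fetched, only re-sorted.
        return sorted(results, key=adjusted, reverse=True)

    # Toy usage: three unseen results, one "up" vote, one "down" vote.
    unseen = [
        ("doc120", 0.41, ["python", "snake", "care", "feeding"]),
        ("doc121", 0.40, ["python", "programming", "tutorial"]),
        ("doc122", 0.39, ["monty", "python", "sketch"]),
    ]
    liked = [["python", "programming", "language"]]   # voted up
    disliked = [["snake", "reptile", "care"]]         # voted down

    for doc_id, _score, _terms in rerank_unseen(unseen, liked, disliked):
        print(doc_id)   # doc121, doc122, doc120
    ```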

  4. JG,

    do you still think Google might be a useful tool for information retrieval?

    I think Google’s “algorithm” for getting the bias you mention goes sort of like this:

    1. get advertiser to “sign up” to pay for keyword(s)

    2. JG searches for keyword(s)

    3. JG clicks on ad

    4. Google receives money

    It seems pretty simple & straightforward — and I’ll bet there are many people clicking on ads and therefore making Google a lot of money.

    What seems to be the problem?

  5. do you still think Google might be a useful tool for information retrieval

    The problem, nmw, is that I have been, am, and will continue to remain, a hopeless idealist. Yes, I still cling to the unfashionable old belief that Google is a useful tool for information retrieval.

    In reality, however, I have also been saying what you are saying for years: It is more in Google’s interest to get you to “give up” your left-hand-side searching, and just look to the right hand side and click an ad.

    If Google can be “good enough” a lot of the time (for those 100 queries), what ends up happening (studies have shown) is that for those 20 queries where things don’t work, people blame themselves, not Google, when they can’t find something. That’s when they’ll give up and click an ad.

    So if people blame themselves in those 20 cases, why should Google disabuse those people of that notion.. especially when those same people end up clicking an ad?

    But I keep waiting for someone to definitively come out and tell me that my idealistic pessimism is either hopelessly naive or wrong, or else conclusively show me that it is correct — that Google isn’t really trying to make those 20 cases better, because it is not in their economic interest to do so, despite everything they say about SERPs being independent of the ads, and about organizing the world’s information and all that.

    All I know is that there isn’t much discussion around these sorts of topics, and there should be.

  6. I think the vast majority of people cannot really tell the difference between search results and ads. I’ve had university graduates tell me, while we were discussing the search engine, that they would never be so gullible as to click on an ad, and then they click on an ad! ;D

    Also: in earlier years eBay would sell you the moon on Google. I think the only way that people will change is if/when advertisers realize that they are actually paying through the nose for something that is more or less worthless.

    Teenagers may continue to use Google, but the question is: how many teenagers will seriously shop for clothing or diamonds or BMWs online? AFAIK, they’re more interested in exchanging funny pics & videos than in booking a vacation rental.

    Although I’ve heard that people have complained that Google “charges too much” (much like the “yellow pages” used to do), so far hardly anyone is acting on it.

    NBC and CBS did — and they have therefore invested heavily in the “Wisdom of the Language” ( http://gaggle.info/miscellaneous/articles/wisdom-of-the-language ) — but only in the commercial registry. Microsoft has taken it a step further — and much like the Google brand name, they have invested in the “Live” keyword in nearly all domains (and IAC has done much the same with “hotels” — though not as exhaustively).

    I predict that, following present trends, within 5 years there will not be 1 search engine but rather several thousand. Many other search engines already exist (homes, cars, health, money, shoes …) and they are already among the top results on all “one-size-fits-all” engines (and that is indeed what users have learned to expect, even if [as Google’s VP of Engineering said during Google Press Day several years ago] Google does not WANT that).

    But I figure the folks at Google do realize that this is inevitable, and so that is why they are attempting to salvage something before all of search becomes focused, targeted search. And that is why they are trying to position themselves as a browser application provider, in order to recommend (what did the press release say? 🙂) “pages which you haven’t visited but are popular”.

  7. I think the current search engines have a 3-pronged process from Intent to Content. Put simply, it’s Search -> Scan -> Click.

    Me thinks we can’t talk about the future unless we know what’s wrong with the present. On that note, here are my thoughts:

    Main Disadvantages:

    [A] Search and trial costs [they only make Google happy]

    [B] Anonymity: although the anonymous nature of the web protects privacy, enables free speech, generates content, etc., it also has a collateral damage: a lack of accountability.

    [C] Authenticity: a lack of authenticity implies high trial costs; worse, it may (and often does) lead to misinformation. No, I am not promoting a big brother or a *central clearing house*. Just that content and quantity are often promoted at the cost of quality.

    How can we make the web better?

    It’s a billion-dollar question. Me thinks we can start with:

    [1] A better search engine

    [2] A mechanism to aggregate related, relevant and refreshed content.

    [3] Search engines may have to give up some revenues (I know, it’s wishful thinking) by rethinking the PPC model. As of today, the engines are geared toward a Hop -> Skip -> Jump model. More clicks imply more costs to the consumer, yet it’s precisely what keeps their stock prices touching the roof.

    Here is the permalink: http://olgalednichenko.wordpress.com/2008/09/05/main-disadvantages-of-todays-web-as-a-source-of-information-and-how-to-improve-google-and-yahoo-et-all-by-olga-lednichenko/
