Marissa Mayer on Googley Lessons

Marissa Mayer, at Web 2.0 today, shared insights into some lessons Google has learned in trying to serve users. The take-away is that speed is just about the most important concern of users—more than the ability to get a longer list of results, and more valuable than highly interactive Ajax features. And they didn’t learn that by asking users; quite the opposite. The ideal number of results on the first page was an area where self-reported user interests were at odds with their ultimate desires. Though users said they wanted more results, they weren’t willing to pay the price for the trade: the extra time spent receiving and reviewing the data. In experiments, each run for about 8 weeks, results pages with 30 (rather than 10) results lowered search traffic (and proportionally ad revenues) by 20 percent.

In other notes from today’s Web 2.0… EMI chairman David Munns and the pirate-in-a-suit, Eric Keltone, just took the stage together and responded to Battelle’s request that they envision the idealized conditions that would allow music mash-ups to be created and shared online while allowing the corporate bastions of the music industry to continue to prosper. Eh… it’ll happen, they both say, but after revisiting the Beatles mashup debacle they weren’t hugging as they walked off stage… to the beat of ‘Strawberry Fields’.

13 thoughts on “Marissa Mayer on Googley Lessons”

  1. As usual, Google experimented wisely. It also pays to let users adjust certain settings as they wish (as Google does). I would really be upset if I could only get search results in pages of just 10. So experiment, as Google does, to figure out the best default settings, but allow users to change some of those defaults if they want.

  2. >>In experiments, each run for about 8 weeks, results pages with 30 (rather than 10) results lowered search traffic (and proportionally ad revenues) by 20 percent.

    Wonder how search traffic is reported… is it the count of searches performed, or the number of search pageviews seen?

    Although I guess either way, isn’t lowered search traffic in the users’ best interest? I’m performing fewer searches because I’m finding what I want on what would normally be the 2nd or 3rd page. Should be interesting to see how Goog handles issues like this in the future.

  3. I still remember the first day I used Google, sitting at a foundation center in NYC. Speed and simplicity were the two impressive factors that initially got me to switch.

  4. I remember back in the day (1997/8) that Gartner published research saying that the likelihood a user would continue using a site was inversely proportional to page load time. Not terribly surprised by Marissa’s comments.

  5. Marissa mentioned this one back in 2001 at a presentation at IBM’s Almaden lab. It has bugged me ever since, because there seems to be some serious comparing of apples with oranges going on.

    One would expect users to simply stop reading results when they have had enough, or when they are satisfied before they reach the last of the 30 results, rather than to submit fewer queries.

    If there are more results on a page, there will be fewer cases where a user needs to go to the next page in order to review more results for the same query. This means fewer requests while the same amount of information is processed by users, and it is actually faster, since they don’t have to click for the next page and wait for a reload. It does NOT mean users weren’t willing to wait longer for results and review more of them.

    There is an easy way to see whether users are actually less willing to search: compare how many new searches were performed, omitting ‘next page’ requests. When people perform a ‘next page’ request, it means that, to them, there is not enough information on the search results page for that particular query. By displaying 30 results per page, this situation will occur less frequently, so the number of ‘next page’ requests will drop, most likely by more than 80%, while the number of ‘new’ searches in both situations might still be the same. Such an easy test (a sketch of this tally appears at the end of this comment).

    Now, it’s unclear exactly how Google’s data analysis was done, so my comments will be off if they did the right thing… I suspect they made a mistake, though, because of the wording of the results: it’s all about pageviews and not about searches.

    If they had asked the users, instead of trying to learn by interpretation alone, they could have found out how happy users actually were with a certain number of results. It’s good to try to deduce the reason behind the behavior you observe, but if you don’t check with users, you never know whether you came up with the correct explanation for it. It’s just so easy to mislead yourself. So always do both and compare the outcomes. If there is a difference between the two, something must be off (either with the interpretation of the data or with the interviewing, etc.).

    Now, I suspect the real reason to make users unnecessarily go through more, shorter results lists is advertising revenue. When users are done with the results on a page, they frequently check the sponsored results. When there are three times as many results on a page, it becomes more likely that they will find an organic result that leads them away from the search results page, and therefore less likely that they will start scanning the sponsored results and click on one.

    It seems to be more about the money than about satisfying the user. Which is perfectly fine for a company. It just bugs me when it’s sold as user friendliness (or worse, when they really believed their own theories without checking with real-world users and simply made a mistake in their assumptions). Either way, they lose credibility.

    I would really like for Marissa to show me I’m wrong and elaborate some more on why their data interpretation is correct.

    Regards,

    Merijn Terheggen
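
    A minimal sketch, in TypeScript, of the tally described above, assuming access to a query log; the variant labels and fields (variant, isNextPage) are hypothetical stand-ins for illustration, not Google’s actual logging:

        // Hypothetical query-log record; field names are illustrative only.
        interface LogEntry {
          variant: "10-results" | "30-results"; // which experiment arm served the page
          isNextPage: boolean;                  // true if the request was a 'next page' click
        }

        // Count new searches and 'next page' requests separately per arm,
        // so the two kinds of traffic are not lumped into one pageview total.
        function tally(log: LogEntry[]) {
          const counts = {
            "10-results": { newSearches: 0, nextPages: 0 },
            "30-results": { newSearches: 0, nextPages: 0 },
          };
          for (const entry of log) {
            const bucket = counts[entry.variant];
            if (entry.isNextPage) bucket.nextPages += 1;
            else bucket.newSearches += 1;
          }
          return counts;
        }

    If newSearches stays roughly equal across the two arms while nextPages drops sharply for the 30-result arm, the “lower traffic” is mostly fewer reloads rather than less willingness to search, which is the distinction drawn above.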

  6. I agree with some of Merijn’s assessments. We need a few more details from Google before we should really believe these results.

    I mean, here is one very, very simple thing they could have done: if users are so concerned about speed, and 30 results take so much longer than 10 results, why not wrap the whole ranked list in an AJAX layer? The AJAX layer could accept the immediate, quick top 10 results, as normal. There would be no slowdown at all — the top 10 results would appear with their usual speed. Then, because of the asynchronous nature of AJAX, as results 11-20 and 21-30 are computed, you could dynamically append those results to the end of the list. By the time the user got through reading/scrolling through results 1, 2, maybe even up to 5, the remaining results 11-30 would already be appended (a sketch of this appears at the end of this comment).

    Thus, the user would have all 30 results, and it would be completely transparent to the user, in terms of the quickness/responsiveness of the system.

    So, Google, did you really not think of doing that?

    Merijn also says: Now, I suspect the real reason to make users unnecessarily go through more, shorter results lists is advertising revenue. When users are done with the results on a page, they frequently check the sponsored results.

    I have been saying this for years now. In fact, the other experiment Google could have run is a Vivisimo experiment, wherein they cluster the top 100 (or however many) results. It gives you a quick overview of the top 100 results without having to sequentially scroll through every. single. one. What would be interesting to see is how many people, using the clustering wrapper, click a link that would otherwise NOT have been in the top 10 results, given just a flat Google list.

    I have not done this experiment, but I bet there would be a helluva lot of non-top 10 clicks.

    But again, the space that Vivisimo uses to do the results clustering is the space that Google uses to show ads. But in order to give the user tools to quickly and efficiently navigate to the 77th result, when that is the relevant page they want, you have to take away the advertisements… or risk cluttering the results page. And clutter is supremely un-Google-like.

    But that is a classic conflict: you have a trade-off between clustering tools and advertisements. You have to pick one or the other.

    And maybe Google has done the experiment that shows that clustering tools result in fewer relevant clicks than showing advertisements. But until I see that experiment, there is always going to be a doubt in my mind.
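
    A minimal sketch of the AJAX append described in this comment, in TypeScript using the browser fetch API; the /search endpoint, its parameters, and the JSON shape are hypothetical stand-ins, not Google’s actual interface:

        // Render the fast top-10 page as usual, then fetch results 11-20 and
        // 21-30 in the background and append them so the user never waits for them.
        interface Result { title: string; url: string; }

        async function appendRemainingResults(query: string): Promise<void> {
          const list = document.getElementById("results"); // the ranked-list element
          if (!list) return;

          for (const start of [10, 20]) { // results 11-20, then 21-30
            const resp = await fetch(
              `/search?q=${encodeURIComponent(query)}&start=${start}&num=10`
            );
            const results: Result[] = await resp.json();

            for (const r of results) {
              const li = document.createElement("li");
              const a = document.createElement("a");
              a.href = r.url;
              a.textContent = r.title;
              li.appendChild(a);
              list.appendChild(li); // appears below the original top 10
            }
          }
        }

        // Kick off the background fetches once the initial top-10 page has rendered.
        // appendRemainingResults("current query");

    Whether the appended results really arrive before the user needs them depends on how long results 11-30 take to compute, which is exactly the data only Google has.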

  7. Actually, I only need one result. If I knew I would get the correct answer, the length of time would not be that big an issue. However, if I knew I would have to wade through pages of results, then speed would become more important.

  8. One thing that I can’t for the life of me figure out is why Google still hides its products and services. I worked at Google, and I know the Google web site very well. At the same time, I still have a heck of a time trying to find certain services. For example, when Google launched Web Optimizer, I spent 15 minutes trying to find it on their web site. Another huge thing that Google misses is linking back to Google home or other key pages from their various services.

    A while ago, a usability expert wrote that a web site’s homepage is a window into the true nature of the organization. If we were to take it one step further and apply it to the whole web site, couldn’t we say that Google is vastly compartmentalized and confused? I am a user experience designer, and I cannot for the life of me figure out why they make it so darn hard to find their services and products. I mean, it takes at least four clicks to get to ‘Docs and Spreadsheets’, including clicking on ‘About Google’. To me the Google web site is like a messy room with things strewn all about. Guys, we like your products, please do not make it so hard to find them!

  9. Merijn,

    I doubt Google cares about how they can append data onto the initial set after the initial page load. The thing that is important to users is the first visible browser window full of content. What they want is for that to populate ASAP; scrolling down may be faster than hitting next and waiting for another load, but people don’t seem to care about that.

    I wonder if that is cultural, and whether it’s American or Euro/American. I’d bet other cultures view search results differently, just as Japanese and American video game players like slightly different POVs and solve puzzles differently.

    Regardless — this wasn’t new data, or even interesting data, being shared with us. It’s not shocking that people want faster pageload times. Browser wars have been fought over page render time vs. fullness of feature set, and I think page render time always wins as the most important factor.

    Jason

  10. Jason: But the point is that Google did not really answer the question. Users said that they wanted more results, and Google’s response was “OK, but we are going to make it take longer… so do you still want more results?” Of course the answer was no. Google therefore concluded that users really didn’t want more results.

    That seems like fallacious reasoning. Google never really tested what the users asked for, which was “ceteris paribus, give us more results”. In other words, hold the page render time constant, but just give us 30 results instead of 10.

    Google could have done this experiment if it had wrapped the ranked list in an AJAX layer that allowed asynchronous appending of results 11-20 and 21-30. That way, results 1-10 would still load/render just as fast. And by the time the user finished reading the top link, results 11-30 would be there, as well. The user would never realize that the “whole” page didn’t render immediately, and it would actually appear to the user as if he had gotten 30 results in the exact same amount of time as it took to get 10 results.

    Then, and only then, could Google actually examine the user statement (hypothesis) that more results are desired.

    In science, you want to hold all variables constant, except for the variable you are testing. Google failed to do this. They let the results size vary, but also varied the page load time. That makes it impossible to test the initial hypothesis.

    That’s bad science. Then again, Google has long claimed to be a company full of engineers and hackers (coders), and not scientists.

  11. My own research (loads of qualitative interviews) somewhat confirms the Google research. However, back at SEW in San Jose, the Yahoo team focused on “trust” as a critical layer in search attraction. This theory became a founding driver behind http://www.url.com... then again, do MORE results increase trust…

  12. JG–

    I’m with you that Google apparently didn’t take a very scientific approach to solving this problem; what I find more troubling is that this is what they chose for their CMO to talk about at a conference like Web2.

    Spending the 25-30 minutes in front of the Web2 audience saying nothing is pretty telling about the respect they have for the rest of the community. Either we’re all too stupid to understand the real science they are working on — or they just don’t want to share anything with us, because we’re all competition.

    I’m sure there was something more interesting they could have shared, and with a little thought about what to share it could have helped their business. “People wanted faster pageload times” is a waste of the time Web2 gave them AND a waste of time for the people who paid, quite richly, to hear it.

    Talk about disrespect.
