A Brief Interview with Google’s Matt Cutts

Matt is the man the SEO/SEM world looks to for answers on most things Google-related. Over the past month, Melanie and I have had a wide-ranging email exchange with him on spam, the role of humans at Google, and other topics. Here’s the result:

Let’s say you decide to leave Google and are asked to write an exact job description for a replacement to do exactly what you do now. What does it say? (We told Matt to be honest, or his options will not vest!)

My official job is to direct the webspam team at Google. Webspam is essentially when someone tries to trick a search engine into ranking their pages higher than they deserve. A few people will try almost anything, up to and including the mythical GooglePray meta tag, to rank higher. Our team attempts to help high-quality sites while preventing deceptive techniques from working.

As a result of working on webspam, I started talking to a lot of webmasters on forums, on blogs, and at conferences. So I’ve backed into handling a fair chunk of webmaster communication for Google. Last year I started my own blog so that I could answer common questions or debunk stuff that isn’t true (e.g., inserting a GooglePray meta tag doesn’t make a whit of difference). These days, when I see unusual posts in the blogosphere, I’ll try to get a bug report to the right person, or clarify things if someone is confused.

As you pointed out, you’ve become the human voice between Google and webmasters/SEOs. We’ve heard Google needs to manually remove spam sometimes. And even the algorithm-based feed for Google News requires an editorial gatekeeper for selecting sites. Do you think there is a growing role for human presence in Google’s online technologies?

Bear in mind that this is just my personal opinion, but I think that Google should be open to almost any signal that improves search quality. Let’s hop up to the 50,000-foot view. When savvy people think about Google, they think about algorithms, and algorithms are an important part of Google. But algorithms aren’t magic; they don’t leap fully formed from computers like Athena bursting from the head of Zeus. Algorithms are written by people. People have to decide the starting points and inputs to algorithms. And quite often, those inputs are based on human contributions in some way.

The simplest example is that hyperlinks on the web are created by people. A passable rule of thumb is that for every page on the web, there are 10 hyperlinks, and all those billions of links are part of the material that modern search engines use to measure reputation. As you mention, Google News ranks based on which stories human editors around the web choose to highlight. Most of the successful web companies benefit from human input, from eBay’s trust ratings to Amazon’s product reviews and usage data. Or take Netflix’s star ratings. This past week I watched Brick and Boondock Saints, and I’m pretty sure that L4yer cake and Hotel Rwanda are going to be good, because all those DVDs have 4+ stars. Those star ratings are done by people, and they converge to pretty trustworthy values after only a few votes.

So I think too many people get hung up on “Google having algorithms.” They miss the larger picture, which (to me) is to pursue approaches that are scalable and robust, even if that implies a human side. There’s nothing inherently wrong with using contributions from people–you just have to bear in mind the limitations of that data. For example, the three companies I mentioned above have to consider the malicious effect that money can have in their human systems. Netflix doesn’t have to worry much (who wants to spam a DVD rating?), while eBay probably spends a lot more time thinking about how to make their trust ratings accurate and fair.
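
Matt’s point that links are human contributions is the intuition behind link-analysis scoring such as PageRank. The following is a rough sketch only: a toy power iteration in Python over an invented four-site link graph, not Google’s production system. It shows how pages that many others link to accumulate reputation:

```python
# Toy power-iteration sketch of link-based reputation, in the spirit of
# PageRank. The four-site link graph is invented for illustration.
damping = 0.85

links = {
    "a.example": ["b.example", "c.example"],
    "b.example": ["c.example"],
    "c.example": ["a.example"],
    "d.example": ["c.example"],
}

pages = list(links)
rank = {page: 1.0 / len(pages) for page in pages}

for _ in range(50):  # iterate until the scores settle
    new_rank = {page: (1 - damping) / len(pages) for page in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

# c.example scores highest: three pages "vouch" for it with links.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```

Because every inbound link acts as a vote, links that pass this kind of weight are worth money, which is exactly why the link-selling questions later in this interview matter.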

Google recently added user tagging to photos. It’s an interesting way to sort search, adding a personal and human dimension, yet it opens a can of worms for syntax and keyword variation. Is this social training of human input going to be applied to other dimensions of search at Google? Requiring labels to gain a critical mass before they become official is a clever step, but of course it’s not immune to automated spamming. From your perspective on quality control, is this going to open doors for more abuse of Google as a platform?

I personally would love to see more human input into search at Google. But the flip side is that someone has to pay attention to potential abuse by bad actors. Maybe it’s cynical of me, but any time people are involved, I tend to think about how someone could abuse the system. We saw the whole tagging idea in Web 1.0, when the tags were called meta tags, and some people abused them so badly with deceptive words that to this day most search engines give little or no scoring weight to keywords in meta tags.

Google took a cautious approach on this image tagging: the large pool of participants and their random pairing makes it harder to conspire, and two people have to agree on a tag. Users doing really weird things would look unusual, and image tagging is easy for people but much harder for a bot. As tagging goes, it’s on the safer end of the spectrum.

I think Google should be open to improving search quality in any way it can, but it should also be mindful of potential abuse with any change.
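
To make “two people have to agree” concrete, here is a minimal sketch of agreement-based tagging, assuming the random pairing and critical-mass rules Matt describes; the player names, tags, and two-pair threshold are hypothetical, not Google’s actual implementation:

```python
import random
from collections import Counter

AGREEMENT_THRESHOLD = 2  # independent pairs that must agree (assumed)

def play_round(players, image, suggest):
    """Pair two random players; keep only the tags both typed."""
    a, b = random.sample(players, 2)
    return set(suggest(a, image)) & set(suggest(b, image))

def official_tags(agreed_tag_sets):
    """A tag becomes official once enough pairs have agreed on it."""
    counts = Counter(tag for tags in agreed_tag_sets for tag in tags)
    return {tag for tag, n in counts.items() if n >= AGREEMENT_THRESHOLD}

# Stand-in for human input; a lone prankster's "zebra" tag can never
# survive the agreement step, since no random partner will type it too.
answers = {"u1": ["dog", "park"], "u2": ["dog"],
           "u3": ["dog", "zebra"], "u4": ["dog", "park"]}
suggest = lambda player, image: answers[player]

rounds = [play_round(list(answers), "img-42", suggest) for _ in range(8)]
print(official_tags(rounds))  # usually {'dog'}, occasionally 'park' too
```

The design choice is the one Matt highlights: a single bad actor cannot promote a deceptive tag alone, because the signal only counts when independent strangers converge on it.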

The W3C is listing its supporters’ websites on PageRank 9 and PR7 pages in exchange for donations, $1,000 a pop in cash or trade (http://www.w3.org/Consortium/sup). Speculation about this is buzzing because, though the W3C is a well-respected organization, many SEO black hats endorse similar tactics. Does Google consider link selling a type of webspam against Google’s TOS? If so, should we expect to see some kind of censure of the W3C? And if not, how does it differ from what Google considers webspam?

I’ve said this before in a few places, but I’m happy to clarify. Google does consider it a violation of our quality guidelines to sell links that affect search engines. If someone wants to sell links purely for visitors, there are any number of ways to do it that don’t affect search engines. You could have paid links go through an internal redirect, and then block that redirecting page in robots.txt. You could add the rel=”nofollow” attribute to a link, which tells search engines that you can’t or don’t want to vouch for the destination of the link. The W3C decided to add an “INDEX, NOFOLLOW” meta tag to their sponsor page, which has the benefit that the sponsor page can show up in search engines and users receive nice static links they can click on, but search engines are not affected by the outlinks on that page. All of these approaches are perfectly fine ways to sell links, and are within our quality guidelines.
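
For concreteness, here is roughly what the mechanisms Matt lists look like in markup; the sponsor URL and redirect path are hypothetical examples, not the W3C’s actual page source:

```html
<!-- Option 1: per-link nofollow. Visitors can still click the link,
     but search engines are told not to count it as a vouch. -->
<a href="https://sponsor.example.com/" rel="nofollow">Our sponsor</a>

<!-- Option 2 (the W3C's choice): a page-level robots meta tag.
     The page itself may be indexed, but none of its outgoing links
     pass search-engine weight. -->
<meta name="robots" content="INDEX, NOFOLLOW">

<!-- Option 3: route paid links through an internal redirect
     (e.g. /out/sponsor) and block that path in robots.txt, so
     crawlers never follow the redirect:

       User-agent: *
       Disallow: /out/
-->
```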

Did the W3C decide to add the meta tag on their own, or was that the result of talks between you and the W3C?

We were happy to talk to people at the W3C to answer questions and give background info, but I believe they made the decision to add the meta tag themselves.

Thanks for the considered responses, Matt!

39 thoughts on “A Brief Interview with Google’s Matt Cutts”

  1. Great questions and clear answers from Matt, who is always a fine public speaker and a very nice guy when you get the chance to chat one-on-one at the conference watering holes.

    Just don’t let him press that RED BUTTON!

  2. Nice interview. One question: is paragraph 6, starting with “Bear in mind that this is just my personal opinion”, part of the question or the answer? It’s italicized on my computer, though it reads like an answer.

  3. Google’s only value is its algorithms; if they fail, then Google does not have any value. Google is already sinking in the filth it created: made-for-AdSense (MFA) garbage pages, millions (or is it billions?) of them…

     Recently, when my 10-year-old daughter searched for information for a school project, all she could find were MFA pages!

  4. Matt, I have one line of questions. Google’s job is to measure the relevancy of web pages with algorithms, human computation, or whatever other means Google has or could have.

     But is it fair for Google to put restrictions on what people can do with the web? Is this not a big-brother approach, which in the long term is quite harmful for the web, and therefore for Google too?

     For example, suppose the W3C has a very nice website which people like to visit, i.e., one that naturally has a high relevancy rank. Now, for some reason, the W3C decides to put links on it to other websites (maybe for money, maybe for other reasons) which do not have a high natural relevancy rank. Don’t you think it is Google’s job to innovate ways to recognize that those links are somehow different and not raise the relevancy rank of the linked websites?

     Is censoring the W3C website, as was done with BMW’s website, i.e., deliberately not showing the W3C website when it is the right answer for a user’s query, really a good thing for the web? People should be able to design their websites in whatever way, and for whatever reason, they want. A good search engine should try to find the best web pages for a user’s query. Of course, by definition, a spammed web page is not a good page for a user’s query, so if a search engine is good, it won’t show those pages to users, thereby taking away any incentive for the spammers.

     I think, in the long run, whatever is good for the web is good for Google too. Google telling BMW how to design its website may work for Google in the short term, but it is not a good long-term approach. Google may issue guidelines on how webmasters could assist Google in evaluating their websites, but any threat of censuring a website, or disallowing one from exercising one’s legal rights (e.g., selling links), or dictating what changes people should make to shield their websites from Google’s wrath (which should not even exist), is a restriction on people’s use of web pages. I think it is perfectly fine for Google to say that it will ignore certain kinds of links. But punishing a website because it is not designed according to Google’s guidelines is bad behavior. It may be good for Google for now, but may be bad for the web’s growth.

     A crude analogy is as follows. Suppose a state only knows how to compute sales tax on whole dollars, so it prohibits merchants from pricing goods and services in cents. The right approach is that, until the state figures out how to compute sales tax on cents, it should waive the tax on the cents.

     PS: The commenter is a Microsoft employee, but the viewpoint expressed is his own personal viewpoint.

  5. Good interview, but to be honest, the end of it left me thinking, “okay, so where’s the rest?” I would love for you to continue it and do a more in-depth interview. It seemed a little short to me.

  6. “But is it fair for Google to put restrictions on what people can do with the web?”

    Kamal, you mostly lost me at that second sentence, because Google does not sit between you (or me) and websites. You can access them directly.

    That said, when BMW futzes with their Google presence and skews things in their favor, it makes it worse for everybody. When I use Google to do searches for car stuff, I choose Google to decide what’s relevant for my car search, not BMW.

    Also, the W3C thing is different. I’m pretty sure Matt effectively said that those people who paid $1000 just for a link threw their money down the drain (good), but you can still get to the page because they index it.

  7. Chris writes: When I use Google to do searches for car stuff, I choose Google to decide what’s relevant for my car search, not BMW.

     If your query is “car”, and BMW comes up, is that not relevant? A car is what you asked for, and a car is what you got. If you instead searched for “car reviews”, because you want pages that compare and contrast lots of different cars, well, then you asked for something different, and BMW, or any single car manufacturer, is not relevant. But then, you asked for something different by changing your query.

    Your mistake is assuming that for some strange algorithmic reason Google actually ranks various cars by their “goodness”, when you type the query “car”. It is not Google’s job to handhold you. It is Google’s job to give you what you asked for. And if you just asked for “car”, and you get BMW, then you got what you asked for.

    If you want something else (e.g.: “car -european”, or “car +japanese”), it is up to you to ask for it.

  8. Chris, Google does sit between a person and billions of sites. It is impossible for a person to remember the web addresses of all those billions of sites, so the person uses Google.

     Note that Google itself places paid links in exchange for money (that’s how Google makes 95% of its revenue). This business model allows Google to offer hundreds of services to users. So paid links are a thriving business model. Google argues that the paid links on its search pages are relevant for users (not always true, but economic sense says advertisers will want to place only relevant links).

     So Google is saying that charging money for placing links is a legitimate business model as long as websites try to place relevant links. It is then a perfectly fine business model for other websites to charge money for placing external links; those links could be relevant for the same reason the paid links on Google’s search pages are.

     So why does Google destroy other people’s business models? If Google thinks some set of links is not properly serving Google’s search users, then Google could ignore those links and measure relevancy by relevant links only. It is a big-brother approach for a search engine to censor a website which a user may actually be looking for with his or her query. If Google is arm-twisting a big company like BMW, what might it be doing to little players, whose websites nobody may have been able to discover because of censorship?

  9. Seems to be a confusion in one question between the World Wide Web Consortium (w3.org) and W3Schools (w3schools.com). The two sites are unconnected. W3C is not an educational resource, strictly speaking.

  10. The interview is good, as far as it goes. I know that Matt Cutts has a lot of things to do, and there are so many questions that he’d never have the time to answer them all and eat and sleep.

     There are five questions in total: the first is an introduction, the second a follow-up on Google and the problems with algorithms and humans, and the third is about search. The fourth and fifth are about the same subject, and not up to the standard of the others, which leaves this reader wondering why the interview goes in that direction. Re-reading the third question for context, there is only a very tenuous thread between three and four.
     So now I’m left wondering what the interviewers have against the W3C?

     Which, I’m sure, is not their intention. Had the interview been longer, or had the fifth question been about something else, I wouldn’t be left wondering.

  11. Matt’s open communication approach has earned him a lot of respect, reputation, and friends. This interview is yet another example of his great communication skills.

     There is nobody like him at Yahoo or MSN search, even though there have been some nice tries.

     I think that within a few months every major search engine will have a designated webmaster communication team or spokesperson.

  12. Google is to blame for so-called spam, not solely the webmasters… let Search Engines Web explain why:

    Using link popularity in the early days was innovative and filled an important void that allowed Google to flourish.

     THERE WAS NO ONE RIGHT ANSWER… but collectively the search engines all brought unique algos to the table (Excite introduced CONCEPT SEARCHING; DirectHit introduced CLICK POPULARITY).

     As Google became popular, it was negligent of its owners and engineers NOT to understand that link popularity discriminated against newer sites. It was during that time that they should have made some attempt to correct those inequities, so as to keep the web a democratic place for both old and new.

     When Google became the default search engine for AOL and then Yahoo, it was responsible for 75% of all organic searches… THAT’S WHEN THE TABLES TURNED! (MSN still used pages of LookSmart listings, so in reality Google was almost the ENTIRE organic SERPs.)

     Inequities are one thing when you are just another search engine with an average share… but once you become an almost-monopoly… well.

     Pay-per-click was just really taking off then, and rampant click fraud went uncurbed… What choice did webmasters have but to explore less acceptable tactics to survive?

     These tactics are symptoms of a deeper problem and should be treated as such, rather than taking the easy way out and banning an ENTIRE domain because of one sentence of hidden text on ONE PAGE, or one optimized, keyword-rich doorway page that redirects.

  13. Short, but a great interview nonetheless.

    This should have been done Matt Cutts style, as in it should have been a video – maybe even using candy or something as a demonstration.

    Thanks for the info!

  14. I’m off to add the GooglePray tag to all my sites. Cheers for clearing that one up, Matt.

     Seriously though, good article. It’s also nice to know that Google DOES communicate directly with the big guys regarding “selling PageRank”.

  15. Nice interview! Of course, Matt and his team are doing a fine job of keeping the spammers away. But Google itself is creating a mess with AdSense, Google Video, etc. The other day I was trying to find the website of Dotties Hotel in Butuan, Philippines, and when I searched for the term ‘dotties hotel butuan’ in Google, the top entries were Google videos. Why? Just because video.google.com has a high PageRank. Is that not spamming sponsored by Google?

  17. >>But is it fair for Google to put restrictions on what people can do with the web?

    So if I build a crap page which ends up not ranking #1 for “real estate”, I’m a victim of Google censorship, right?

    Out of billions of webpages, Google must decide which one is the most relevant for any query.

    You are saying you prefer the SERP to be in random order, which is ridiculous.

  18. Halfdeck, if you build a crap page, then I am not saying Google should show it.

     But what if you build a fantastic web page which everybody wants to visit for a query? Then Google must show it, even if you decide to monetize it by putting paid links there and not changing robots.txt.

     A dominating search engine should not use its weight to decide how you monetize your fantastic web page. If you put AdSense paid links on it, that seems okay with Google, though in reality having AdSense paid links there may actually decrease the desirability of your otherwise fantastic web page. Therefore, if a search engine decreases the relevancy rank of a web page because of AdSense paid links, that seems understandable. But a search engine censoring a web page because the owner was carefully selling static links is a big-brother approach.

     Just to remind you, since people keep forgetting, Google is the same company which some time back even boycotted a news outlet. I love Google and its products, even though Google is a competitor to my employer, but on the other hand Google does give hints that if it becomes a dominating player, then …

  19. BTW, AltaVista also went through “human-aided ranking” before Google came along! History repeats itself…

     If Google needs to depend on “human-aided ranking”, then many smaller companies will be able to provide better web indexing than Google…

  20. Many times I also wonder why some “crappy” websites that have no content at all, but are very well optimized, rank better than more authoritative sites. Maybe the “human touch” is the only thing that can discern, after all, which is the better website and which is the crappy one.
     And also, just like Roger said, sometimes these “children” of Google are the main results, or even blogs or forums. Can anything be done about this?

  21. The great GOOG could clean up two-thirds of the crap pages in a single swoop by turning off AdSense.

     Alas… there goes two-thirds of our revenue… Oh, the humanity!

  22. Good interview. Matt and his team are definitely making excellent efforts to cut down spam in the search results. But now Google has become a giant; will it really be possible to implement the “human touch” at this stage? Many smaller search engines tried it earlier and failed. Anyway, there will always be spammers who try to beat the system with new techniques. Money drives spam, and it is killing the true spirit of the internet.

  23. Nice and really very informative interview. One thing I think is that no one in this world is doing social service; everyone is here to do business, especially in the internet world. So if someone places links to other sites for money, I don’t think there is anything wrong with that; that’s exactly what search engines are doing.

  24. So if the W3C added an “INDEX, NOFOLLOW” tag to their sponsor page, does that mean everyone who paid for links on that page isn’t really getting anything for the money they paid to be listed there? Or am I reading Matt’s response wrong?

  25. “Everyone is here to do business, especially in the internet world. So if someone places links to other sites for money, I don’t think there is anything wrong with that; that’s exactly what search engines are doing.”

     That’s true. But too many outgoing links would also negatively affect the site’s own ranking. So a link redirect should be a good solution. However, people may not like to buy links in that case.

  26. W3C and “W3Schools” are two different sites and entities, not related to each other. It would be good to fix that in the question above. Thanks.

  29. Search results should not be based on popularity, so that the site with the most friends wins. If Google would just take the link-juice factor out of the algo, it would solve the problem. People could then sell links/advertising on high-traffic sites simply because they have good traffic. Expecting people to run out and add nofollow to a bunch of old pages? It’s not gonna happen. I would say the majority of people don’t even know what nofollow is, and many just don’t have the time to deal with it. Trying to knock down the spammers always results in innocent sites being hurt too. Please get rid of the link juicer; you don’t need it in the algo.

  30. That was a really good article, though as someone said above it was a bit short. I like the way you put your questions to him, and his excellent, detailed responses; I assume it must take some practice to answer questions without giving too much of Google’s secrets away. I’d like to read more of your interviews with high-profile web figures. Consider your site bookmarked 🙂
