Google v. Facebook? What We Learn from Twitter.

Last week I wrote a post in which I opined a bit about Facebook search. In it I wrote:

Facebook is way more than its newsfeed, and its search play is key to proving that value, and extending it…. No doubt building Facebook search today is akin to building Google ten years ago – bigger, most likely, in terms of data, algorithmic, and platform challenges.

If only I had waited a few days, I could have pointed to Fred Vogelstein’s piece in Wired, out this week. He profiles the ongoing feud between the King of Search, Google, and the upstart, Facebook. In his piece, he writes:

For the last decade or so, the Web has been defined by Google’s algorithms—rigorous and efficient equations that parse practically every byte of online activity to build a dispassionate atlas of the online world. Facebook CEO Mark Zuckerberg envisions a more personalized, humanized Web, where our network of friends, colleagues, peers, and family is our primary source of information, just as it is offline. In Zuckerberg’s vision, users will query this “social graph” to find a doctor, the best camera, or someone to hire—rather than tapping the cold mathematics of a Google search. It is a complete rethinking of how we navigate the online world, one that places Facebook right at the center. In other words, right where Google is now.

I agree that of all the contenders out there right now (including Twitter), Facebook has the data, the position, and the potential to upset Google’s dominance of the web. But I disagree with one premise of the piece: that Facebook’s proprietary approach to the data it stores creates a blind spot for Google, and that this blind spot gives Facebook a competitive edge. Fred writes:

Together, this data comprises a mammoth amount of activity, almost a second Internet. By Facebook’s estimates, every month users share 4 billion pieces of information—news stories, status updates, birthday wishes, and so on. They also upload 850 million photos and 8 million videos. But anyone wanting to access that stuff must go through Facebook; the social network treats it all as proprietary data, largely shielding it from Google’s crawlers. Except for the mostly cursory information that users choose to make public, what happens on Facebook’s servers stays on Facebook’s servers. That represents a massive and fast-growing blind spot for Google, whose long-stated goal is to “organize the world’s information.”

I think it’s a major strategic mistake for Facebook not to offer this information to Google (and anyone else who wants to crawl it). In fact, I’d argue the right thing to do is to make just about everything possible available for Google to crawl, then sit back and watch while Google struggles with whether or not to “organize it and make it universally accessible.” A regular damned-if-you-do, damned-if-you-don’t scenario, that….
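
A quick aside on mechanics: the line between shielding data from Google’s crawlers and offering it to them is drawn by the Robots Exclusion Protocol, the robots.txt file every well-behaved crawler checks before fetching anything. Here’s a minimal sketch of that check in Python; the two policies are illustrative stand-ins for an open, Twitter-style stance and a shielded, Facebook-style stance, not either company’s actual robots.txt file, and the URLs are hypothetical:

    from urllib.robotparser import RobotFileParser

    # An open stance: every page (e.g., every tweet permalink) may be fetched.
    open_policy = [
        "User-agent: *",
        "Disallow:",
    ]

    # A shielded stance: crawlers are turned away from everything.
    shielded_policy = [
        "User-agent: *",
        "Disallow: /",
    ]

    def googlebot_may_fetch(policy, url):
        parser = RobotFileParser()
        parser.parse(policy)  # parse() accepts the file as a list of lines
        return parser.can_fetch("Googlebot", url)

    print(googlebot_may_fetch(open_policy, "http://twitter.com/user/status/123"))       # True
    print(googlebot_may_fetch(shielded_policy, "http://facebook.com/profile.php?id=1"))  # False

Under the open policy, Googlebot may fetch anything; under the shielded one, nothing.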

For an example of what I mean, look no further than Twitter. That service makes every single tweet available as a crawlable resource. And Google certainly is crawling Twitter’s pages, but the key thing to watch is whether Google surfaces “superfresh” results when the query merits it. So far, the answer is a definitive NO.

Why?

Well, perhaps I’m being cynical, but I think it’s because Google doesn’t want to push massive value and traffic to Twitter without a business deal in place where it gets to monetize those real-time results.

Is that “organizing the world’s information and making it universally accessible”? Well, no. At least, not yet.

By making all its information available to Google’s crawlers (and fixing its terrible URL structure in the process), Facebook could shine an awfully bright light on this interesting conflict of interest.
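
To make that URL complaint concrete, here’s a hedged illustration (both addresses are hypothetical examples of the two styles, not lifted from Facebook’s site). When a page’s identity lives in query parameters, a crawler sees the same script path over and over; a stable, readable path per page is far easier to crawl, index, and link to:

    from urllib.parse import parse_qs, urlparse

    # Crawler-hostile: one script path for every profile; identity is buried
    # in the query string.
    opaque = "http://www.facebook.com/profile.php?id=1234567890&ref=nf"

    # Crawler-friendly: one stable, readable path per page.
    clean = "http://www.facebook.com/some.username"

    print(urlparse(opaque).path)             # /profile.php (same for every profile)
    print(parse_qs(urlparse(opaque).query))  # {'id': ['1234567890'], 'ref': ['nf']}
    print(urlparse(clean).path)              # /some.username (unique to this page)

One canonical path per page also gives Google a single address to rank, rather than a cloud of parameter variants pointing at the same content.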
