**Updated at 3 PM PST with more info about Facebook/Google negotiations…please read to the bottom…**
In today’s business climate, it’s unusual for corporations to cooperate with each other when it comes to sharing core assets. Even when businesses do share, it’s usually for some ulterior motive, a laying of groundwork for future chess moves that ensure eventual domination over the competition.
Such is the way of business, particularly at the highest and largest levels, such as those now inhabited by top Internet players.
Allow me to posit that this philosophy is going to change over the next few decades, and further, indulge me as I try to apply a new approach to a very present case study: That of Google, Facebook, and Twitter as it relates to Google’s search index and the two social services’ valuable social interaction datasets.
This may take a while, and I will most likely get a fair bit wrong. But it seems worth a shot, so if you feel like settling in for some Thinking Out Loud, please come along.
First, some abridged background. Back in 2009, on the Web 2 Summit stage of all places (yes, I was the emcee), Google, Microsoft, Facebook and Twitter announced a flurry of deals, some of which were worked out in a last-minute fury of negotiations. Early in the conference Microsoft announced it would incorporate Twitter and Facebook feeds into its new search engine Bing. Not to be outdone, Google announced a deal with Twitter the next day. However, Google did not announce a deal with Facebook, and the two companies have never come to terms. Meanwhile, Microsoft has continued to deepen its relationship with Facebook data, to the point of viewing that relationship as a key differentiator between Bing and Google search.
All of these deals have business terms, some of them financial, all with limits on how data is used and presented, I would presume. Marissa Mayer of Google told me on the Web 2 stage that there were “financial terms” in Google’s deal with Twitter, but would not give me any details (nor should she have, frankly).
Fast forward to the middle of last year, when the Google/Twitter deal was set to expire. At about the same time as renewal was being negotiated, Google launched Google+, a clear Facebook and Twitter competitor. For reasons that seem in dispute (Google said yesterday Twitter walked away, Twitter has not made a public statement about why things fell apart), the renewal never happened.
And then yesterday, Google incorporated Google+ results into its main search index, sparking a debate in the blogosphere that rages on today – Is Google acting like a monopolist? Does Facebook or Twitter have a “right” to be included in Google results? Why didn’t Google try to negotiate inclusion with its rivals prior to making such a clearly self-serving move?
Google execs, including Chair Eric Schmidt, told SEL’s Danny Sullivan that the company would be happy to talk to both companies to figure out ways to incorporate Twitter and Facebook into Google search, but clearly, those talks could have happened prior to the G+ launch, and they didn’t (or they did, and did not work out – I honestly have no idea). When Danny pointed out that Twitter pages are publicly available, Schmidt demurred, saying that Google prefers to “have a conversation” with a company before using its pages in such a wholesale fashion (er, so did they have one, or not? Anyway…). He has a point (commercial deals are de rigueur), but…that conversation happened last year, and apparently ended without a deal. And around we go…
What’s clear is this: All the companies involved in this great data spat are acting in what they believe to be their own self interest, and the greatest potential loser, at least in the short term, is the search consumer, who will not be seeing “all the world’s information” but rather “that information which is readily available to Google on terms Google prefers.”
The key to that last sentence is the phrase “what they believe to be their own self interest.” Because I think there’s an argument that, in fact, their true self interest is to open up and share with each other.
Am I nuts? Perhaps. But indulge my insanity for a bit.
The Cost of Blinkered Competition
Back in the Web 1.0 days, when I was running The Industry Standard, I had a number of strong competitors. It’s probably fair to say we didn’t like each other much – we competed daily for news stories, advertiser dollars, and the loyalty of readers. The market for information about the tech industry was limited – there were only so many people interested in our products, and only so much time in the day for them to engage with us.
My strategy to win was clear: We’d make the best product, have the best people, and we’d win on quality. When I heard about one of our competitors badmouthing us, I’d try to ignore it – we were winning anyway: We had the dominant market share, the most revenue ($120mm in 2000, with $21mm in EBITDA), and the best product.
Then something strange happened: an emissary from a competitor called and asked for a meeting. Intrigued, I took it, and was surprised by his offer: Let’s put our two companies together. Apart, he argued, we were simply tearing each other down. Together, we could consolidate the market and ensure a long-term win.
I considered his idea, but for various reasons, we didn’t take him up on it. I felt like we had the dominant position, that his offer was driven by weakness, not intellectual soundness, and I also felt that a combination would require that my shareholders take on too much dilution.
Two years later, both of us were out of business.
Now, I’m not sure it would have mattered, given the great crash of 2001. But what is certainly true is that I could have thought a bit deeper about what this fellow was proposing. Back in the days of print-bound information, we were essentially competing on what were publicly available assets: stories, particularly interpretations and reportage around those stories, and people: writers, editors, ad sales executives, and management. Short of combining companies, there wasn’t really any other way for us to collaborate, or at least, so I thought.
But perhaps there could have been. It’s been more than a decade since that meeting, and I still wonder: perhaps we could have shared back-end resources like operations, publishing contracts, and the like, and saved tens of millions of dollars. We’d have competed just on how we leveraged those public assets (stories, people). Perhaps we might have survived the wipeout of the dot-com crash. We’ll never know. Since those publications died, the blogosphere has claimed the market, and it’s now far larger than the one we lost back in 2001. Of course I started Federated Media to participate in that model, and now FM has as large a revenue run rate as The Industry Standard had, across a far more diverse market.
Why am I bringing this up? Because I think there’s a win-win in this whole Google/Facebook/Twitter dust up, but it’s going to take some Thinking Differently to make it happen.
Imagine Twitter and Facebook offer efficient access to all of their “public” pages – those their users are happy to share with anyone (or even just with their pre-defined “circles”) – to Google under some set of reasonable usage terms. Financial terms would be minimal – perhaps just enough to cover the costs of serving such a large firehose of data to the search giant. Imagine further that Google, in return, agrees to incorporate this user data in a fashion that is fair – i.e., one that doesn’t favor any service over another, be it Twitter, Google+, or Facebook.
Now, negotiating what is “fair” will be complicated, and honestly, should be subject to iteration as all parties learn usage patterns. And of course all this should be subject to consumer control – if I want to see only Twitter or Facebook or Google+ results in particular searches (or all results for that matter), I should have that right.
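Mechanically, the consumer control described above amounts to a simple source filter over blended results. A minimal sketch, assuming a hypothetical blended index that tags each result with its originating service (the records, titles, and service names below are invented for illustration):

```python
# Hypothetical blended search results, each tagged with its source service.
results = [
    {"title": "A tweet about the Web 2 Summit", "source": "twitter"},
    {"title": "A public Facebook page",         "source": "facebook"},
    {"title": "A Google+ post",                 "source": "google+"},
]

def filter_by_source(results, allowed):
    """Keep only results from services the user has opted in to.
    An empty 'allowed' set means no preference: show everything."""
    if not allowed:
        return list(results)
    return [r for r in results if r["source"] in allowed]

# A user who wants only Twitter results for a particular search:
twitter_only = filter_by_source(results, {"twitter"})
```

The hard part, of course, isn’t the filter – it’s agreeing on how the blended list gets ranked before any filter is applied.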
And this leads me to my point. Such a setup, regardless of how painful it might be to get right, would create a shared class of assets that would have to compete at the level of the consumer. In other words, the best service for the query wins.
That’s always been Google’s stated philosophy: the best answer for the question at hand. Danny gets to this point in a piece posted last night (which I just saw as I was writing this): Search Engines Should Be Like Santa From “Miracle On 34th Street”. In it he argues that Google’s great strength has been its pattern of sending people to its competitors. And he upbraids Google for violating that principle with its Google+ integration.
It doesn’t have to be this way. It’s not only Google that’s at fault here.
Facebook and Google have not been able to come to terms on how to share data (more on that below*), and Twitter clearly wants some kind of value if it is to share its complete firehose with the search giant. Imagine if all three were to agree on minimal terms, creating a public commons of social data. Yes, that would put Google in an extreme position of trust (not to mention imperil its toddler Google+ service), but covenants can be put in place that allow parties to terminate sharing for clear breaches which demonstrate one party favoring itself over the others.
Were such a public commons to be created, then the real competition could start: at the level of how each service interprets that data, and adds value to it in various ways.
Four years ago to the month, I wrote this post: It’s Time For Services on The Web to Compete On More Than Data
In it I said: It’s time that services on the web compete on more than just the data they aggregate….
I think in the end, Facebook will win based on the services it provides for that data. Set the data free, and it will come back to roost wherever it’s best used. And if Facebook doesn’t win that race, well, it’ll lose over time anyway. Such a move is entirely in line with the company’s nascent philosophy, and would be a massively popular move within the ouroborosphere (my name for all things Techmeme).
Compete on service, Facebook, it’s where the world is headed anyway!
Two and a half years ago, as it became clear Facebook’s “nascent philosophy” had changed (and as Twitter rose in stature), I followed up with this post: Google v. Facebook? What We Learn from Twitter. In that post, I said:
I think it’s a major strategic mistake to not offer (Facebook’s pages and social graph) to Google (and anyone else that wants to crawl it.) In fact, I’d argue that the right thing to do is to make just about everything possible available to Google to crawl, then sit back and watch while Google struggles with whether or not to “organize it and make it universally available.” A regular damned if you do, damned if you don’t scenario, that….
For an example of what I mean, look no further than Twitter. That service makes every single tweet available as a crawlable resource. And Google certainly is crawling Twitter pages, but the key thing to watch is whether the service is surfacing “superfresh” results when the query merits it. So far, the answer is a definitive NO.
Well, perhaps I’m being cynical, but I think it’s because Google doesn’t want to push massive value and traffic to Twitter without a business deal in place where it gets to monetize those real time results.
Is that “organizing the world’s information and making it universally available?” Well, no. At least, not yet.
By making all its information available to Google’s crawlers (and fixing its terrible URL structure in the process), Facebook could shine an awfully bright light on this interesting conflict (of) interest.
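For what “crawlable” means in practice: a well-behaved crawler consults a site’s robots.txt before fetching pages, and that file is how a service draws the line between what search engines may index and what stays off-limits. A minimal sketch using Python’s standard library – the rules and URLs here are invented for illustration, not Twitter’s or Facebook’s actual policies:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: the site exposes its public pages to all
# crawlers but fences off a /private/ area.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())
parser.modified()  # mark the rules as loaded; an "unfetched" robots.txt
                   # is treated as disallow-all by can_fetch()

# Any well-behaved crawler (Googlebot included) checks these rules
# before fetching a page.
public_ok = parser.can_fetch("Googlebot", "https://example.com/status/123")
private_ok = parser.can_fetch("Googlebot", "https://example.com/private/inbox")
```

Under these rules `public_ok` is true and `private_ok` is false – which is the whole point: publishing liberal robots rules is how a service “makes every page available as a crawlable resource,” with no bilateral deal required.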