Media/Tech Business Models Archives - John Battelle's Search Blog

It’s Not Whether Google’s Threatened. It’s Asking Ourselves: What Commons Do We Wish For?

By - February 02, 2012

If Facebook’s IPO filing does anything besides mint a lot of millionaires, it will be to shine a rather unsettling light on a fact most of us would rather not acknowledge: The web as we know it is rather like our polar ice caps: under severe, long-term attack by forces of our own creation.

And if we lose the web, well, we lose more than funny cat videos and occasionally brilliant blog posts. We lose a commons, an ecosystem, a “tangled bank” where serendipity, dirt, and iterative trial and error drive open innovation. Google’s been the focus of most of this analysis (hell, I called Facebook an “existential threat” to Google on Bloomberg yesterday), but I’d like to pull back for a second.

This post has been brewing in me for a while, but I was moved to start writing after reading this piece in Time:

Is Google In Danger of Being Shut Out of the Changing Internet?

The short answer is Hell Yes. But while I’m a fan of Google (for the most part), to me the piece is focused too narrowly on what might happen to one company, rather than to the ecosystem which allowed that company to thrive. It does a good job of outlining the challenges Google faces, which are worth recounting (and expanding upon) as a proxy for the larger question I’m attempting to elucidate:

1. The “old” Internet is shrinking, and being replaced by walled gardens over which Google’s crawlers can’t climb. Sure, Google can crawl Facebook’s “public pages,” but those represent a tiny fraction of the “pages” on Facebook, and are not informed by the crucial signals of identity and relationship which give those pages meaning. Similarly, Google can crawl the “public pages” of Apple’s iTunes store on the web, but all the value creation in the mobile iOS appworld is behind the walls of Fortress Apple. Google can’t see that information, can’t crawl it, and can’t “make it universally available.” Same for Amazon with its Kindle universe, Microsoft’s Xbox and mobile worlds, and many others.

2. Google’s business model depends on the web remaining open, and given #1 above, that model is imperiled. It’s damn hard to change business models, but with Google+ and Android, the company is trying. The author of the Time piece is skeptical of Google’s chances of recreating the Open Web with these new tools, however.

He makes a good point. But to me, the real issue isn’t whether Google’s business model is under attack by forces outside its control. Rather, the question is far more existential in nature: What kind of a world do we want to live in?

I’m going to say that again, because it bears real consideration: What kind of a world do we want to live in? As we increasingly leverage our lives through the world of digital platforms, what are the values we wish to hold in common? I wrote about this issue a month or so ago: On This Whole “Web Is Dead” Meme. In that piece I outlined a number of core values that I believe are held in common when it comes to what I call the “open” or “independent” web. They also bear repeating (I go into more detail in the post, should you care to read it):

- No gatekeepers. The web is decentralized. Anyone can start a web site. No one has the authority (in a democracy, anyway) to stop you from putting up a shingle.

- An ethos of the commons. The web developed over time under an ethos of community development, and most of its core software and protocols are royalty free or open source (or both). There wasn’t early lockdown on what was and wasn’t allowed. This created chaos, shady operators, and plenty of dirt and dark alleys. But it also allowed extraordinary value to blossom in that roiling ecosystem.

- No preset rules about how data is used. If one site collects information from or about a user of its site, that site has the right to do other things with that data, assuming, again, that it’s doing things that benefit all parties concerned.

- Neutrality. No one site on the web is any more or less accessible than any other site. If it’s on the web, you can find it and visit it.

- Interoperability. Sites on the web share common protocols and principles, and determine independently how to work with each other. There is no centralized authority which decides who can work with whom, in what way.

I find it hard to argue with any of the points above as core values of how the Internet should work. And it is these values that created Google and allowed the company to become the world beater it has been these past ten or so years. But if you look at this list of values, and ask if Apple, Facebook, Amazon, and the thousands of app makers align with them, I am afraid the answer is mostly no. And that’s the bigger issue I’m pointing to: We’re slowly but surely creating an Internet that is abandoning its original values for…well, for something else that as yet is not well defined.

This is why I wrote Put Your Taproot Into the Independent Web. I’m not out to “save Google,” I’m focused on trying to understand what the Internet would look like if we don’t pay attention to our core shared values.

And it’s not fair to blame Apple, Facebook, Amazon, or app makers here. In conversations with various industry folks over the past few months, it’s become clear that there are more than business model issues stifling the growth of the open web. In no particular order, they are:

1. Engineering. It’s simply too hard to create super-great experiences on the open web. For many high-value products and services, HTML and its associated scripting languages – even in their HTML5 incarnation – are messy, incomplete, and not as fast, clean, and elegant as coding for iOS or the Facebook ecosystem. I’ve heard this over and over again. This means developers are drawn to the Apple universe first, web second. Value accrues where engineering efforts pay off in a more compelling user experience.

2. Mobility. The PC-based HTML web is hopelessly behind mobile in any number of ways. It has no eyes (camera), no ears (audio input), no sense of place (GPS/location data). Why would anyone want to invest in a web that’s deaf, dumb, blind, and stuck in one place?

3. Experience. The open web is full of spam, shady operators, and blatant falsehoods. Outside of a relatively small percentage of high quality sites, most of the web is chock full of popup ads and other interruptive come-ons. It’s nearly impossible to find signal in that noise, and the web is in danger of being overrun by all that crap. In the curated gardens of places like Apple and Facebook, the weeds are kept to a minimum, and the user experience is just…better.

So, does that mean the Internet is going to become a series of walled gardens, each subject to the whims of that garden’s liege?

I don’t think so. Scroll up and look at that set of values again. I see absolutely no reason why they can not and should not be applied to how we live our lives inside the worlds of Apple, Facebook, Amazon, and the countless apps we have come to depend upon. But it requires a shift in our relationship to the Internet. It requires that we, as the co-creators of value through interactions, data, and sharing, take responsibility for ensuring that the Internet continues to be a commons.

I expect this will be less difficult than it sounds. It won’t take a political movement or a wholesale migration from Facebook to more open services. Instead, I believe in the open market of ideas, of companies and products and services which identify the problems I’ve outlined above, and begin to address them through innovative new approaches that solve for them. I believe in the Internet. Always have, and always will.

Related:

Predictions 2012 #4: Google’s Challenging Year

We Need An Identity Re-Aggregator (That We Control)

Set The Data Free, And Value Will Follow

A Report Card on Web 2 and the App Economy

The InterDependent Web

On This Whole “Web Is Dead” Meme


Google+ Spreads to AdSense, Will It Spread to the Whole Web?

By - January 25, 2012

Seen in the wild (well, OK, on this very site):

The “Recommend this on Google” hover box at the bottom is new – I’ve never seen it before (then again, my ads are usually from FM). It’s what we in the biz call a “social overlay” or a “social ad” – and as far as I can tell, it’s only available to those advertisers who use Google AdSense.

Why am I on about this? Because some weeks ago, Facebook told a bunch of advertisers and third parties (FM was one of them) that it was no longer OK to integrate Facebook actions into third party advertisements. This was always in their policies, but everyone was pretty much ignoring it – including most of the largest advertisers on the planet. After all, it’d be pretty hard to tell major television advertisers to stop asking viewers to “Like us on Facebook”. But for some reason, Facebook recently decided enough was enough, and won’t let folks do exactly the same thing – with interactive functionality – online. You won’t be seeing ads on any site that integrate Facebook Likes, Shares, or other verbs, unless the advertisers paying for those ads have cut special deals with Facebook. (Or, of course, unless Facebook launches its own ad network…)

And yes, my sense of why Facebook might all-of-a-sudden-restrict advertisers or their partners from using Facebook actions in their ads stems from my prediction that Facebook is going to launch a competitor to AdSense, and that Facebook will want to differentiate its competitor by making “FaceSense” the only place across the web where you can run ads that drive Facebook social actions – Likes, Subscriptions, Shares, Recommendations, etc.

Because of this, I recently asked Google whether it would impose the same kind of restrictions on how advertisers might integrate Google+. I got a nuanced and careful response – Google doesn’t support it now, but is open to the idea in the future.

I’m thinking Google can differentiate itself by not acting like Facebook, and instead allowing any advertiser to integrate “+1” into their ads, regardless of where that ad runs – be it a direct buy on ESPN, an independent web player like FM, or, as seen above, a buy on Google’s own AdSense service.*

Anyway, it’s worth thinking about as we plot the strategies of the Big Five – what will their policies be relating to corporate speech and social services? So far, the answer is “not sure.” Worth asking Microsoft, Apple, and Amazon, come to think of it….I can’t imagine, for example, that Apple welcomes Facebook icons integrated into its iAds product – but then again, Facebook now doesn’t allow it anyway. Which seems to me a violation of some corporate right to free speech – but I digress. For now.

* If you’re wondering why AdSense is on my blog these days, well, I’m getting more traffic than we thought I would in January, and AdSense is picking up some of the extra impressions. Thanks for reading – I’m honored.

The Future of War (From Jan., 1993 to the Present)

By - January 24, 2012

(image is a shot of my copy of the first Wired magazine, signed by our founding team)
I just read this NYT piece on the United States’ approach to unmanned warfare: Do Drones Undermine Democracy? From it:

There is not a single new manned combat aircraft under research and development at any major Western aerospace company, and the Air Force is training more operators of unmanned aerial systems than fighter and bomber pilots combined. In 2011, unmanned systems carried out strikes from Afghanistan to Yemen. The most notable of these continuing operations is the not-so-covert war in Pakistan, where the United States has carried out more than 300 drone strikes since 2004.

Yet this operation has never been debated in Congress; more than seven years after it began, there has not even been a single vote for or against it. This campaign is not carried out by the Air Force; it is being conducted by the C.I.A. This shift affects everything from the strategy that guides it to the individuals who oversee it (civilian political appointees) and the lawyers who advise them (civilians rather than military officers).

It also affects how we and our politicians view such operations. President Obama’s decision to send a small, brave Navy Seal team into Pakistan for 40 minutes was described by one of his advisers as “the gutsiest call of any president in recent history.” Yet few even talk about the decision to carry out more than 300 drone strikes in the very same country.

Read the whole piece. Really, read it. If any article in the past year or so does a better job of displaying how what we’ve built with technology is changing the essence of our humanity, I’d like to read it.

For me, this was a pretty powerful reminder. Why? Because we put the very same idea on display as the very first cover story of Wired, nearly 20 years ago. Written by Bruce Sterling, whose star has only become brighter in the past two decades, it predicts the future of war with eerie accuracy. In the article, Sterling describes “modern Nintendo training for modern Nintendo war.” Sure, if he were all-seeing, he might have said Xbox, but still…here are some quotes from nearly 20 years ago:

The omniscient eye of computer surveillance can now dwell on the extremes of battle like a CAT scan detailing a tumor in a human skull. This is virtual reality as a new way of knowledge: a new and terrible kind of transcendent military power.

…(Military planners) want a pool of contractors and a hefty cadre of trained civilian talent that they can draw from at need. They want professional Simulation Battle Masters. Simulation system operators. Simulation site managers. Logisticians. Software maintenance people. Digital cartographers. CAD-CAM designers. Graphic designers.

(Ed: Like my son playing Call of Duty?)

And it wouldn’t break their hearts if the American entertainment industry picked up on their interactive simulation network technology, or if some smart civilian started adapting these open-architecture, virtual-reality network protocols that the military just developed. The cable TV industry, say. Or telephone companies running Distributed Simulation on fiber-to-the-curb. Or maybe some far-sighted commercial computer-networking service. It’s what the military likes to call the “purple dragon” angle. Distributed Simulation technology doesn’t have to stop at tanks and aircraft, you see. Why not simulate something swell and nifty for civilian Joe and Jane Sixpack and the kids? Why not purple dragons?

(Ed: Skyrim, anyone?!)

Can governments really exercise national military power – kick ass, kill people – merely by using some big amps and some color monitors and some keyboards, and a bunch of other namby-pamby sci-fi “holodeck” stuff?

The answer is yes.

Say you are in an army attempting to resist the United States. You have big tanks around you, and ferocious artillery, and a gun in your hands. And you are on the march.

Then high-explosive metal begins to rain upon you from a clear sky. Everything around you that emits heat, everything around you with an engine in it, begins to spontaneously and violently explode. You do not see the eyes that see you. You cannot know where the explosives are coming from: sky-colored Stealths invisible to radar, offshore naval batteries miles away, whip-fast and whip-smart subsonic cruise missiles, or rapid-fire rocket batteries on low-flying attack helicopters just below your horizon. It doesn’t matter which of these weapons is destroying your army – you don’t know, and you won’t be told, either. You will just watch your army explode.

Eventually, it will dawn on you that the only reason you, yourself, are still alive, still standing there unpierced and unlacerated, is because you are being deliberately spared. That is when you will decide to surrender. And you will surrender. After you give up, you might come within actual physical sight of an American soldier.

Eventually you will be allowed to go home. To your home town. Where the ligaments of your nation’s infrastructure have been severed with terrible precision. You will have no bridges, no telephones, no power plants, no street lights, no traffic lights, no working runways, no computer networks, and no defense ministry, of course. You have aroused the wrath of the United States. You will be taking ferries in the dark for a long time.

Now imagine two armies, two strategically assisted, cyberspace-trained, post-industrial, panoptic ninja armies, going head-to-head. What on earth would that look like? A “conventional” war, a “non-nuclear” war, but a true War in the Age of Intelligent Machines, analyzed by nanoseconds to the last square micron.

Who would survive? And what would be left of them?

Who indeed.

Put Your Taproot Into the Independent Web

By -

(image) This article – Early Facebook App Causes Is Being Reborn As A Polished Web Site For Good – caught my eye as I was nodding off last night (thanks so much for moving the web into my bedroom, Flipboard. No really.)

Now, it didn’t catch my eye because of its subject – Causes – but because of what its subject was doing: refocusing its business back out on the Independent Web, from its original home in the zoological garden that is the Facebook platform.

This is indicative of what I believe will become a trend over the next year or so, barring moves by Facebook to stem the tide (I’ve heard tell of far more “weblike” canvas pages coming, for instance). Companies that have planted their presence too deeply into the soils of Facebook are going to realize they need to control their own destiny, and move their focus and their core presence back into the independent waters of the open Internet (think Zynga “project Z”, for instance). Listen to Causes VP Chris Chan on the decision to move back to Causes.org:

As the years have progressed the web has gotten a lot more social, and it makes more sense to have our own brand and site. We can still be ‘on’ Facebook in the sense that we plug into News Feed and fan pages, but having our own brand gives us full, top to bottom control over the product experience, something that we think is critical for building the best tool possible for organizers to create campaigns for social change.

That “full, top to bottom control” means a lot more than just the chrome finishes on your website. It means controlling all the data created by interactions on that site, including if and how you share that data with your consumers and your partners (including Facebook, of course).

In seminars, writings, conferences, and speaking gigs around the world over the past couple of years, I’ve started using a phrase when asked my opinion of what a brand’s social strategy might be, in particular when it comes to Facebook. The context is nuanced (I’m a fan of integrating Facebook into your brand efforts), but the point is simple: If you are a brand, publisher, or independent voice, don’t put your taproot into the soils of Facebook. Plant it in the independent web. (A bit more on this can be found here).

Now, that doesn’t mean “don’t use Facebook,” not at all. I think Facebook is an extraordinarily important part of the Internet ecosystem, and having a robust presence there is a critical part of any brand’s (or company’s) strategy.

But Facebook is a for-profit, advertising- and data-driven company. If you seat mission-critical portions of your business inside its walls, you are driving value to Facebook – and you are presuming the trade, in terms of traffic and virality, will come out on balance favoring you. I wouldn’t count on that. Facebook will always have more data than you do about how consumers use the Facebook platform, and will always be able to leverage that data more effectively.

Not to mention, have you checked out Facebook’s terms of service when it comes to using data derived from its platform? Here are a few choice terms that come from a quick perusal (sources are here and here):

– You own your own content, but you grant Facebook license to use it as well.

– You may only request user data needed to operate your app (if you create a Facebook app as part of your presence on Facebook).

– You may not use data collected in your app in your other advertising efforts (including ad networks).

– You may not integrate analytics from third party sources into your efforts inside Facebook. Facebook, however, can gather data from how your app or page is used for their own advertising programs.

– Facebook reserves the right to do exactly what you’re doing at any time – if you create a killer new app inside Facebook, and it takes off, Facebook can decide to do the same thing. (Clearly Facebook isn’t motivated to do this if it angers a major advertising partner, but this term does give pause).

– Facebook reserves the right to market your work in Facebook’s own promotional efforts. But if you want to promote what you are doing on Facebook across third party advertising networks out on the Independent Web, you must get written permission.*

(I’ll be writing more about terms of service in general in another post). 

Now, I don’t think Facebook’s terms are particularly crazy; they’re written by lawyers looking to protect and preserve as much value as possible for Facebook as a corporation. They have the right to do so, and they are quite open and transparent about their policies.

But it drives me crazy to see major brands using expensive television time to drive consumers to a Facebook program that lives exclusively inside Facebook. (I imagine the reverse is true when Facebook executives see those same ads). I’m sure it works in the short term – you get folks there, they “like” or “follow” your brand, and they engage in whatever promotion or campaign is currently running. But if that campaign, promotion, or program lives only on Facebook, well, good luck deriving all the value you possibly can from it.

If that same program lives out on the Independent web – your own site, on your own domain, with your own platform – then you own all the data and insights, and you can broker those assets back into a Facebook page, or anywhere else you may care to. It doesn’t work the other way around. Imagine trying to replicate the value you create in a Facebook-exclusive program into, say, Google+ or Twitter, or in a major buy across an agency trading desk. Not with the terms outlined above.

It’s not like Facebook is stopping brands from leveraging the service out on the open web – that’s the point of the Open Graph, after all (and it’s what Causes is using now). Facebook knows that independence is critical to the future of the Internet, and has created tools to ensure it’s a major player there. My advice: use those tools inside your own presence on the web. But put your taproot into soil that you control, soil that is shared by the millions of other independent voices on the web. That ensures you’ll be part of a free and open ecosystem where serendipity and opportunity can create wonderful new possibilities.

—-

*Thanks to my researcher, LeeAnn Prescott, for analysis of these terms. If I’ve gotten any of this wrong, I hope folks from Facebook and/or my smarter-than-I-am readership will correct me, and I’ll update this post accordingly. 

Also, an important caveat – I am founder and Chair of a company that promotes the Independent Web, and operates a significant network for the purposes of advertising. 

Google+: Now Serving 90 Million. But…Where’s the Engagement Data!

By - January 20, 2012

Google didn’t have a great earnings call today – the company missed Wall St. estimates and the stock is getting hammered in after hours trading – it’s down 9 percent, which is serious whiplash for a major stock in one day.

But while there’s probably much to say about the earnings call – in particular whether Google’s core CPC business is starting to erode (might that be due to Facebook, Wall St. wonders?) – I’m more interested in Google’s jihad against samesaid competitor, a jihad called Google+.

And in the earnings call, Google+ was identified as one of the shining stars of the quarter.

Here’s a quote from the press release, the very first quote, attributed to Larry Page. I’ve highlighted the parts where Google+ is mentioned.

 “I am super excited about the growth of Android, Gmail, and Google+, which now has 90 million users globally – well over double what I announced just three months ago. By building a meaningful relationship with our users through Google+ we will create amazing experiences across our services.”

You getting that? The lead quote had to do with Google+, pretty much, not the company’s earnings, which ended up being a miss (Google is blaming fluctuations in foreign currency for much of that, and I have no idea whether that’s true, false, or silly).

But here’s my question: When is Google going to release actual engagement numbers for Google+? Because in the end, that’s all that really matters. As I have written in the past, it’s pretty easy to get a lot of people signing up for Google+ if you integrate it into everything Google does (particularly if you do it the way they’ve done it with search).

But can you get those folks to engage, deeply? That’d be a real win, and one I’d give full credit to Google for executing. After all, it’s one thing to get the horse to water…another to have it pull up a chair and share a few stories with friends.

Now, Page did talk about engagement in his comments today, but as far as I can tell, it was not specific to Google+ (though it was crafted to be easily conflated, and in reports I’ve seen across the web, it has been). He certainly led with Google+, but this is what he said:

“Engagement on + is also growing tremendously. I have some amazing data to share there for the first time: +users are very engaged with our products — over 60% of them engage daily, and over 80% weekly.”

Er….so you’re saying the folks who use Google+ use *Google* a lot. That’s not surprising – most of them came to Google+ because they were already using Google a lot. But what about minutes per month using Google+? I’m guessing if Google had good news on that particular front, they’d be trumpeting it in a more direct fashion.

Look, I’m being critical here, and perhaps unfairly. But like many others, I’m a bit baffled by Google’s moves last week around search integration, and I’m looking forward to Google addressing the mounting criticism from not only its competitors, but its fans as well. So far, the company has decided to ignore it – both in its earnings calls, and in my own communications with company representatives. That only leads to speculation that Google is doing this on purpose, to get to critical mass with G+ before, cough cough, apologizing a month or so down the line and “fixing” the approach it’s taken to search integration.

I’m going to be down there soon, talking to key execs in search and, I hope, at Google+. There are always more sides to the story than are apparent as that story develops. Stay tuned.

What Might A Facebook Search Engine Look Like?

By - January 16, 2012

(image) Dialing in from the department of Pure Speculation…

As we all attempt to digest the implications of last week’s Google+ integration, I’ve also been thinking about Facebook’s next moves. There’s been plenty of speculation in the past that Facebook might compete with Google directly – by creating a full web search engine. After all, with the Open Graph and in particular, all those Like buttons, Facebook is getting a pretty good proxy of pages across the web, and indexing those pages in some way might prove pretty useful.

But I don’t think Facebook will create a search engine, at least not in the way we think about search today. For “traditional” web search, Facebook can lean on its partner Microsoft, which has a very good product in Bing. I find it more interesting to think about what “search problem” Facebook might solve in the future that Google simply can’t.

And that problem could be the very one (or opportunity) that Google can’t currently solve for – the same problem that drove Google to integrate Google+ into its main search index: that of personalized search.

As I wrote over the past week, I believe the dominant search paradigm – that of crawling a free and open web, then displaying the best results for any particular query – has been broken by the rise of Facebook on the one hand, and the app economy on the other. Both of these developments are driven by personalization – the rise of “social.”

Both Facebook and the app economy are invisible to Google’s crawlers. To be fair, there are billions of Facebook pages in Google’s index, but it’s near impossible to “organize them and make them universally available” without Facebook’s secret sauce (its social graph and related logged in data). This is what those 2009 negotiations broke down over, after all.

The app economy, on the other hand, is just plain invisible to everyone. Sure, you can go to one of ten or so app stores and search for apps to use, but you sure can’t search apps the way you search, say, a web site. Why? First, the use case of apps, for the most part, is entirely personal, so apps have not been built to be “searchable.” I find this extremely frustrating – why wouldn’t I want to “Google” the hundreds of rides and runs I’ve logged on my GPS app, as one example?

Second, the app economy is invisible to Google because the data use policies of the dominant app universe – Apple – make it nearly impossible to create a navigable link economy between apps, so developers simply don’t do it. And as we all know, without a navigable link economy, “traditional” search breaks down.

Now, this link economy may well be rebuilt in a way that can be crawled, through up and coming standards like HTML5 and Telehash. But it’s going to take a lot of time for the app world to migrate to these standards, and I don’t know that open standards like these will necessarily win. Not when there’s a platform that already exists that can tie them together.

What platform is that, you might ask? Why, Facebook, of course.

Stick with me here. Imagine a world where the majority of app builders integrate with Facebook’s Open Graph, instrumenting your personal data through Facebook such that your data becomes searchable. (If you think that’s crazy, remember how most major companies and app services have already fallen all over themselves to leverage Open Graph.) Then, all that data is hoovered into Facebook’s “search index” and integrated with your personal social graph. Facebook then builds an interface to all your app data, adds in your Facebook social graph data, and then perhaps tosses in a side of Bing so you can have the whole web as a backdrop, should you care to.
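
To make the mechanics of that scenario a bit more concrete, here is a rough sketch in Python (using the requests library) of the plumbing it implies: an app publishing a user’s activity into Facebook’s graph as an Open Graph action. The “fitlog” namespace, the “log_run” action, and the publish_run helper are all invented for illustration – but the general pattern, POSTing an app-defined action for the logged-in user that points at a URL marked up with og: tags, is, as I understand it, how Open Graph publishing is meant to work. Note that none of this is visible to a web crawler; it lives entirely inside Facebook’s graph, which is exactly the point.

    import requests

    GRAPH = "https://graph.facebook.com"

    def publish_run(user_token, run_url):
        # Publish a hypothetical "fitlog:log_run" Open Graph action for the
        # logged-in user. "fitlog" is an invented app namespace; run_url points
        # at a page on the app's own site carrying og: meta tags that describe
        # the run (the "object," in Open Graph terms).
        resp = requests.post(
            GRAPH + "/me/fitlog:log_run",
            params={"run": run_url, "access_token": user_token},
        )
        resp.raise_for_status()
        return resp.json()  # the id of the newly created action

    # Once millions of apps push actions like this into the graph, only Facebook
    # can answer a query like "my longest run last October" -- no crawler ever
    # sees this data.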

Voila – you’ve got yourself a truly personalized new kind of search engine. A Facebook search engine, one that searches your world, apps, Facebook and all.

Stranger things will probably happen. What do you think?

Update: Facebook’s getting one step closer this week…

 

Our Google+ Conundrum

By - January 14, 2012

I’m going to add another Saturday morning sketch to this site, and offer a caveat to you all: I’ve not bounced this idea off many folks, and the seed of it comes from a source who is unreservedly biased about all this. But I thought this worth airing out, so here you have it.

Given that Google+ results are dominating so many SERPs these days, Google is clearly leveraging its power in search to build up Google+. Unless a majority of people start turning SPYW (Search Plus Your World) off, or decide to search in a logged-out way, Google has positioned Google+ as a sort of “mini Internet,” a place where you can find results for a large percentage of your queries. (My source is pretty direct about this: “Google has decided that beating Facebook is worth selling their soul.”)

But to my point. An example of samesaid is the search I did this morning for that Hitler video I posted. Here’s a screenshot of my results:

 

As you can see, the Universal search feature kicked in and put News results at the top. I know that news results won’t get me straight to the video – I want the YouTube or Vimeo page, not a story about the video. So I look to the results below. The next four results are from Google+. Right below the fold is the actual YouTube video. I didn’t see it at first blush.

So I found that video by clicking on someone’s Google+ post about it (see how the first one is purple, and not blue? That’s the one I clicked on). Some dude I don’t know posted it to Google+; I clicked through to his post (gaining Google another pageview), then clicked through the video to YouTube. That’s lame. That’s not a Googley search experience.

But if that’s how the world of Google works now, that means it’s very important that you tend your Google+ pages, so that you rank well in Google search. Google has pretty much gamed its own search engine to ensure Google+ will succeed.

This is what happens when you tell your entire staff that their salaries depend on winning in social.

Now, this presents us all with a conundrum. If a large percentage of people are logged into Google and/or Google+ when they are searching for stuff, that means Google+ pages are going to rank well for those people. Hence, I really have no choice but to play Google’s game and tend to my Google+ page, be I a brand, a person, a small business… Are you getting the picture here? If you decide to NOT play on Google+, you will, in essence, be devalued in Google search, at least for the percentage of people who are logged in whilst using Google.

I dunno. This strikes me as wrong. I’ve spent nearly ten years building this site, Searchblog, and it has tens of thousands of inbound links, six thousand posts, nearly 30,000 comments, etc., etc. But if you are logged into Google+ and search for me, you’re going to get my Google+ profile first.

Seems a bit off. Seems like Google is taking the first click away from me and directing it to a Google service.

Now, if I decide to protest this, and delete my Google+ account, I better pray no one else named John Battelle creates a Google+ account, or they will rank ahead of me. And while Battelle is a pretty unique name, there are actually quite a few of us out there. Imagine if my name was John Kelly? Or Joe Smith?

Yikes. Quite a conundrum.

Again, just sketching on a Saturday morning. It’s a beautiful day, so I think I’ll stop, take a ride, and think a bit more about it before I write any more.

Related:

It’s Not About Search Anymore, It’s About Deals

Hitler Is Pissed About Google+

Google Responds: No, That’s Not How Facebook Deal Went Down (Oh, And I Say: The Search Paradigm Is Broken)

Compete To Death, or Cooperate to Compete?

Twitter Statement on Google+ Integration with Google Search

Search, Plus Your World, As Long As It’s Our World

 

Google Responds: No, That’s Not How Facebook Deal Went Down (Oh, And I Say: The Search Paradigm Is Broken)

By - January 13, 2012

(image) I’ve just been sent an official response from Google to the updated version of my story posted yesterday (Compete To Death, or Cooperate to Compete?). In that story, I reported about 2009 negotiations over incorporation of Facebook data into Google search. I quoted a source familiar with the negotiations on the Facebook side, who told me  “Senior executives at Google insisted that for technical reasons all information would need to be public and available to all,” and “The only reason Facebook has a Bing integration and not a Google integration is that Bing agreed to terms for protecting user privacy that Google would not.”

I’ve now had conversations with a source familiar with Google’s side of the story, and to say the company disagrees with how Facebook characterized the negotiations is to put it mildly. I’ve also spoken to my Facebook source, who has clarified some nuance as well. To get started, here’s the official, on-the-record statement from Rachel Whetstone, SVP Global Communications and Public Affairs:

“We want to set the record straight. In 2009, we were negotiating with Facebook over access to its data, as has been reported. To claim that we couldn’t reach an agreement because Google wanted to make private data publicly available is simply untrue.”

My source familiar with Google’s side of the story goes further, and gave me more detail on why the deal went south, at least from Google’s point of view. According to this source, as part of the deal terms Facebook insisted that Google agree to not use publicly available Facebook information to build out a “social service.” The two sides had already agreed that Google would not use Facebook’s firehose (or private) data to build such a service, my source says.

So what does “publicly available” mean? Well, that’d be Facebook pages that any search engine can crawl – information on Facebook that people *want* search engines to know about. This is compared to the firehose data that was the core asset being discussed between the parties. This firehose data is what Google would need in order to surface personal Facebook pages relevant to you in the context of a search query. (So, for example, if you were my friend on Facebook, and you searched for “Battelle soccer” on Google, then with the proposed deal, you’d see pictures of my kids’ soccer games that I had posted to Facebook).

Apparently, Google believed that Facebook’s demand around public information could be interpreted  as applying to how Google’s own search service was delivered, not to mention how it (or other products) might evolve. Interpretation is always where the devil is in these deals. Who’s to say, after all, that Google’s “social search” is not a “social service”? And Google Pages, Maps, etc. – those are arguably social in nature, or will be in the future.

Google balked at this language, and the deal fell apart. My Google source also disputes the claim that Google balked at being able to technically separate public from private data. Conversely, my Facebook source counters that the real issue of public vs. private had to do with Google’s refusal to honor changes in privacy settings over time – for example, if I deleted those soccer pictures, they should also be deleted from Google’s index. There’s a point where this all devolves to she said/he said, because the deal never happened, and to be honest, there are larger points to make.

So let’s start with this: If Facebook indeed demanded that Google not use publicly available Facebook data, it’s certainly understandable why Google wouldn’t agree to the deal. It may not seem obvious, but there are an awful lot of publicly available Facebook pages and data out there. Starbucks, for example, is more than happy to let anyone see its Facebook page, whether you’re logged in or not. And then there’s all that Facebook Open Graph data out on the public web – tons of sites show Facebook status updates, like counts and so on in a public fashion. In short, asking Google not to leverage that data in anything that might constitute a “social service” is anathema to a company whose claimed mission is to crawl all publicly available information, organize it, and make it available.

It’s one thing to ask that Google not use Facebook’s own social graph and private data to build new social services – after all, the social graph is Facebook’s crown jewels. But it’s quite another thing to ask Google to ignore other public information completely.

From Google’s point of view, Facebook was crippling future products and services that Google might create, which was tantamount to an insurance policy of sorts that Google wouldn’t become a strong competitor, at least not one that leverages public information from Facebook. Google balked. If Facebook’s demand could have been interpreted as also applying to Google’s search results, well, that’s a stone-cold deal killer.

I certainly understand why Facebook might ask for what they did; it’s not crazy. Google might well have responded by narrowing the deal, saying “Fine, you don’t build a search engine, and we won’t build a social network. But we should have the right to create other kinds of social services.” As far as I know, Google didn’t choose to say that. (Microsoft apparently did.) And I think I know why: The two companies realized they were dancing on the head of a pin. Search = social, social = search. They couldn’t figure out a way to tease the two apart. Microsoft has cast its lot with Facebook; Google, not so much.

When high stakes deals fall apart, both sides usually claim the other is at fault, and that certainly seems to be the case here. It’s also the case with the Twitter deal, which I’ve gotten a fair amount of new information about as well. I hope to dig into that in another post. For now, I want to pull back a second and comment on what I think is really going on here, at least from the perspective of a longer view.

Our Cherished Search Paradigm Is Broken (But We Will Fix It….Eventually)

I think what we have here is a clear indication that the search paradigm we’ve operated under for a decade or so is broken. That paradigm stems from Google’s original letter to shareholders in 2004. Remember this line? “Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating.”

In many cases, it’s simply naive to claim Google is unbiased or objective. Google often favors its own properties over others, as Danny points out in Real-Life Examples Of How Google’s “Search Plus” Pushes Google+ Over Relevancy, and as others have also detailed. But there is a reason: if you’re going to show results from all other possible contenders, replete with their associated UI and functional bells and whistles (as Google does with its own Maps, Pages, Plus, etc.), well, it’s nearly impossible now to determine which service is the right answer to a particular person’s query. Not to mention, you need to put a deal in place to get all the functionality of the service. Instead, Google has opted, in many cases, to go with its own stuff.

This is not a new idea, by the way. Yahoo’s been doing it this way from the beginning. The contentious issue is that biasing some results toward Google’s own products runs counter to Google’s founding philosophy.

I have a theory as to why all this is happening, and I don’t entirely blame Google. Back when search wasn’t personalized, Google could defensibly say that one service was better than another because it got more traffic, was linked to more (better PageRank), and so on. Back when everyone got the same results and the web was one homogenous glob of HTML, well, you could claim “this is the best result for the general population.” But personalized search has broken that framework – I lamented this back in 2008 with this post: Search Was Our Social Glue. But That Is Dissolving (more here).

With the rise of Facebook and the app economy, the problem of search has become terribly complicated. If you want to have results from Facebook in your search, well, that search service has to do a deal with Facebook. But what if you want results from your running app (I have hundreds of rides and runs logged on AllSportGPS, for example)? Or Instagram? Or Path, for that matter? Do they all have to do deals with Google and Bing? There are so many unconnected pieces of the Internet now (millions of apps, most of our own Facebook experiences, etc. etc.) that what’s a good personal result for one person is not necessarily good for another. If Google is to stay true to its original mission, it needs a new framework and a massive number of new signals – new glue – to put the pieces back together.
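
To sketch what that new glue would have to do, here is a toy illustration in Python. Everything in it is invented – the weights, the social_affinity and app_signal fields, the example results – and it corresponds to nothing Google actually ships; it simply shows why a personalized ranker needs signals that no single crawler can gather on its own:

    def personalized_score(result, w_rel=0.6, w_soc=0.25, w_app=0.15):
        # Invented weights, for illustration only. If a data source is walled
        # off, its term is simply zero and the ranking quietly falls back to
        # the old one-size-fits-all model.
        return (w_rel * result["relevance"]           # classic signals: links, text match
                + w_soc * result["social_affinity"]   # engagement by the searcher's own graph
                + w_app * result["app_signal"])       # data from a connected app (runs, photos...)

    results = [
        {"url": "http://example.com/news-story",    "relevance": 0.7, "social_affinity": 0.0, "app_signal": 0.0},
        {"url": "http://facebook.com/friends-post", "relevance": 0.4, "social_affinity": 0.9, "app_signal": 0.0},
    ]
    best = max(results, key=personalized_score)
    print(best["url"])  # for this user, the friend's post wins over the "objectively" stronger page

Every non-zero term in that score requires either a commercial deal or genuinely open access to the underlying data – which is the whole fight, in a nutshell.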

There are several ways to resolve this, and in another post, I hope to explore them (one of them, of course, is simply that everyone should just go through Facebook. That’s the vision of Open Graph). But for now, I’m just going to say this: The issues raised by this kerfuffle are far larger than Google vs. Facebook, or Google vs. Twitter. We are in the midst of a major search paradigm shift, and there will be far more tears before it gets resolved. But resolve it must, and resolve it will.

Compete To Death, or Cooperate to Compete?

By - January 11, 2012

(image) **Updated at 3 PM PST with more info about Facebook/Google negotiations…please read to the bottom…**

In today’s business climate, it’s not normal for corporations to cooperate with each other when it comes to sharing core assets. In fact, it’s rather unusual. Even when businesses do share, it’s usually for some ulterior motive, a laying of groundwork for future chess moves which ensure eventual domination over the competition.

Such is the way of business, particularly at the highest and largest levels, such as those now inhabited by top Internet players.

Allow me to posit that this philosophy is going to change over the next few decades, and further, indulge me as I try to apply a new approach to a very present case study: That of Google, Facebook, and Twitter as it relates to Google’s search index and the two social services’ valuable social interaction datasets.

This may take a while, and I will most likely get a fair bit wrong. But it seems worth a shot, so if you feel like settling in for some Thinking Out Loud, please come along.

First, some abridged background. Back in 2009, on the Web 2 Summit stage of all places (yes, I was the emcee), Google, Microsoft, Facebook and Twitter announced a flurry of deals, some of which were worked out in a last minute fury of negotiations. Early in the conference Microsoft announced it would incorporate Twitter and Facebook feeds into its new search engine Bing. Not to be outdone, Google announced a deal with Twitter the next day. However, Google did not announce a deal with Facebook, and the two companies have never come to terms. Meanwhile, Microsoft has continued to deepen its relationship with Facebook data, to the point of viewing that relationship as a key differentiator between Bing and Google search.

All of these deals have business terms, some of them financial, all with limits on how data is used and presented, I would presume. Marissa Mayer of Google told me on the Web 2 stage that there were “financial terms” in Google’s deal with Twitter, but would not give me any details (nor should she have, frankly).

Fast forward to the middle of last year, when the Google/Twitter deal was set to expire. At about the same time as renewal was being negotiated, Google launched Google+, a clear Facebook and Twitter competitor. For reasons that seem in dispute (Google said yesterday Twitter walked away, Twitter has not made a public statement about why things fell apart), the renewal never happened.

And then yesterday, Google incorporated Google+ results into its main search index, sparking a debate in the blogosphere that rages on today – Is Google acting like a monopolist? Does Facebook or Twitter have a “right” to be included in Google results? Why didn’t Google try to negotiate inclusion with its rivals prior to making such a clearly self-serving move?

Google execs, including Chair Eric Schmidt, told SEL’s Danny Sullivan that the company would be happy to talk to both companies to figure out ways to incorporate Twitter and Facebook into Google search, but clearly, those talks could have happened prior to the G+ launch, and they didn’t (or they did, and did not work out – I honestly have no idea). When Danny pointed out that Twitter pages are publicly available, Schmidt demurred, saying that Google prefers to “have a conversation” with a company before using its pages in such a wholesale fashion (er, so did they have one, or not? Anyway…). He has a point (commercial deals are de rigueur), but…that conversation happened last year, and apparently ended without a deal. And around we go…

What’s clear is this: All the companies involved in this great data spat are acting in what they believe to be their own self interest, and the greatest potential loser, at least in the short term, is the search consumer, who will not be seeing “all the world’s information” but rather “that information which is readily available to Google on terms Google prefers.”

The key to that last sentence is the phrase “what they believe to be their own self interest.” Because I think there’s an argument that, in fact, their true self interest is to open up and share with each other.

Am I nuts? Perhaps. But indulge my insanity for a bit.

The Cost of Blinkered Competition

Back in the Web 1.0 days, when I was running The Industry Standard, I had a number of strong competitors. It’s probably fair to say we didn’t like each other much – we competed daily for news stories, advertiser dollars, and the loyalty of readers. The market for information about the tech industry was limited – there were only so many people interested in our products, and only so much time in the day for them to engage with us.

My strategy to win was clear: We’d make the best product, have the best people, and we’d win on quality. When I heard about one of our competitors badmouthing us, I’d try to ignore it – we were winning anyway: We had the dominant market share, the most revenue ($120mm in 2000, with $21mm in EBITDA), and the best product.

Then something strange happened: an emissary from a competitor called and asked for a meeting. Intrigued, I took it, and was surprised by his offer: Let’s put our two companies together. Apart, he argued, we were simply tearing each other down. Together, we could consolidate the market and insure a long term win.

I considered his idea, but for various reasons, we didn’t take him up on it. I felt like we had the dominant position, that his offer was driven by weakness, not intellectual soundness, and I also felt that a combination would require that my shareholders take on too much dilution.

Two years later, both of us were out of business.

Now, I’m not sure it would have mattered, given the great crash of 2001. But what is certainly true is that I could have thought a bit deeper about what this fellow was proposing. Back in the days of print-bound information, we were essentially competing on what were publicly available assets: stories, particularly interpretations and reportage around those stories, and people: writers, editors, ad sales executives, and management. Short of combining companies, there wasn’t really any other way for us to collaborate, or at least, so I thought.

But perhaps there could have been. It’s been more than a decade since that meeting, and I still wonder: perhaps we could have shared back-end resources like operations, publishing contracts, etc. and saved tens of millions of dollars. We’d compete just on how we leveraged those public assets (stories, people). Perhaps we might have survived the wipeout of the dot com crash. We’ll never know. Since those publications died, the blogosphere has claimed the market, and now it’s far larger than the one we lost back in 2001. Of course I started Federated Media to participate in that model, and now FM has as large a revenue run rate as the Industry Standard, across a far more diverse market.

Why am I bringing this up? Because I think there’s a win-win in this whole Google/Facebook/Twitter dust up, but it’s going to take some Thinking Differently to make it happen.

Imagine Twitter and Facebook offer efficient access to all of their “public” pages – those that their users are happy to share with anyone (or even just with their pre-defined “circles”) – to Google under some set of reasonable usage terms. Financial terms would be minimal – perhaps just enough to cover the costs of serving such a large firehose of data to the search giant. Imagine further that Google, in return, agrees to incorporate this user data in a fashion that is fair – i.e., doesn’t favor any service over any other – be it Twitter, Google+, or Facebook.

Now, negotiating what is “fair” will be complicated, and honestly, should be subject to iteration as all parties learn usage patterns. And of course all this should be subject to consumer control – if I want to see only Twitter or Facebook or Google+ results in particular searches (or all results for that matter), I should have that right.

And this leads me to my point. Such a set up, regardless of how painful it might be to get right, would create a shared class of assets that would have to compete at the level of the consumer. In other words, the best service for the query wins.

That’s always been Google’s stated philosophy: the best answer for the question at hand. Danny gets to this point in a piece posted last night (which I just saw as I was writing this): Search Engines Should Be Like Santa From “Miracle On 34th Street”. In it he argues that Google’s great strength has been its pattern of sending people to its competitors. And he upbraids Google for violating that principle with its Google+ integration.

It doesn’t have to be this way. It’s not only Google that’s at fault here. Facebook won’t share with Google on any terms – or, more precisely, Facebook and Google have not been able to come to terms on how to share data (more on that below*) – and Twitter clearly wants some kind of value if it is to share its complete firehose with the search giant. Imagine if all three were to agree on minimal terms, creating a public commons of social data. Yes, that would put Google in an extreme position of trust (not to mention imperil its toddler Google+ service), but covenants can be put in place that allow parties to terminate sharing for clear breaches which demonstrate one party favoring itself over others.

Were such a public commons to be created, then the real competition could start: at the level of how each service interprets that data, and adds value to it in various ways.

Four years ago to the month, I wrote this post: It’s Time For Services on The Web to Compete On More Than Data

In it I said: It’s time that services on the web compete on more than just the data they aggregate….

I think in the end, Facebook will win based on the services it provides for that data. Set the data free, and it will come back to roost wherever it’s best used. And if Facebook doesn’t win that race, well, it’ll lose over time anyway. Such a move is entirely in line with the company’s nascent philosophy, and would be a massively popular move within the ouroborosphere (my name for all things Techmeme).

Compete on service, Facebook, it’s where the world is headed anyway!

Two and a half years ago, as it became clear Facebook’s “nascent philosophy” had changed (and as Twitter rose in stature), I followed up with this post: Google v. Facebook? What We Learn from Twitter. In that post, I said:

 

I think it’s a major strategic mistake to not offer (Facebook’s pages and social graph) to Google (and anyone else that wants to crawl it.) In fact, I’d argue that the right thing to do is to make just about everything possible available to Google to crawl, then sit back and watch while Google struggles with whether or not to “organize it and make it universally available.” A regular damned if you do, damned if you don’t scenario, that….

For an example of what I mean, look no further than Twitter. That service makes every single tweet available as a crawlable resource. And Google certainly is crawling Twitter pages, but the key thing to watch is whether the service is surfacing “superfresh” results when the query merits it. So far, the answer is a definitive NO.

Why?

Well, perhaps I’m being cynical, but I think it’s because Google doesn’t want to push massive value and traffic to Twitter without a business deal in place where it gets to monetize those real time results.

Is that “organizing the world’s information and making it universally available?” Well, no. At least, not yet.

By making all its information available to Google’s crawlers (and fixing its terrible URL structure in the process), Facebook could shine an awfully bright light on this interesting conflict (of) interest.

Thanks to Google’s inclusion of Google+ in its search index, that light has now been shone, and what we’re seeing isn’t all good. I’m of the opinion that a few years from now, each and every one of us will have the expectation and the right to incorporate our own social data into web-wide queries. If the key parties involved in search and social today don’t figure out a way to make that happen, well, they may end up just like The Industry Standard did back in 2001.
But not to worry, someone else will come along, pick up the pieces, and figure out how to play a more cooperative and federated game.
*Update: I’ve heard from a source with knowledge of the Facebook/Google negotiations over integration of Facebook’s data into Google’s search index. This source – who while very credible does come from Facebook’s side of the debate – explained to me that during the 2009 negotiations, Google balked at Facebook’s request that Facebook data be protected in the same fashion as it is in Facebook’s deal with Bing. In essence, Google claimed no way to keep data within circles of friends in the context of a Google search. According to this source: “Senior executives at Google insisted that for technical reasons all information would need to be public and available to all.” But the source goes on to point out that in Google’s own integration of Google+, Google does exactly what it claims it could not do with Facebook data. “The only reason Facebook has a Bing integration and not a Google integration is that Bing agreed to terms for protecting user privacy that Google would not,” this source told me.
Also, and quite interestingly, Google also refused to agree to a clause which stated that Google could not use the data to build its own social network. Now, this is where things can get very dicey. It’s very hard to prove whether or not a company is using the data in particular ways, and had Google agreed to that clause, it might have severely limited its ability to build Google+. What is clear is that Microsoft agreed to Facebook’s terms.

Twitter Statement on Google+ Integration with Google Search

By - January 10, 2012

The integration of Google+ into Google’s native search results has been at the top of Techmeme all day long. And right after I wrote my post on the subject (about four hours ago), Twitter’s general counsel picked up on it, resulting, I believe, in the most RTs of a Searchblog post in the history of the site.

Just now I received an official statement from Twitter on the subject. I didn’t ask for it – I think it must have been sent out to a large list of press and bloggers. Here it is in full:

For years, people have relied on Google to deliver the most relevant results anytime they wanted to find something on the Internet.

Often, they want to know more about world events and breaking news. Twitter has emerged as a vital source of this real-time information, with more than 100 million users sending 250 million Tweets every day on virtually every topic. As we’ve seen time and time again, news breaks first on Twitter; as a result, Twitter accounts and Tweets are often the most relevant results.

We’re concerned that as a result of Google’s changes, finding this information will be much harder for everyone. We think that’s bad for people, publishers, news organizations and Twitter users.

Meanwhile, my aside at the bottom of the post wondering about antitrust has been echoed by any number of well-known commentators. I wonder if Facebook is about to make a statement?

For what it’s worth, I wrote about all this, after a fashion, in this post in 2009:

Google v. Facebook? What We Learn from Twitter.