Book Related Archives | John Battelle's Search Blog

Where Good Ideas Come From: A Tangled Bank

By - January 31, 2012

After pushing my way through a number of difficult but important reads, it was a pleasure to rip through Steven Johnson’s Where Good Ideas Come From: A Natural History of Innovation. I consider Steven a friend and colleague, and that will color my review of his most recent work (it came out in paperback last Fall). In short, I really liked the book. There, now Steven will continue to accept my invitations to lunch…

Steven is the author of seven books, and I admire his approach to writing. He mixes story with essay, and has an elegant, spare style that I hope to emulate in my next book. If What We Hath Wrought is compared to his work, I’ll consider that a win.

Where Good Ideas Come From is an interesting, fast-paced read that outlines the kinds of environments that spawn world-changing ideas. In a sense, this book is the summary of “lessons learned” from several of Johnson’s previous books, each of which goes deep into one really big idea – The Invention of Air, for example, or the discovery of a cure for cholera. It’s also a testament to another of Johnson’s obsessions – the modern city, which he points out is a far more likely seedbed of great ideas than the isolated suburb or cabin-on-a-lake-somewhere.

Johnson draws a parallel between great cities and the open web – both allow for many ideas to bump up against each other, breed, and create new forms. 

Some environments squelch new ideas; some environments seem to breed them effortlessly. The city and the Web have been such engines of innovation because, for complicated historical reasons, they are both environments that are powerfully suited for the creation, diffusion, and adoption of good ideas.

While more than a year old, Where Good Ideas Come From is an important and timely book, because the conclusions Johnson draws are instructive for the digital world we are building right now – will it be one that fosters what Zittrain calls generativity, or are we feeding ecosystems that are closed in nature? Johnson writes:

…openness and connectivity may, in the end, be more valuable to innovation than purely competitive mechanisms. Those patterns of innovation deserve recognition—in part because it’s intrinsically important to understand why good ideas emerge historically, and in part because by embracing these patterns we can build environments that do a better job of nurturing good ideas…

…If there is a single maxim that runs through this book’s arguments, it is that we are often better served by connecting ideas than we are by protecting them. …When one looks at innovation in nature and in culture, environments that build walls around good ideas tend to be less innovative in the long run than more open-ended environments. Good ideas may not want to be free, but they do want to connect, fuse, recombine. They want to reinvent themselves by crossing conceptual borders. They want to complete each other as much as they want to compete.

I couldn’t help but think of the data and commercial restrictions imposed by Facebook and Apple as I read those words. As I’ve written over and over on this site, I’m dismayed by the world we’re building inside Apple’s “appworld,” on the one hand, and by the trend toward planting our personal and corporate taproots too deeply in the soils of Facebook, on the other. Johnson surveys centuries of important, world-changing ideas, often relating compelling personal narratives on the way to explaining how those ideas came to be – not through closed, corporate R&D labs, but through unexpected collisions between passions, hobbies, coffee house conversations, and seeming coincidence. If you’re ever stuck, Johnson advises, go outside and bump into things for a while. I couldn’t agree more.

One concept Johnson elucidates is the “adjacent possible,” a theory attributed to biologist Stuart Kauffman. In short, the adjacent possible is the space inhabited by “what could be” based on what currently is. In biology and chemistry, for example, it’s the potential for various combinations of molecules to build self-replicating proteins. When that occurs, new adjacent possibilities open up, to the point of an explosion in life and order.

Johnson applies this theory to ideas, deftly demonstrating how Darwin’s fascination with the creation of coral reefs led – over years – to what is perhaps the most powerful idea of modernity: evolution. He concludes that while most of us understand Darwin’s great insight as mostly about “survival of the fittest,” perhaps its greatest contribution is how it has “revealed the collaborative and connective forces at work in the natural world.” Darwin’s famous metaphor for this insight is the tangled bank:

It is interesting to contemplate a tangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent upon each other in so complex a manner, have all been produced by laws acting around us. . .

Johnson also extols the concept of “liquid networks” – environments where information flows freely between many minds – and that of “slow hunches,” ideas that develop over long periods of time, as well as the importance of noise, serendipity, and error in the development of good ideas. He explores “exaptation” – the repurposing of one idea for another use – and the concept of “platforms” that allow each of these forces, from liquid networks to serendipity and exaptation, to blossom (Twitter is cited as such a platform).

Johnson concludes:

Ideas rise in crowds, as Poincaré said. They rise in liquid networks where connection is valued more than protection. So if we want to build environments that generate good ideas—whether those environments are in schools or corporations or governments or our own personal lives—we need to keep that history in mind, and not fall back on the easy assumptions that competitive markets are the only reliable source of good ideas. Yes, the market has been a great engine of innovation. But so has the reef.

Amen, I say. I look forward to our great tech companies – Apple and Facebook amongst them – becoming more tangled bank than carefully pruned garden.

A nice endcap to the book is a survey Johnson conducted of great ideas across history. He plots each idea on a two-by-two grid: whether the idea was generated by an individual or a network of individuals on one axis, and whether it emerged in a commercial or non-commercial environment on the other. The results are pretty clear: ideas thrive in “non-market/networked” environments.

Johnson's chart of major ideas emerging during the 19th and 20th centuries

This doesn’t mean those ideas don’t become the basis for commerce – quite the opposite in fact. But this is a book about how good ideas are created, not how they might be exploited. And we’d be well advised to pay attention to that as we consider how we organize our corporations, our governments, and ourselves – we have some stubborn problems to solve, and we’ll need a lot of good ideas if we’re going to solve them.

Highly recommended.

Next up on the reading list: Inside Apple: How America’s Most Admired–and Secretive–Company Really Works by Adam Lashinsky, and Republic, Lost: How Money Corrupts Congress–and a Plan to Stop It, by Larry Lessig.

####

Other works I’ve reviewed:

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (my review)

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

 


What Happens When Sharing Is Turned Off? People Don’t Dance.

By - January 30, 2012

One of only two photos to emerge from last night's Wilco concert, image Eric Henegen

Last night my wife and I did something quite rare – we went to a concert on a Sunday night, in San Francisco, with three other couples (Wilco, playing at The Warfield). If you don’t have kids and don’t live in the suburbs, you probably think we’re pretty lame, and I suppose compared to city dwellers, we most certainly are. But there you have it.

So why am I telling you about it? Because something odd happened at the show: Wilco enforced a “no smartphone” rule. Apparently lead singer Jeff Tweedy hates looking out at the audience and seeing folks waving lit phones back at him. Members of the Warfield staff told me they didn’t like the policy, but they enforced it – quite strictly, I might add. It created a weird vibe – folks didn’t even take out their phones, for fear they might be kicked out for taking a picture of the concert. (A couple of intrepid souls did sneak a pic in, as you can see at left…)

And… no one danced, not till the very end, anyway. I’ve seen Wilco a few times, and I’ve never seen a more, well, motionless crowd. But more on that later.

Now, I have something of a history when it comes to smart phones and concerts. Back in 2008 I was a founding partner in a new kind of social music experiment we called “CrowdFire.” In my post explaining the idea, I wrote:

Over the course of several brainstorming sessions… an idea began to take shape based on a single insight: personal media is changing how we all experience music. When I was at Bonnaroo in 2007, everyone there had a cell phone with a camera. Or a Flip. Or a digital camera. And when an amazing moment occurred, more folks held up their digital devices than lighters. At Bonnaroo, I took a picture that nails it for me – the image at left. A woman capturing an incredible personal memory of an incredible shared experience (in this case, it was Metallica literally blowing people’s minds), the three screens reflecting the integration of physical, personal, and shared experiences. That image informed our logo, as you can see (below).

So – where did all those experiences go (Searchblog readers, of course, know I’ve been thinking about this for a while)? What could be done with them if they were all put together in one place, at one time, turned into a great big feed by a smart platform that everyone could access? In short, what might happen if someone built a platform to let the crowd – the audience – upload their experiences of the music to a great big database, then mix, mash, and meld them into something utterly new?

Thanks to partners like Microsoft, Intel, SuperFly, Federated Media and scores of individuals, CrowdFire actually happened at Outside Lands, both in 2008 and in 2009. It was a massive effort – the first year literally broke AT&T’s network. But it was clear we were onto something. People want to capture and share the experience of being at a live concert, and the smart phone was clearly how they were now doing it.

It was the start of something – brainstorming with several of my friends prior to CrowdFire’s birth, we imagined a world where every shareable experience became data that could be recombined to create fungible alternate realities. Heady stuff, stuff that is still impossible, but I feel will eventually become our reality as we careen toward a future of big data and big platforms.

Since those early days, the idea of CrowdFire has certainly caught on. In early 2008, we had to build the whole platform from scratch, but now, folks use services like Instagram, Twitter, Facebook, and Foursquare to share their experiences. Many artists share back, sending out photos and tweets from on stage. Most major festivals and promoters have some kind of fan photo/input service that they promote as well. CrowdFire was a great idea, and maybe, had I not been overwhelmed with running FM, we might have turned it into a real company/service that could have integrated all this output and created something big in the world. But it was a bit ahead of its time.

What has happened since that first Outside Lands is that at every concert I’ve attended, I’ve noticed the crowd’s increasing connection to their smart phones – taking pictures, group texting, tweeting, and sharing the moments with their extended networks across any number of social services. It’s hard to find an experience more social than a big concert, and the thousands of constantly lit smartphone screens are a testament to that fact, as are the constant streams of photos and status updates coming out of nearly every show I’ve seen, or followed enviously online.

Which brings me back to last night. I was unaware of the policy, so as Wilco opened at the sold-out Warfield, something felt off to me. Here were two thousand San Francisco hipsters, all turned attentively toward the stage – but most of them had their hands in their pockets! As the band went into the impossible-not-to-move-to “Art of Almost” and “I Might,” I started wondering what was up – why weren’t people at least swaying?! The music was extraordinary, the sound system perfectly tuned. But everyone seemed very intent on…well…being intent. They stared forward, hands in pockets, nodded their heads a bit, but no one danced. It was a rather odd vibe – as if the crowd had been admonished not to be too…expressive.

Then it hit me. Nobody had their phone out. I turned to a security guard and asked why no one was holding up a phone. That’s when I learned of Wilco’s policy.

It seemed to me that the rule had the unintended consequence of muting the crowd’s ability to connect to the joy of the moment. Odd, that. We’re so connected to these devices and their ability to reflect our own sense of self that when we’re deprived of them, we feel somehow less…human.

My first reaction was “Well, this sucks,” but on second thought, I got why Tweedy wanted his audience to focus on the experience in the room, instead of watching and sharing it through the screens of their smartphones. By the encore, many people were dancing – they had loosened up. But in the end, I’m not sure I agree with Wilco – they’re fighting the wrong battle (and losing extremely valuable word of mouth in the process, but that’s another post).

There are essentially two reasons to hold a phone up at a show. First, to capture a memory for yourself, a reminder of the moment you’re enjoying. And second, to share that moment with someone – to express your emotions socially. Both seem perfectly legitimate to me. (I’m not down with doing email or taking a call during a show, I’ll admit.)

But the smartphone isn’t a perfect device, as we all know. It forces the world into a tiny screen. It runs out of battery and bandwidth. It distracts us from the world around us. There are too many steps – too much friction – between capturing the things we are experiencing right now and sharing those things with the people we care about.

But I sense that the sea of smart phones lit up at concerts is a temporary phenomenon. The integration of technology, sharing, and social into our physical world, on the other hand, well that ain’t going away. In the future, it’s going to be much harder to enforce policies like Wilco’s, because the phone will be integrated into our clothing, our jewelry, our eyeglasses, and possibly even ourselves. When that happens – when I can take a picture through my glasses, preview it, then send it to Instagram using gestures from my fingers, or eyeblinks, or a wrinkle of my nose – when technology becomes truly magical – asking people to turn it off is going to be the equivalent of asking them not to dance – to not express their joy at being in the moment.

And why would anyone want to do that?

The Future of War (From Jan., 1993 to the Present)

By - January 24, 2012

(image is a shot of my copy of the first Wired magazine, signed by our founding team)

I just read this NYT piece on the United States’ approach to unmanned warfare: Do Drones Undermine Democracy? From it:

There is not a single new manned combat aircraft under research and development at any major Western aerospace company, and the Air Force is training more operators of unmanned aerial systems than fighter and bomber pilots combined. In 2011, unmanned systems carried out strikes from Afghanistan to Yemen. The most notable of these continuing operations is the not-so-covert war in Pakistan, where the United States has carried out more than 300 drone strikes since 2004.

Yet this operation has never been debated in Congress; more than seven years after it began, there has not even been a single vote for or against it. This campaign is not carried out by the Air Force; it is being conducted by the C.I.A. This shift affects everything from the strategy that guides it to the individuals who oversee it (civilian political appointees) and the lawyers who advise them (civilians rather than military officers).

It also affects how we and our politicians view such operations. President Obama’s decision to send a small, brave Navy Seal team into Pakistan for 40 minutes was described by one of his advisers as “the gutsiest call of any president in recent history.” Yet few even talk about the decision to carry out more than 300 drone strikes in the very same country.

Read the whole piece. Really, read it. If any article in the past year or so does a better job of displaying how what we’ve built with technology is changing the essence of our humanity, I’d like to read it.

For me, this was a pretty powerful reminder. Why? Because we put the very same idea on display in the very first cover story of Wired, nearly 20 years ago. Written by Bruce Sterling, whose star has only grown brighter in the past two decades, it predicts the future of war with eerie accuracy. In the article, Sterling describes “modern Nintendo training for modern Nintendo war.” Sure, if he were all-seeing, he might have said Xbox, but still…here are some quotes from nearly 20 years ago:

The omniscient eye of computer surveillance can now dwell on the extremes of battle like a CAT scan detailing a tumor in a human skull. This is virtual reality as a new way of knowledge: a new and terrible kind of transcendent military power.

…(Military planners) want a pool of contractors and a hefty cadre of trained civilian talent that they can draw from at need. They want professional Simulation Battle Masters. Simulation system operators. Simulation site managers. Logisticians. Software maintenance people. Digital cartographers. CAD-CAM designers. Graphic designers.

(Ed: Like my son playing Call of Duty?)

And it wouldn’t break their hearts if the American entertainment industry picked up on their interactive simulation network technology, or if some smart civilian started adapting these open-architecture, virtual-reality network protocols that the military just developed. The cable TV industry, say. Or telephone companies running Distributed Simulation on fiber-to-the-curb. Or maybe some far-sighted commercial computer-networking service. It’s what the military likes to call the “purple dragon” angle. Distributed Simulation technology doesn’t have to stop at tanks and aircraft, you see. Why not simulate something swell and nifty for civilian Joe and Jane Sixpack and the kids? Why not purple dragons?

(Ed: Skyrim, anyone?!)

Can governments really exercise national military power – kick ass, kill people – merely by using some big amps and some color monitors and some keyboards, and a bunch of other namby-pamby sci-fi “holodeck” stuff?

The answer is yes.

Say you are in an army attempting to resist the United States. You have big tanks around you, and ferocious artillery, and a gun in your hands. And you are on the march.

Then high-explosive metal begins to rain upon you from a clear sky. Everything around you that emits heat, everything around you with an engine in it, begins to spontaneously and violently explode. You do not see the eyes that see you. You cannot know where the explosives are coming from: sky-colored Stealths invisible to radar, offshore naval batteries miles away, whip-fast and whip-smart subsonic cruise missiles, or rapid-fire rocket batteries on low-flying attack helicopters just below your horizon. It doesn’t matter which of these weapons is destroying your army – you don’t know, and you won’t be told, either. You will just watch your army explode.

Eventually, it will dawn on you that the only reason you, yourself, are still alive, still standing there unpierced and unlacerated, is because you are being deliberately spared. That is when you will decide to surrender. And you will surrender. After you give up, you might come within actual physical sight of an American soldier.

Eventually you will be allowed to go home. To your home town. Where the ligaments of your nation’s infrastructure have been severed with terrible precision. You will have no bridges, no telephones, no power plants, no street lights, no traffic lights, no working runways, no computer networks, and no defense ministry, of course. You have aroused the wrath of the United States. You will be taking ferries in the dark for a long time.

Now imagine two armies, two strategically assisted, cyberspace-trained, post-industrial, panoptic ninja armies, going head-to-head. What on earth would that look like? A “conventional” war, a “non-nuclear” war, but a true War in the Age of Intelligent Machines, analyzed by nanoseconds to the last square micron.

Who would survive? And what would be left of them?

Who indeed.

The Singularity Is Weird

By - January 23, 2012

It’s been a while since I’ve posted a book review, but that doesn’t mean I haven’t been reading. I finished two tomes over the past couple of weeks: Ray Kurzweil’s The Singularity Is Near and Steven Johnson’s Where Good Ideas Come From. I’ll focus on Kurzweil’s opus in this post.

Given what I hope to do in What We Hath Wrought, I simply had to read Singularity. I’ll admit I’ve been avoiding doing so (it’s nearly six years old now) mainly for one reason: The premise (as I understood it) kind of turns me off, and I’d heard from various folks in the industry that the book’s author was a bit, er, strident when it came to his points of view. I had read many reviews of the book (some mixed), and I figured I knew enough to get by.

I was wrong. The Singularity Is Near is not an easy book to read (it’s got a lot of deep and loosely connected science, and the writing could really use a few more passes by a structural editor), but it is an important one to read. As Kevin Kelly said in What Technology Wants, Kurzweil has written a book that will be cited over and over again as our culture attempts to sort out its future relationship to technology, policy, and yes, to God.

I think perhaps the “weirdness” vibe of Kurzweil’s work relates, in the end, to his rather messianic tone – he’s not afraid to call himself a “Singularitarian” and to claim this philosophy as his religion. I don’t know about you, but I’m wary of anyone who invents a new religion and then proclaims himself its leader.

That’s not to say Kurzweil doesn’t have a point or two. The main argument of the book is that technology is moving far faster than we realize, and that its exponential progress will surprise us all – within about thirty years, we’ll have the ability not only to compute our way past most of the intractable problems of humanity, but also to transcend our bodies, download our minds, and reach immortality.

Or, a Christian might argue, we could just wait for the rapture. My problem with this book is that it feels about the same in terms of faith.

But then again, faith is one of those Very Hard Topics that most of us struggle with. And if you take this book at face value, it will force you to address that question. Which to me, makes it a worthy read.

For example, Kurzweil has faith that, as machines get smarter than humans, we’ll essentially merge with machines, creating a new form of humanity. Our current form is merely a step along the way to the next level of evolution – a level where we merge our technos with our bios, so to speak. Put another way, compared to what we’re about to become, we’re the equivalent of Homo Erectus right about now.

It’s a rather compelling argument, but a bit hard to swallow, for many reasons. We’re rather used to evolution taking a very long time – hundreds of generations, at the very least. But Kurzweil is predicting all this will happen in the next one or two generations – and should that occur, I’m pretty sure far more minds will be blown than merged.

And Kurzweil has a knack for taking the provable tropes of technology – Moore’s Law, for example – and applying them to all manner of things: human intelligence, biology, and, well, rocks (Kurzweil calculates the computing power of a rock in one passage). I’m in no way qualified to say whether it’s fair to directly apply lessons learned from technology’s rise to all things human, but I can say it feels a bit off. Like perhaps he’s missing a high-order bit along the way.

Of course, that could just be me clinging to my narrow-minded and entitled sense of Humanity As It’s Currently Understood. Now that I’ve read Kurzweil’s book, I’m far more aware of my own limitations, philosophically as well as computationally. And for that, I’m genuinely grateful.


Facebook Coalition To Google: Don’t Be Evil, Focus On The User

By -

Last week I spent an afternoon down at Facebook, as I mentioned here. While there I met with Blake Ross, Director of Product (and well known in web circles as one of the creators of Firefox). Talk naturally turned to the implications of Google’s controversial integration of Google+ into its search results – a move that must both terrify Facebook (OMG, Google is gunning for us!) and delight it (Holy cow, Google is breaking its core promise to its users!).

Turns out Ross had been quite busy the previous weekend, and he had a little surprise to show me. It was a simple hack, he said, some code he had thrown together in response to the whole Google+ tempest. But there was most certainly a gleam in his eye as he brought up a Chrome browser window (Google’s own product, he reminded me).

Blake had installed a bookmarklet in his browser, one he had titled – in a nod to Google’s informal motto – “Don’t be evil.” For those of you who aren’t web geeks (I had to remind myself as well), a bookmarklet is “designed to add one-click functionality to a browser or web page. When clicked, a bookmarklet performs some function, one of a wide variety such as a search query or data extraction.”

When engaged, this “Don’t be evil” bookmarklet did indeed do one simple thing: It turned back the hands of time, and made Google work the way it did before the integration of Google+ earlier this month.

It was a very elegant hack, more thoughtful than the one or two I had seen before, which simply stripped all references to Google+ from the results. This one went much further, weaving together a number of Google’s own tools – including its “rich snippet” webmaster tool and its own organic search listings – to re-order not only the search results themselves, but also the promotional Google+ boxes on the right side of the page, and the “typeahead” suggestions that now feature only Google+ accounts (see the example below: the first a search on my name using “normal” Google, the second using the bookmarklet).
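The actual Focus On The User source isn’t reproduced here, but the mechanics are worth a sketch. A bookmarklet is just a javascript: URL saved as a bookmark, and the core of this one is a ranking trick: let Google’s own organic results decide which social profile deserves the promotional slot, rather than a hardcoded Google+ entry. The function name and result shape below are my own illustrative assumptions, not the real code:

```javascript
// Hypothetical sketch of the bookmarklet's core idea (not the actual
// Focus On The User source): scan the organic results in order -- that
// is, in Google's own relevance ranking -- and return the first hit
// from ANY social network, instead of a hardcoded Google+ profile.
function pickSocialResult(organicResults) {
  const socialHosts = ['plus.google.com', 'twitter.com', 'facebook.com'];
  return organicResults.find(function (result) {
    const hostname = new URL(result.url).hostname;
    return socialHosts.some(function (host) {
      return hostname === host || hostname.endsWith('.' + host);
    });
  }) || null; // no social profile ranked organically at all
}

// In a real bookmarklet, logic like this is wrapped in a javascript:
// URL, e.g. javascript:(function(){ /* ... */ })();, and the winning
// result is swapped into the page's promotional box via the DOM.
```

If, say, a Twitter profile outranks a Google+ page in Google’s own organic listings, the Twitter profile wins the box – the same algorithm, just applied consistently.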

After Blake showed me his work, we had a lively discussion about the implications of Facebook actually releasing such a tool. I mean, it’s one thing for a lone hacktivist to do this, it’s quite another for a member of the Internet Big Five to publicly call Google out. Facebook would need to vet this with legal, with management (this clearly had to pass muster with Mark Zuckerberg), and, I was told, Facebook wanted to reach out to others – such as Twitter – and get their input as well.

Because of all this, I had to agree to keep Blake’s weekend hack private until Facebook figured out whether (and how) it would release Ross’s work.

Today, the hack goes public. It’s changed somewhat – it now resides at a site called “Focus On The User” and credit is given to engineers at Facebook, Twitter, and Myspace, but the basic implication is there: This is a tool meant to directly expose Google’s recent moves with Google+ as biased, hardcoded, and against Google’s core philosophy (which besides “don’t be evil,” has always been about “focusing on the user”).

Now, this wasn’t what I meant last week when I asked what a Facebook search engine might look like, but one can be very sure: this is certainly how Facebook and many others want Google to look once again.

From the site’s overview:

We wanted to see how much better social search could be for consumers if Google chose to use all of the information already in its index. We think the results speak for themselves. Specifically, we created a bookmarklet that uses Google’s own relevance measure—the ranking of their organic search results—to determine what social content should appear in the areas where Google+ results are currently hardcoded. That includes the box on the right; the typeahead; and the indent under the first result for brand searches like “Macy’s” or “New York Times”.

All of the information in this demo comes from Google itself, and all of the ranking decisions are made by Google’s own algorithms. No other services, APIs or proprietary data stores are accessed.

Facebook released a video explaining how the hack works, including some rather devastating examples (be sure to watch the AT&T example at minute seven, and a search for my name as well), and it has open sourced the codebase. The video teasingly invites Google to use the code should it care to (er…not gonna happen).


It’d be interesting if millions of people adopted the tool, though I don’t think that’s the point. A story such as this is tailor-made for the Techmeme leaderboard, to be sure, and will no doubt be the talk of the Valley today. By tonight, the story will most likely go national, and that can’t help Google’s image. And I’m quite sure the folks at Facebook, Twitter, and others (think LinkedIn, Yahoo, etc.) are making sure word of this exemplar reaches the right folks at the Federal Trade Commission, the Department of Justice, Congress, and government agencies around the world.

Not to mention, people in the Valley care, deeply, about where they work. There are scores of former Google execs now working at Twitter, Facebook, and elsewhere. Many are dismayed by Google’s recent moves, and believe that inside Google, plenty of folks aren’t sleeping well because of their beloved company’s single-minded focus on Google+. “Focus On The User” is a well-timed poke in the eye, a slap to the conscience of a company that has always claimed to be guided by higher principles, and an elegant hack, sure to become legend in the ongoing battle of the Big Five.

As I’ve said before, I’m planning on spending some time with folks at Google in the coming weeks. I’m eager to understand their point of view. Certainly they are playing a longer-term game here – and seem willing, at present, to take the criticism and not respond to the chorus of complaints. Should Google change that stance, I’ll let you know.

Related:

What Might A Facebook Search Engine Look Like?

Google+: Now Serving 90 Million. But…Where’s the Engagement Data!

Our Google+ Conundrum

It’s Not About Search Anymore, It’s About Deals

Hitler Is Pissed About Google+

Google Responds: No, That’s Not How Facebook Deal Went Down (Oh, And I Say: The Search Paradigm Is Broken)

Compete To Death, or Cooperate to Compete?

Twitter Statement on Google+ Integration with Google Search

Search, Plus Your World, As Long As It’s Our World

On “The Corporation,” the Film

By - January 20, 2012

If you read my Predictions for 2012, you’ll recall that #6 was “The Corporation” Becomes A Central Societal Question Mark.

We aren’t very far into the year, and signs of this coming true are all around. The “Occupy” movement seems to have found a central theme for 2012 in overturning “the corporation as a person,” and some legislators are supporting that concept.

We’ll see if this goes anywhere, but I wanted to note, as I failed to do in my prediction post, the role that “The Corporation” played in my thinking. I finally watched this 2003 documentary over the holidays. Its promoters still maintain an ongoing community here, and it doesn’t take long to determine that this film has a very strong, classically liberal point of view about the role corporations play in our society.

If you can manage the film’s rather heavy-handed approach to the topic, you’ll learn a lot about how we got to the point we’re at with the Citizens United case. Obviously the film was made well before that case, but it certainly foreshadowed it. I certainly recommend it to anyone who wants the backstory – with a healthy side of scare tactics – of the corporation’s rise in American society.

My next review will be Ray Kurzweil’s The Singularity Is Near, a 2005 book I finished a few weeks ago. I’m currently reading Steven Johnson’s Where Good Ideas Come From: A Natural History of Innovation, which is a pleasure.

Other books I’ve reviewed:

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

What Might A Facebook Search Engine Look Like?

By - January 16, 2012

(image) Dialing in from the department of Pure Speculation…

As we all attempt to digest the implications of last week’s Google+ integration, I’ve also been thinking about Facebook’s next moves. There’s been plenty of speculation in the past that Facebook might compete with Google directly – by creating a full web search engine. After all, with the Open Graph and, in particular, all those Like buttons, Facebook is getting a pretty good proxy of pages across the web, and indexing those pages in some way might prove pretty useful.

But I don’t think Facebook will create a search engine, at least not in the way we think about search today. For “traditional” web search, Facebook can lean on its partner Microsoft, which has a very good product in Bing. I find it more interesting to think about what “search problem” Facebook might solve in the future that Google simply can’t.

And that problem could be the very one Google currently can’t solve – the same problem that drove Google to integrate Google+ into its main search index: that of personalized search.

As I wrote over the past week, I believe the dominant search paradigm – that of crawling a free and open web, then displaying the best results for any particular query – has been broken by the rise of Facebook on the one hand, and the app economy on the other. Both of these developments are driven by personalization – the rise of “social.”

Both Facebook and the app economy are invisible to Google’s crawlers. To be fair, there are billions of Facebook pages in Google’s index, but it’s near impossible to “organize them and make them universally available” without Facebook’s secret sauce (its social graph and related logged in data). This is what those 2009 negotiations broke down over, after all.

The app economy, on the other hand, is just plain invisible to anyone. Sure, you can go to one of ten or so app stores and search for apps to use, but you sure can’t search apps the way you search, say, a web site. Why? First, the use case of apps, for the most part, is entirely personal, so apps have not been built to be “searchable.” I find this extremely frustrating, because why wouldn’t I want to “Google” the hundreds of rides and runs I’ve logged on my GPS app, as one example?

Second, the app economy is invisible to Google because the data use policies of the dominant app platform – Apple – make it nearly impossible to create a navigable link economy between apps, so developers simply don’t do it. And as we all know, without a navigable link economy, “traditional” search breaks down.

Now, this link economy may well be rebuilt in a way that can be crawled, through up and coming standards like HTML5 and Telehash. But it’s going to take a lot of time for the app world to migrate to these standards, and I don’t know that open standards like these will necessarily win. Not when there’s a platform that already exists that can tie them together.

What platform is that, you might ask? Why, Facebook, of course.

Stick with me here. Imagine a world where the majority of app builders integrate with Facebook’s Open Graph, instrumenting your personal data through Facebook such that your data becomes searchable. (If you think that’s crazy, remember how most major companies and app services have already fallen all over themselves to leverage Open Graph.) Then, all that data is hoovered into Facebook’s “search index” and integrated with your personal social graph. Facebook then builds an interface to all your app data, adds in your Facebook social graph data, and then perhaps tosses in a side of Bing so you can have the whole web as a backdrop, should you care to.

Voila – you’ve got yourself a truly personalized new kind of search engine. A Facebook search engine, one that searches your world, apps, Facebook and all.
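If such a thing were built, its skeleton might look like this: one query fanned out across personal app data, the social graph, and a generic web backdrop (Bing, in this scenario), then merged. A toy Python sketch, with all names and data invented for illustration:

```python
# Toy sketch of a "searches your world" engine: personal app data
# first, social-graph content second, the general web as a backdrop.
# Everything here (data, function names) is hypothetical.

def personalized_search(query, app_index, social_index, web_search):
    """Merge personal app hits and social-graph hits, then append
    general web results as a backdrop."""
    q = query.lower()
    hits = [("app", item) for item in app_index if q in item.lower()]
    hits += [("social", post) for post in social_index if q in post.lower()]
    hits += [("web", page) for page in web_search(query)]
    return hits

my_runs = ["Morning run, Golden Gate loop", "Track workout, 8x400m"]
friend_posts = ["Great run with John this morning!", "New cafe downtown"]
web_backdrop = lambda q: ["runnersworld.com/training"]  # stand-in for Bing

results = personalized_search("run", my_runs, friend_posts, web_backdrop)
```

The interesting design question is the one search engines already wrestle with: how to interleave those three source types, rather than simply stacking them as this sketch does.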

Stranger things will probably happen. What do you think?

Update: Facebook’s getting one step closer this week…


It’s Not About Search Anymore, It’s About Deals

By - January 14, 2012

As in, who gets the best deal, why didn’t that deal go down, how do I get a deal, what should the deal terms be?

This is of course in the air given the whole Google+ fracas, but it’s part of a larger framework I’m thinking through and hope to write about. On the issue of “deals,” however, a little sketching out loud seems worthwhile.

Go read this piece: Facebook+Spotify: An ‘Unfair, Insider, Anti-Competitive’ Relationship…

It’s a common lament: A small developer who feels boxed out by whoever got the sweet deal. In this case, it’s on Facebook, but we all know it happens inside the Apple store as well (whoever gets top billing, gets sales).  Closed ecosystems controlled by one company create this dynamic. There’s only so much real estate, and the owner of the land gets to determine the most profitable use of it.

Google now appears to be acting the same way, cutting Google+ a “deal” so to speak, giving it the best real estate for all manner of search queries. That’s not how search was supposed to work. Search was supposed to reflect the ongoing conversation happening across all aspects of the Internet. If you were that small developer, you worked hard to get your service noticed on the web, and as it picked up a following, search would notice, start raising your profile in search results, and a virtuous loop began. Is that concept now dead?

Search isn’t supposed to be about cutting a deal to get your company’s wares to the top of relevant searches. In my reporting over the past week, most of my source conversations have been about failed deals – between Google and Facebook, or Google and Twitter. But search is supposed to be about showing the best results to consumers based on objective (or at least defensible and understandable) parameters, parameters *unrelated to the search engine itself.*

With Google Search Plus Your World (shortened by many to SPYW, which is just laughably bad as an acronym), it’s rather hard to tell the two apart anymore. When I wrote last year that Google = Google+, I meant it from a brand perspective. I didn’t realize how literal it’s become. Because with SPYW, all I’m getting is Google+ at the top of my results. I know I can turn SPYW off, and I probably will. Or, I can bail on Google+ altogether. But there is a real conundrum in doing so – more on that in my next post.

Some are arguing that search is no longer about results anymore, and that for years search has pretty much been about paid inclusion anyway (either paid through SEO,  or paid through ads, which increasingly don’t look like ads). That now, Google is focusing entirely on getting you an answer, and surfacing that answer right there on the results page. Perhaps the “right answer” is best found through cutting deals.

But I hope not. Because for me, search is a journey, not an answer.

This SPYW story has raised so many questions, it’s rather hard to sort through them all. I guess I’ll just keep writing till I feel like the writing’s done…

Related:

Hitler Is Pissed About Google+

Google Responds: No, That’s Not How Facebook Deal Went Down (Oh, And I Say: The Search Paradigm Is Broken)

Compete To Death, or Cooperate to Compete?

Twitter Statement on Google+ Integration with Google Search

Search, Plus Your World, As Long As It’s Our World

Google Responds: No, That’s Not How Facebook Deal Went Down (Oh, And I Say: The Search Paradigm Is Broken)

By - January 13, 2012

(image) I’ve just been sent an official response from Google to the updated version of my story posted yesterday (Compete To Death, or Cooperate to Compete?). In that story, I reported about 2009 negotiations over incorporation of Facebook data into Google search. I quoted a source familiar with the negotiations on the Facebook side, who told me  “Senior executives at Google insisted that for technical reasons all information would need to be public and available to all,” and “The only reason Facebook has a Bing integration and not a Google integration is that Bing agreed to terms for protecting user privacy that Google would not.”

I’ve now had conversations with a source familiar with Google’s side of the story, and to say the company disagrees with how Facebook characterized the negotiations is to put it mildly. I’ve also spoken to my Facebook source, who has clarified some nuance as well. To get started, here’s the official, on the record statement, from Rachel Whetstone, SVP Global Communications and Public Affairs:

“We want to set the record straight. In 2009, we were negotiating with Facebook over access to its data, as has been reported. To claim that we couldn’t reach an agreement because Google wanted to make private data publicly available is simply untrue.”

My source familiar with Google’s side of the story goes further, and gave me more detail on why the deal went south, at least from Google’s point of view. According to this source, as part of the deal terms Facebook insisted that Google agree to not use publicly available Facebook information to build out a “social service.” The two sides had already agreed that Google would not use Facebook’s firehose (or private) data to build such a service, my source says.

So what does “publicly available” mean? Well, that’d be Facebook pages that any search engine can crawl – information on Facebook that people *want* search engines to know about. This is compared to the firehose data that was the core asset being discussed between the parties. This firehose data is what Google would need in order to surface personal Facebook pages relevant to you in the context of a search query. (So, for example, if you were my friend on Facebook, and you searched for “Battelle soccer” on Google, then with the proposed deal, you’d see pictures of my kids’ soccer games that I had posted to Facebook).

Apparently, Google believed that Facebook’s demand around public information could be interpreted  as applying to how Google’s own search service was delivered, not to mention how it (or other products) might evolve. Interpretation is always where the devil is in these deals. Who’s to say, after all, that Google’s “social search” is not a “social service”? And Google Pages, Maps, etc. – those are arguably social in nature, or will be in the future.

Google balked at this language, and the deal fell apart. My Google source also disputes the claim that Google balked at being able to technically separate public from private data. For his part, my Facebook source counters that the real issue of public vs. private had to do with Google’s refusal to honor changes in privacy settings over time – for example, if I deleted those soccer pictures, they should also be deleted from Google’s index. There’s a point where this all devolves to she said/he said, because the deal never happened, and to be honest, there are larger points to make.
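The privacy dispute boils down to a mechanism: honoring settings changes means an index alone isn’t enough – every candidate result has to be re-checked against current permissions at query time. A hypothetical sketch of that check, with invented identifiers and data:

```python
# Sketch of query-time permission checking: the index may contain
# stale candidates, so each one is re-verified against *current*
# access settings. A missing ACL entry means the item was deleted.
# All IDs and data here are invented.

def visible_results(viewer, candidates, current_acl):
    """Keep only candidates the viewer may still see right now."""
    out = []
    for doc_id in candidates:
        allowed = current_acl.get(doc_id)  # None means deleted
        if allowed == "public" or (allowed is not None and viewer in allowed):
            out.append(doc_id)
    return out

acl = {
    "soccer_pic_1": {"friend_a", "friend_b"},  # shared with two friends
    "starbucks_page": "public",
    # "soccer_pic_2" was deleted, so it no longer appears at all
}
candidates = ["soccer_pic_1", "soccer_pic_2", "starbucks_page"]
```

A friend querying this index would still see the soccer picture; a stranger would see only the public page; and the deleted picture surfaces for no one, no matter what the crawler once saw.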

So let’s start with this: If Facebook indeed demanded that Google not use publicly available Facebook data, it’s certainly understandable why Google wouldn’t agree to the deal. It may not seem obvious, but there are an awful lot of publicly available Facebook pages and data out there. Starbucks, for example, is more than happy to let anyone see its Facebook page, whether or not you’re logged in. And then there’s all that Facebook Open Graph data out on the public web – tons of sites show Facebook status updates, Like counts, and so on in a public fashion. In short, asking Google not to leverage that data in anything that might constitute a “social service” is anathema to a company whose claimed mission is to crawl all publicly available information, organize it, and make it available.

It’s one thing to ask that Google not use Facebook’s own social graph and private data to build new social services – after all, the social graph is Facebook’s crown jewels. But it’s quite another thing to ask Google to ignore other public information completely.

From Google’s point of view, Facebook was crippling future products and services that Google might create, which was tantamount to an insurance policy of sorts that Google wouldn’t become a strong competitor, at least not one that  leverages public information from Facebook. Google balked. If Facebook’s demand could have been interpreted as also applying to Google’s search results, well, that’s a stone cold deal killer.

I certainly understand why Facebook might ask for what they did; it’s not crazy. Google might well have responded by narrowing the deal, saying “Fine, you don’t build a search engine, and we won’t build a social network. But we should have the right to create other kinds of social services.” As far as I know, Google didn’t choose to say that. (Microsoft apparently did.) And I think I know why: The two companies realized they were dancing on the head of a pin. Search = social, social = search. They couldn’t figure out a way to tease the two apart. Microsoft has cast its lot with Facebook; Google, not so much.

When high stakes deals fall apart, both sides usually claim the other is at fault, and that certainly seems to be the case here. It’s also the case with the Twitter deal, which I’ve gotten a fair amount of new information about as well. I hope to dig into that in another post. For now, I want to pull back a second and comment on what I think is really going on here, at least from the perspective of a longer view.

Our Cherished Search Paradigm Is Broken (But We Will Fix It….Eventually)

I think what we have here is a clear indication that the search paradigm we’ve operated under for a decade or so is broken. That paradigm stems from Google’s original letter to shareholders in 2004. Remember this line?

Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating.

In many cases, it’s simply naive to claim Google is unbiased or objective. Google often favors its own properties over others, as Danny points out in Real-Life Examples Of How Google’s “Search Plus” Pushes Google+ Over Relevancy and others have also detailed. But there is a reason: if you’re going to show results from all other possible contenders, replete with their associated UI and functional bells and whistles (as Google does with its own Maps, Pages, Plus etc.), well, it’s nearly impossible now to determine which service is the right answer to a particular person’s query. Not to mention, you need to put a deal in place to get all the functionality of the service. Instead, Google has opted, in many cases, to go with their own stuff.

This is not a new idea, by the way. Yahoo’s been doing it this way from the beginning. The contentious issue is that biasing some results toward Google’s own products runs counter to Google’s founding philosophy.

I have a theory as to why all this is happening, and I don’t entirely blame Google. Back when search wasn’t personalized, Google could defensibly say that one service was better than another because it got more traffic, was linked to more (better PageRank), and so on. Back when everyone got the same results and the web was one homogenous glob of HTML, well, you could claim “this is the best result for the general population.” But personalized search has broken that framework – I lamented this back in 2008 with this post: Search Was Our Social Glue. But That Is Dissolving (more here).

With the rise of Facebook and the app economy, the problem of search has become terribly complicated. If you want to have results from Facebook in your search, well, that search service has to do a deal with Facebook. But what if you want results from your running app (I have hundreds of rides and runs logged on AllSportGPS, for example)? Or Instagram? Or Path, for that matter? Do they all have to do deals with Google and Bing? There are so many unconnected pieces of the Internet now (millions of apps, most of our own Facebook experiences, etc. etc.) that what’s a good personal result for one person is not necessarily good for another. If Google is to stay true to its original mission, it needs a new framework and a massive number of new signals – new glue – to put the pieces back together.

There are several ways to resolve this, and in another post, I hope to explore them (one of them, of course, is simply that everyone should just go through Facebook. That’s the vision of Open Graph). But for now, I’m just going to say this: The issues raised by this kerfuffle are far larger than Google vs. Facebook, or Google vs. Twitter. We are in the midst of a major search paradigm shift, and there will be far more tears before it gets resolved. But resolve it must, and resolve it will.

Compete To Death, or Cooperate to Compete?

By - January 11, 2012

(image) **Updated at 3 PM PST with more info about Facebook/Google negotiations…please read to the bottom…**

In today’s business climate, it’s not normal for corporations to cooperate with each other when it comes to sharing core assets. In fact, it’s rather unusual. Even when businesses do share, it’s usually for some ulterior motive, a laying of groundwork for future chess moves which ensure eventual domination over the competition.

Such is the way of business, particularly at the highest and largest levels, such as those now inhabited by top Internet players.

Allow me to posit that this philosophy is going to change over the next few decades, and further, indulge me as I try to apply a new approach to a very present case study: That of Google, Facebook, and Twitter as it relates to Google’s search index and the two social services’ valuable social interaction datasets.

This may take a while, and I will most likely get a fair bit wrong. But it seems worth a shot, so if you feel like settling in for some Thinking Out Loud, please come along.

First, some abridged background. Back in 2009, on the Web 2 Summit stage of all places (yes, I was the emcee), Google, Microsoft, Facebook and Twitter announced a flurry of deals, some of which were worked out in a last minute fury of negotiations. Early in the conference Microsoft announced it would incorporate Twitter and Facebook feeds into its new search engine Bing. Not to be outdone, Google announced a deal with Twitter the next day. However, Google did not announce a deal with Facebook, and the two companies have never come to terms. Meanwhile, Microsoft has continued to deepen its relationship with Facebook data, to the point of viewing that relationship as a key differentiator between Bing and Google search.

All of these deals have business terms, some of them financial, all with limits on how data is used and presented, I would presume. Marissa Mayer of Google told me on the Web 2 stage that there were “financial terms” in Google’s deal with Twitter, but would not give me any details (nor should she have, frankly).

Fast forward to the middle of last year, when the Google/Twitter deal was set to expire. At about the same time as renewal was being negotiated, Google launched Google+, a clear Facebook and Twitter competitor. For reasons that seem in dispute (Google said yesterday Twitter walked away, Twitter has not made a public statement about why things fell apart), the renewal never happened.

And then yesterday, Google incorporated Google+  results into its main search index, sparking a debate in the blogosphere that rages on today – Is Google acting like a monopolist? Does Facebook or Twitter have a “right” to be included in Google results? Why didn’t Google try to negotiate inclusion with its rivals prior to making such a clearly self-serving move?

Google execs, including Chair Eric Schmidt, told SEL’s Danny Sullivan that the company would be happy to talk to both companies to figure out ways to incorporate Twitter and Facebook into Google search, but clearly, those talks could have happened prior to the G+ launch, and they didn’t (or they did, and did not work out – I honestly have no idea). When Danny pointed out that Twitter pages are publicly available, Schmidt demurred, saying that Google prefers to “have a conversation” with a company before using its pages in such a wholesale fashion (er, so did they have one, or not? Anyway…). He has a point (commercial deals are de rigueur), but…that conversation happened last year, and apparently ended without a deal. And around we go…

What’s clear is this: All the companies involved in this great data spat are acting in what they believe to be their own self interest, and the greatest potential loser, at least in the short term, is the search consumer, who will not be seeing “all the world’s information” but rather “that information which is readily available to Google on terms Google prefers.”

The key to that last sentence is the phrase “what they believe to be their own self interest.” Because I think there’s an argument that, in fact, their true self interest is to open up and share with each other.

Am I nuts? Perhaps. But indulge my insanity for a bit.

The Cost of Blinkered Competition

Back in the Web 1.0 days, when I was running The Industry Standard, I had a number of strong competitors. It’s probably fair to say we didn’t like each other much – we competed daily for news stories, advertiser dollars, and the loyalty of readers. The market for information about the tech industry was limited – there were only so many people interested in our products, and only so much time in the day for them to engage with us.

My strategy to win was clear: We’d make the best product, have the best people, and we’d win on quality. When I heard about one of our competitors badmouthing us, I’d try to ignore it – we were winning anyway: We had the dominant marketshare, the most revenues ($120mm in 2000, with $21mm in EBITDA), and the best product.

Then something strange happened: an emissary from a competitor called and asked for a meeting. Intrigued, I took it, and was surprised by his offer: Let’s put our two companies together. Apart, he argued, we were simply tearing each other down. Together, we could consolidate the market and ensure a long-term win.

I considered his idea, but for various reasons, we didn’t take him up on it. I felt like we had the dominant position, that his offer was driven by weakness, not intellectual soundness, and I also felt that a combination would require that my shareholders take on too much dilution.

Two years later, both of us were out of business.

Now, I’m not sure it would have mattered, given the great crash of 2001. But what is certainly true is that I could have thought a bit deeper about what this fellow was proposing. Back in the days of print-bound information, we were essentially competing on what were publicly available assets: stories, particularly interpretations and reportage around those stories, and people: writers, editors, ad sales executives, and management. Short of combining companies, there wasn’t really any other way for us to collaborate, or at least, so I thought.

But perhaps there could have been. It’s been more than a decade since that meeting, and I still wonder: perhaps we could have shared back-end resources like operations, publishing contracts, etc. and saved tens of millions of dollars. We’d compete just on how we leveraged those public assets (stories, people). Perhaps we might have survived the wipeout of the dot com crash. We’ll never know. Since those publications died, the blogosphere has claimed the market, and now it’s far larger than the one we lost back in 2001. Of course I started Federated Media to participate in that model, and now FM has as large a revenue run rate as the Industry Standard, across a far more diverse market.

Why am I bringing this up? Because I think there’s a win-win in this whole Google/Facebook/Twitter dust up, but it’s going to take some Thinking Differently to make it happen.

Imagine Twitter and Facebook offer efficient access to all of their “public” pages – those that their users are happy to share with anyone (or even just with their pre-defined “circles”) – to Google under some set of reasonable usage terms. Financial terms would be minimal – perhaps just enough to cover the costs of serving such a large firehose of data to the search giant. Imagine further that Google, in return, agrees to incorporate this user data in a fashion that is fair – i.e., one that doesn’t favor any service over any other, be it Twitter, Google+, or Facebook.

Now, negotiating what is “fair” will be complicated, and honestly, should be subject to iteration as all parties learn usage patterns. And of course all this should be subject to consumer control – if I want to see only Twitter or Facebook or Google+ results in particular searches (or all results for that matter), I should have that right.

And this leads me to my point. Such a set up, regardless of how painful it might be to get right, would create a shared class of assets that would have to compete at the level of the consumer. In other words, the best service for the query wins.
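Mechanically, such a commons implies one provider-blind ranking function applied to every party’s contributions. A toy Python sketch of that idea – the term-frequency scoring is a deliberately crude stand-in, and all provider names and data are invented:

```python
# Sketch of a "public commons" ranker: pool every provider's public
# items and rank them with a single shared scoring function, so no
# provider's content gets preferential placement. Hypothetical data.

def merged_results(query, providers, score):
    """Pool all providers' documents and rank them with one shared,
    provider-blind score, highest first."""
    pool = [(score(query, doc), name, doc)
            for name, docs in providers.items()
            for doc in docs]
    pool.sort(key=lambda t: t[0], reverse=True)
    return [(name, doc) for _, name, doc in pool]

# crude stand-in for relevance: how often the query term appears
term_freq = lambda q, doc: doc.lower().split().count(q.lower())

providers = {
    "twitter": ["soccer soccer tonight", "lunch"],
    "google+": ["soccer maybe"],
    "facebook": ["soccer soccer soccer photos"],
}
ranked = merged_results("soccer", providers, term_freq)
```

The hard part, of course, isn’t the merge – it’s agreeing on a scoring function all parties accept as fair, and auditing that no one quietly tilts it.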

That’s always been Google’s stated philosophy: the best answer for the question at hand. Danny gets to this point in a piece posted last night (which I just saw as I was writing this): Search Engines Should Be Like Santa From “Miracle On 34th Street”. In it he argues that Google’s great strength has been its pattern of sending people to its competitors. And he upbraids Google for violating that principle with its Google+ integration.

It doesn’t have to be this way. It’s not only Google that’s at fault here. Facebook and Google have not been able to come to terms on how to share data (more on that below*), and Twitter clearly wants some kind of value if it is to share its complete firehose with the search giant. Imagine if all three were to agree on minimal terms, creating a public commons of social data. Yes, that would put Google in an extreme position of trust (not to mention imperil its toddler Google+ service), but covenants can be put in place that allow parties to terminate sharing for clear breaches which demonstrate one party favoring itself over others.

Were such a public commons to be created, then the real competition could start: at the level of how each service interprets that data, and adds value to it in various ways.

Four years ago to the month, I wrote this post: It’s Time For Services on The Web to Compete On More Than Data

In it I said: “It’s time that services on the web compete on more than just the data they aggregate…”

I think in the end, Facebook will win based on the services it provides for that data. Set the data free, and it will come back to roost wherever it’s best used. And if Facebook doesn’t win that race, well, it’ll lose over time anyway. Such a move is entirely in line with the company’s nascent philosophy, and would be a massively popular move within the ouroborosphere (my name for all things Techmeme).

Compete on service, Facebook, it’s where the world is headed anyway!

Two and a half years ago, as it became clear Facebook’s “nascent philosophy” had changed (and as Twitter rose in stature), I followed up with this post: Google v. Facebook? What We Learn from Twitter. In that post, I said:


I think it’s a major strategic mistake to not offer (Facebook’s pages and social graph) to Google (and anyone else that wants to crawl it.) In fact, I’d argue that the right thing to do is to make just about everything possible available to Google to crawl, then sit back and watch while Google struggles with whether or not to “organize it and make it universally available.” A regular damned if you do, damned if you don’t scenario, that….

For an example of what I mean, look no further than Twitter. That service makes every single tweet available as a crawlable resource. And Google certainly is crawling Twitter pages, but the key thing to watch is whether the service is surfacing “superfresh” results when the query merits it. So far, the answer is a definitive NO.

Why?

Well, perhaps I’m being cynical, but I think it’s because Google doesn’t want to push massive value and traffic to Twitter without a business deal in place where it gets to monetize those real time results.

Is that “organizing the world’s information and making it universally available?” Well, no. At least, not yet.

By making all its information available to Google’s crawlers (and fixing its terrible URL structure in the process), Facebook could shine an awfully bright light on this interesting conflict (of) interest.

Thanks to Google’s inclusion of Google+ in its search index, that light has now been shone, and what we’re seeing isn’t all good. I’m of the opinion that a few years from now, each and every one of us will have the expectation and the right to incorporate our own social data into web-wide queries. If the key parties involved in search and social today don’t figure out a way to make that happen, well, they may end up just like The Industry Standard did back in 2001.
But not to worry: someone else will come along, pick up the pieces, and figure out how to play a more cooperative and federated game.

*Update: I’ve heard from a source with knowledge of the Facebook/Google negotiations over integration of Facebook’s data into Google’s search index. This source – who, while very credible, does come from Facebook’s side of the debate – explained to me that during the 2009 negotiations, Google balked at Facebook’s request that Facebook data be protected in the same fashion as it is in Facebook’s deal with Bing. In essence, Google claimed there was no way to keep data within circles of friends in the context of a Google search. According to this source: “Senior executives at Google insisted that for technical reasons all information would need to be public and available to all.” But the source goes on to point out that in Google’s own integration of Google+, Google does exactly what it claims it could not do with Facebook data. “The only reason Facebook has a Bing integration and not a Google integration is that Bing agreed to terms for protecting user privacy that Google would not,” this source told me.

Also, and quite interestingly, Google refused to agree to a clause which stated that Google could not use the data to build its own social network. Now, this is where things can get very dicey. It’s very hard to prove whether or not a company is using the data in particular ways, and had Google agreed to that clause, it might have severely limited its ability to build Google+. What is clear is that Microsoft agreed to Facebook’s terms.