Book Related Archives | Page 12 of 29 | John Battelle's Search Blog

It’s Not Whether Google’s Threatened. It’s Asking Ourselves: What Commons Do We Wish For?

By - February 02, 2012

If Facebook’s IPO filing does anything besides mint a lot of millionaires, it will be to shine an unsettling light on a fact most of us would rather not acknowledge: the web as we know it is, like our polar ice caps, under severe, long-term attack by forces of our own creation.

And if we lose the web, well, we lose more than funny cat videos and occasionally brilliant blog posts. We lose a commons, an ecosystem, a “tangled bank” where serendipity, dirt, and iterative trial and error drive open innovation. Google’s been the focus of most of this analysis (hell, I called Facebook an “existential threat” to Google on Bloomberg yesterday), but I’d like to pull back for a second.

This post has been brewing in me for a while, but I was moved to start writing after reading this piece in Time:

Is Google In Danger of Being Shut Out of the Changing Internet?

The short answer is Hell Yes. But while I’m a fan of Google (for the most part), to me the piece is focused too narrowly on what might happen to one company, rather than to the ecosystem which allowed that company to thrive. It does a good job of outlining the challenges Google faces, which are worth recounting (and expanding upon) as a proxy for the larger question I’m attempting to elucidate:

1. The “old” Internet is shrinking, and being replaced by walled gardens over which Google’s crawlers can’t climb. Sure, Google can crawl Facebook’s “public pages,” but those represent a tiny fraction of the “pages” on Facebook, and are not informed by the crucial signals of identity and relationship which give those pages meaning. Similarly, Google can crawl the “public pages” of Apple’s iTunes store on the web, but all the value creation in the mobile iOS appworld is behind the walls of Fortress Apple. Google can’t see that information, can’t crawl it, and can’t “make it universally available.” Same for Amazon with its Kindle universe, Microsoft’s Xbox and mobile worlds, and many others.

2. Google’s business model depends on the web remaining open, and given #1 above, that model is imperiled. It’s damn hard to change business models, but with Google+ and Android, the company is trying. The author of the Time piece is skeptical of Google’s chances of recreating the Open Web with these new tools, however.

He makes a good point. But to me, the real issue isn’t whether Google’s business model is under attack by forces outside its control. Rather, the question is far more existential in nature: What kind of a world do we want to live in?

I’m going to say that again, because it bears real consideration: What kind of a world do we want to live in? As we increasingly leverage our lives through the world of digital platforms, what are the values we wish to hold in common? I wrote about this issue a month or so ago: On This Whole “Web Is Dead” Meme. In that piece I outlined a number of core values that I believe are held in common when it comes to what I call the “open” or “independent” web. They also bear repeating (I go into more detail in the post, should you care to read it):

- No gatekeepers. The web is decentralized. Anyone can start a web site. No one has the authority (in a democracy, anyway) to stop you from putting up a shingle.

- An ethos of the commons. The web developed over time under an ethos of community development, and most of its core software and protocols are royalty free or open source (or both). There wasn’t early lockdown on what was and wasn’t allowed. This created chaos, shady operators, and plenty of dirt and dark alleys. But it also allowed extraordinary value to blossom in that roiling ecosystem.

- No preset rules about how data is used. If a site collects information from or about its users, it has the right to do other things with that data, assuming, again, that it’s doing things that benefit all parties concerned.

- Neutrality. No one site on the web is any more or less accessible than any other site. If it’s on the web, you can find it and visit it.

- Interoperability. Sites on the web share common protocols and principles, and determine independently how to work with each other. There is no centralized authority which decides who can work with whom, in what way.

I find it hard to argue with any of the points above as core values of how the Internet should work. And it is these values that created Google and allowed the company to become the world beater it has been these past ten or so years. But if you look at this list of values, and ask if Apple, Facebook, Amazon, and the thousands of app makers align with them, I am afraid the answer is mostly no. And that’s the bigger issue I’m pointing to: We’re slowly but surely creating an Internet that is abandoning its original values for…well, for something else that as yet is not well defined.

This is why I wrote Put Your Taproot Into the Independent Web. I’m not out to “save Google”; I’m focused on trying to understand what the Internet would look like if we don’t pay attention to our core shared values.

And it’s not fair to blame Apple, Facebook, Amazon, or app makers here. In conversations with various industry folks over the past few months, it’s become clear that there are more than business model issues stifling the growth of the open web. In no particular order, they are:

1. Engineering. It’s simply too hard to create super-great experiences on the open web. For many high value products and services, HTML and its associated scripting languages, including HTML5, are messy, incomplete, and are not as fast, clean, and elegant as coding for iOS or the Facebook ecosystem. I’ve heard this over and over again. This means developers are drawn to the Apple universe first, web second. Value accrues where engineering efforts pay off in a more compelling user experience.

2. Mobility. The PC-based HTML web is hopelessly behind mobile in any number of ways. It has no eyes (camera), no ears (audio input), no sense of place (GPS/location data). Why would anyone want to invest in a web that’s deaf, dumb, blind, and stuck in one place?

3. Experience. The open web is full of spam, shady operators, and blatant falsehoods. Outside of a relatively small percentage of high quality sites, most of the web is chock full of popup ads and other interruptive come-ons. It’s nearly impossible to find signal in that noise, and the web is in danger of being overrun by all that crap. In the curated gardens of places like Apple and Facebook, the weeds are kept to a minimum, and the user experience is just…better.

So, does that mean the Internet is going to become a series of walled gardens, each subject to the whims of that garden’s liege?

I don’t think so. Scroll up and look at that set of values again. I see absolutely no reason why they cannot and should not be applied to how we live our lives inside the worlds of Apple, Facebook, Amazon, and the countless apps we have come to depend upon. But it requires a shift in our relationship to the Internet. It requires that we, as the co-creators of value through interactions, data, and sharing, take responsibility for ensuring that the Internet continues to be a commons.

I expect this will be less difficult than it sounds. It won’t take a political movement or a wholesale migration from Facebook to more open services. Instead, I believe in the open market of ideas, of companies and products and services which identify the problems I’ve outlined above, and begin to address them through innovative new approaches. I believe in the Internet. Always have, and always will.

Related:

Predictions 2012 #4: Google’s Challenging Year

We Need An Identity Re-Aggregator (That We Control)

Set The Data Free, And Value Will Follow

A Report Card on Web 2 and the App Economy

The InterDependent Web

On This Whole “Web Is Dead” Meme


Where Good Ideas Come From: A Tangled Bank

By - January 31, 2012

After pushing my way through a number of difficult but important reads, it was a pleasure to rip through Steven Johnson’s Where Good Ideas Come From: A Natural History of Innovation. I consider Steven a friend and colleague, and that will color my review of his most recent work (it came out in paperback last Fall). In short, I really liked the book. There, now Steven will continue to accept my invitations to lunch…

Steven is the author of seven books, and I admire his approach to writing. He mixes story with essay, and has an elegant, spare style that I hope to emulate in my next book. If What We Hath Wrought is compared to his work, I’ll consider that a win.

Where Good Ideas Come From is an interesting, fast-paced read that outlines the kinds of environments which spawn world-changing ideas. In a sense, this book is the summary of “lessons learned” from several of Johnson’s previous books, which go deep into one really big idea – The Invention of Air, for example, or the discovery of a cure for cholera. It’s also a testament to another of Johnson’s obsessions – the modern city, which he points out is a far more likely seedbed of great ideas than the isolated suburb or cabin-on-a-lake-somewhere.

Johnson draws a parallel between great cities and the open web – both allow for many ideas to bump up against each other, breed, and create new forms. 

Some environments squelch new ideas; some environments seem to breed them effortlessly. The city and the Web have been such engines of innovation because, for complicated historical reasons, they are both environments that are powerfully suited for the creation, diffusion, and adoption of good ideas.

While more than a year old, Where Good Ideas Come From is an important and timely book, because the conclusions Johnson draws are instructive to the digital world we are building right now – will it be one that fosters what Zittrain calls generativity, or are we feeding ecosystems that are closed in nature? Johnson writes:

…openness and connectivity may, in the end, be more valuable to innovation than purely competitive mechanisms. Those patterns of innovation deserve recognition—in part because it’s intrinsically important to understand why good ideas emerge historically, and in part because by embracing these patterns we can build environments that do a better job of nurturing good ideas…

…If there is a single maxim that runs through this book’s arguments, it is that we are often better served by connecting ideas than we are by protecting them. ….when one looks at innovation in nature and in culture, environments that build walls around good ideas tend to be less innovative in the long run than more open-ended environments. Good ideas may not want to be free, but they do want to connect, fuse, recombine. They want to reinvent themselves by crossing conceptual borders. They want to complete each other as much as they want to compete.

I couldn’t help but think of the data and commercial restrictions imposed by Facebook and Apple as I read those words. As I’ve written over and over on this site, I’m dismayed by the world we’re building inside Apple’s “appworld,” on the one hand, and the trend toward planting our personal and corporate taproots too deeply in the soils of Facebook, on the other. Johnson surveys centuries of important, world-changing ideas, often relating compelling personal narratives on the way to explaining how those ideas came to be not through closed, corporate R&D labs, but through unexpected collisions between passions, hobbies, coffee house conversations, and seeming coincidence. If you’re ever stuck, Johnson advises, go outside and bump into things for a while. I couldn’t agree more.

One concept Johnson elucidates is the “adjacent possible,” a theory attributed to biologist Stuart Kauffman. In short, the adjacent possible is the space inhabited by “what could be” based on what currently is. In biology and chemistry, for example, it’s the potential for various combinations of molecules to build self-replicating proteins. When that occurs, new adjacent possibilities open up, to the point of an explosion in life and order.

Johnson applies this theory to ideas, deftly demonstrating how Darwin’s fascination with the creation of coral reefs led – over years – to what is perhaps the most powerful idea of modernity – evolution. He concludes that while most of us understand Darwin’s theory as mostly about “survival of the fittest,” perhaps its greatest insight is how it has “revealed the collaborative and connective forces at work in the natural world.” Darwin’s famous metaphor for this insight is the tangled bank:

It is interesting to contemplate a tangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent upon each other in so complex a manner, have all been produced by laws acting around us. . .

Johnson also extols the concept of “liquid networks,” where information freely flows between many minds; of “slow hunches,” where ideas develop over long periods of time; and the importance of noise, serendipity, and error in the development of good ideas. He explores “exaptation” – the repurposing of one idea for another use – and the concept of “platforms” that allow each of these concepts, from liquid networks to serendipity and exaptation, to blossom (Twitter is cited as such a platform).

Johnson concludes:

Ideas rise in crowds, as Poincaré said. They rise in liquid networks where connection is valued more than protection. So if we want to build environments that generate good ideas—whether those environments are in schools or corporations or governments or our own personal lives—we need to keep that history in mind, and not fall back on the easy assumptions that competitive markets are the only reliable source of good ideas. Yes, the market has been a great engine of innovation. But so has the reef.

Amen, I say. I look forward to our great tech companies – Apple and Facebook amongst them – becoming more tangled bank than carefully pruned garden.

A nice endcap to the book is a survey Johnson took of great ideas across history. He places each idea on an XY grid according to whether it was generated by an individual or a network of individuals (the X axis), and in a commercial or non-commercial environment (the Y axis). The results are pretty clear: ideas thrive in “non-market/networked” environments.

Johnson's chart of major ideas emerging during the 19th and 20th centuries

This doesn’t mean those ideas don’t become the basis for commerce – quite the opposite in fact. But this is a book about how good ideas are created, not how they might be exploited. And we’d be well advised to pay attention to that as we consider how we organize our corporations, our governments, and ourselves – we have some stubborn problems to solve, and we’ll need a lot of good ideas if we’re going to solve them.

Highly recommended.

Next up on the reading list: Inside Apple: How America’s Most Admired–and Secretive–Company Really Works by Adam Lashinsky, and Republic, Lost: How Money Corrupts Congress–and a Plan to Stop It, by Larry Lessig.

####

Other works I’ve reviewed:

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (my review)

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

 

What Happens When Sharing Is Turned Off? People Don’t Dance.

By - January 30, 2012

One of only two photos to emerge from last night's Wilco concert, image Eric Henegen

Last night my wife and I did something quite rare – we went to a concert on a Sunday night, in San Francisco, with three other couples (Wilco, playing at The Warfield). If you don’t have kids and don’t live in the suburbs, you probably think we’re pretty lame, and I suppose compared to city dwellers, we most certainly are. But there you have it.

So why am I telling you about it? Because something odd happened at the show: Wilco enforced a “no smartphone” rule. Apparently lead singer Jeff Tweedy hates looking out at the audience and seeing folks waving lit phones back at him. Members of the Warfield staff told me they didn’t like the policy, but they enforced it – quite strictly, I might add. It created a weird vibe – folks didn’t even take out their phones for fear they might be kicked out for taking a picture of the concert. (A couple of intrepid souls did sneak a pic in, as you can see at left…)

And… no one danced, not till the very end, anyway. I’ve seen Wilco a few times, and I’ve never seen a more, well, motionless crowd. But more on that later.

Now, I have something of a history when it comes to smart phones and concerts. Back in 2008 I was a founding partner in a new kind of social music experiment we called “CrowdFire.” In my post explaining the idea, I wrote:

Over the course of several brainstorming sessions… an idea began to take shape based on a single insight: personal media is changing how we all experience music. When I was at Bonnaroo in 2007, everyone there had a cell phone with a camera. Or a Flip. Or a digital camera. And when an amazing moment occurred, more folks held up their digital devices than they did lighters. At Bonnaroo, I took a picture that nails it for me – the image at left. A woman capturing an incredible personal memory of an incredible shared experience (in this case, it was Metallica literally blowing people’s minds), the three screens reflecting the integration of physical, personal, and shared experiences. That image informed our logo, as you can see (below).

So – where did all those experiences go (Searchblog readers, of course, know I’ve been thinking about this for a while)? What could be done with them if they were all put together in one place, at one time, turned into a great big feed by a smart platform that everyone could access? In short, what might happen if someone built a platform to let the crowd – the audience – upload their experiences of the music to a great big database, then mix, mash, and meld them into something utterly new?

Thanks to partners like Microsoft, Intel, SuperFly, Federated Media and scores of individuals, CrowdFire actually happened at Outside Lands, both in 2008 and in 2009. It was a massive effort – the first year literally broke AT&T’s network. But it was clear we were onto something. People want to capture and share the experience of being at a live concert, and the smart phone was clearly how they were now doing it.

It was the start of something – brainstorming with several of my friends prior to CrowdFire’s birth, we imagined a world where every shareable experience became data that could be recombined to create fungible alternate realities. Heady stuff, stuff that is still impossible, but I feel will eventually become our reality as we careen toward a future of big data and big platforms.

Since those early days, the idea of CrowdFire has certainly caught on. In early 2008, we had to build the whole platform from scratch, but now, folks use services like Instagram, Twitter, Facebook, and Foursquare to share their experiences. Many artists share back, sending out photos and tweets from on stage. Most major festivals and promoters have some kind of fan photo/input service that they promote as well. CrowdFire was a great idea, and had I not been overwhelmed with running FM, we might have turned it into a real company/service that could have integrated all this output and created something big in the world. But it was a bit ahead of its time.

What has happened since that first Outside Lands is that at every concert I’ve attended, I’ve noticed the crowd’s increasing connection to their smart phones – taking pictures, group texting, tweeting, and sharing the moments with their extended networks across any number of social services. It’s hard to find an experience more social than a big concert, and the thousands of constantly lit smartphone screens are a testament to that fact, as are the constant streams of photos and status updates coming out of nearly every show I’ve seen, or followed enviously online.

Which brings me back to last night. I was unaware of the policy, so as Wilco opened at the sold-out Warfield, something felt off to me. Here were two thousand San Francisco hipsters, all turned attentively toward the stage – but most of them had their hands in their pockets! As the band went into the impossible-not-to-move-to “Art of Almost” and “I Might,” I started wondering what was up – why weren’t people at least swaying?! The music was extraordinary, the sound system perfectly tuned. But everyone seemed very intent on…well…being intent. They stared forward, hands in pockets, nodded their heads a bit, but no one danced. It was a rather odd vibe. It was as if the crowd had been admonished to not be too … expressive.

Then it hit me. Nobody had their phone out. I turned to a security guard and asked why no one was holding up a phone. That’s when I learned of Wilco’s policy.

It seemed to me that the rule had the unintended consequence of muting the crowd’s ability to connect to the joy of the moment. Odd, that. We’re so connected to these devices and their ability to reflect our own sense of self that when we’re deprived of them, we feel somehow less…human.

My first reaction was “Well, this sucks,” but on second thought, I got why Tweedy wanted his audience to focus on the experience in the room, instead of watching and sharing it through the screens of their smartphones. By the encore, many people were dancing – they had loosened up. But in the end, I’m not sure I agree with Wilco – they’re fighting the wrong battle (and losing extremely valuable word of mouth in the process, but that’s another post).

There are essentially two main reasons to hold a phone up at a show. First, to capture a memory for yourself, a reminder of the moment you’re enjoying. And second, to share that moment with someone – to express your emotions socially. Both seem perfectly legitimate to me. (I’m not down with doing email or taking a call during a show, I’ll admit).

But the smart phone isn’t a perfect device, as we all know. It forces the world into a tiny screen. It runs out of battery, bandwidth, and power. It distracts us from the world around us. There are too many steps – too much friction – between capturing the things we are experiencing right now and the sharing of those things with people we care about.

But I sense that the sea of smart phones lit up at concerts is a temporary phenomenon. The integration of technology, sharing, and social into our physical world, on the other hand, well that ain’t going away. In the future, it’s going to be much harder to enforce policies like Wilco’s, because the phone will be integrated into our clothing, our jewelry, our eyeglasses, and possibly even ourselves. When that happens – when I can take a picture through my glasses, preview it, then send it to Instagram using gestures from my fingers, or eyeblinks, or a wrinkle of my nose – when technology becomes truly magical – asking people to turn it off is going to be the equivalent of asking them not to dance – to not express their joy at being in the moment.

And why would anyone want to do that?

The Future of War (From Jan., 1993 to the Present)

By - January 24, 2012

(image is a shot of my copy of the first Wired magazine, signed by our founding team)

I just read this NYT piece on the United States’ approach to unmanned warfare: Do Drones Undermine Democracy? From it:

There is not a single new manned combat aircraft under research and development at any major Western aerospace company, and the Air Force is training more operators of unmanned aerial systems than fighter and bomber pilots combined. In 2011, unmanned systems carried out strikes from Afghanistan to Yemen. The most notable of these continuing operations is the not-so-covert war in Pakistan, where the United States has carried out more than 300 drone strikes since 2004.

Yet this operation has never been debated in Congress; more than seven years after it began, there has not even been a single vote for or against it. This campaign is not carried out by the Air Force; it is being conducted by the C.I.A. This shift affects everything from the strategy that guides it to the individuals who oversee it (civilian political appointees) and the lawyers who advise them (civilians rather than military officers).

It also affects how we and our politicians view such operations. President Obama’s decision to send a small, brave Navy Seal team into Pakistan for 40 minutes was described by one of his advisers as “the gutsiest call of any president in recent history.” Yet few even talk about the decision to carry out more than 300 drone strikes in the very same country.

Read the whole piece. Really, read it. If any article in the past year or so does a better job of displaying how what we’ve built with technology is changing the essence of our humanity, I’d like to read it.

For me, this was a pretty powerful reminder. Why? Because we put the very same idea on display as the very first cover story of Wired, nearly 20 years ago. Written by Bruce Sterling, whose star has only become brighter in the past two decades, it predicts the future of war with an eerie accuracy. In the article, Sterling describes “modern Nintendo training for modern Nintendo war.” Sure, if he were all-seeing, he might have said Xbox, but still…here are some quotes from nearly 20 years ago:

The omniscient eye of computer surveillance can now dwell on the extremes of battle like a CAT scan detailing a tumor in a human skull. This is virtual reality as a new way of knowledge: a new and terrible kind of transcendent military power.

…(Military planners) want a pool of contractors and a hefty cadre of trained civilian talent that they can draw from at need. They want professional Simulation Battle Masters. Simulation system operators. Simulation site managers. Logisticians. Software maintenance people. Digital cartographers. CAD-CAM designers. Graphic designers.

(Ed: Like my son playing Call of Duty?)

And it wouldn’t break their hearts if the American entertainment industry picked up on their interactive simulation network technology, or if some smart civilian started adapting these open-architecture, virtual-reality network protocols that the military just developed. The cable TV industry, say. Or telephone companies running Distributed Simulation on fiber-to-the-curb. Or maybe some far-sighted commercial computer-networking service. It’s what the military likes to call the “purple dragon” angle. Distributed Simulation technology doesn’t have to stop at tanks and aircraft, you see. Why not simulate something swell and nifty for civilian Joe and Jane Sixpack and the kids? Why not purple dragons?

(Ed: Skyrim, anyone?!)

Can governments really exercise national military power – kick ass, kill people – merely by using some big amps and some color monitors and some keyboards, and a bunch of other namby-pamby sci-fi “holodeck” stuff?

The answer is yes.

Say you are in an army attempting to resist the United States. You have big tanks around you, and ferocious artillery, and a gun in your hands. And you are on the march.

Then high-explosive metal begins to rain upon you from a clear sky. Everything around you that emits heat, everything around you with an engine in it, begins to spontaneously and violently explode. You do not see the eyes that see you. You cannot know where the explosives are coming from: sky-colored Stealths invisible to radar, offshore naval batteries miles away, whip-fast and whip-smart subsonic cruise missiles, or rapid-fire rocket batteries on low-flying attack helicopters just below your horizon. It doesn’t matter which of these weapons is destroying your army – you don’t know, and you won’t be told, either. You will just watch your army explode.

Eventually, it will dawn on you that the only reason you, yourself, are still alive, still standing there unpierced and unlacerated, is because you are being deliberately spared. That is when you will decide to surrender. And you will surrender. After you give up, you might come within actual physical sight of an American soldier.

Eventually you will be allowed to go home. To your home town. Where the ligaments of your nation’s infrastructure have been severed with terrible precision. You will have no bridges, no telephones, no power plants, no street lights, no traffic lights, no working runways, no computer networks, and no defense ministry, of course. You have aroused the wrath of the United States. You will be taking ferries in the dark for a long time.

Now imagine two armies, two strategically assisted, cyberspace-trained, post-industrial, panoptic ninja armies, going head-to-head. What on earth would that look like? A “conventional” war, a “non-nuclear” war, but a true War in the Age of Intelligent Machines, analyzed by nanoseconds to the last square micron.

Who would survive? And what would be left of them?

Who indeed.

The Singularity Is Weird

By - January 23, 2012

It’s been a while since I’ve posted a book review, but that doesn’t mean I’ve not been reading. I finished two tomes over the past couple of weeks, Ray Kurzweil’s The Singularity Is Near, and Steven Johnson’s Where Good Ideas Come From. I’ll focus on Kurzweil’s opus in this post.

Given what I hope to do in What We Hath Wrought, I simply had to read Singularity. I’ll admit I’ve been avoiding doing so (it’s nearly six years old now) mainly for one reason: The premise (as I understood it) kind of turns me off, and I’d heard from various folks in the industry that the book’s author was a bit, er, strident when it came to his points of view. I had read many reviews of the book (some mixed), and I figured I knew enough to get by.

I was wrong. The Singularity Is Near is not an easy book to read (it’s got a lot of deep and loosely connected science, and the writing could really use a few more passes by a structural editor), but it is an important one to read. As Kevin Kelly said in What Technology Wants, Kurzweil has written a book that will be cited over and over again as our culture attempts to sort out its future relationship to technology, policy, and yes, to God.

I think perhaps the “weirdness” vibe of Kurzweil’s work relates, in the end, to his rather messianic tone – he’s not afraid to call himself a “Singulatarian” and to claim this philosophy as his religion. I don’t know about you, but I’m wary of anyone who invents a new religion and then proclaims themselves the leader of it.

That’s not to say Kurzweil doesn’t have a point or two. The main argument of the book is that technology is moving far faster than we realize, and its exponential progress will surprise us all – within about thirty years, we’ll not only have the ability to compute most of the intractable problems of humanity, we’ll be able to transcend our bodies, download our minds, and reach immortality.

Or, a Christian might argue, we could just wait for the rapture. My problem with this book is that it feels about the same in terms of faith.

But then again, faith is one of those Very Hard Topics that most of us struggle with. And if you take this book at face value, it will force you to address that question. Which to me, makes it a worthy read.

For example, Kurzweil has faith that, as machines get smarter than humans, we’ll essentially merge with machines, creating a new form of humanity. Our current form is merely a step along the way to the next level of evolution – a level where we merge our technos with our bios, so to speak. Put another way, compared to what we’re about to become, we’re the equivalent of Homo Erectus right about now.

It’s a rather compelling argument, but a bit hard to swallow, for many reasons. We’re rather used to evolution taking a very long time – hundreds of generations, at the very least. But Kurzweil is predicting all this will happen in the next one or two generations – and should that occur, I’m pretty sure far more minds will be blown than merged.

And Kurzweil has a knack for taking the provable tropes of technology – Moore’s Law, for example – and applying them to all manner of things, like human intelligence, biology, and, well, rocks (Kurzweil calculates the computing power of a rock in one passage). I’m in no way qualified to say whether it’s fair to directly apply lessons learned from technology’s rise to all things human, but I can say it feels a bit off. Like perhaps he’s missing a high order bit along the way.

Of course, that could just be me clinging to my narrow-minded and entitled sense of Humanity As It’s Currently Understood. Now that I’ve read Kurzweil’s book, I’m far more aware of my own limitations, philosophically as well as computationally. And for that, I’m genuinely grateful.

Other works I’ve reviewed:

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

Facebook Coalition To Google: Don’t Be Evil, Focus On The User

By -

Last week I spent an afternoon down at Facebook, as I mentioned here. While at Facebook I met with Blake Ross, Director of Product (and well known in web circles as one of the creators of Firefox). Talk naturally turned to the implications of Google’s controversial integration of Google+ into its search results – a move that must both terrify (OMG, Google is gunning for us!) and delight (Holy cow, Google is breaking its core promise to its users!).

Turns out Ross had been quite busy the previous weekend, and he had a little surprise to show me. It was a simple hack, he said, some code he had thrown together in response to the whole Google+ tempest. But there was most certainly a gleam in his eye as he brought up a Chrome browser window (Google’s own product, he reminded me).

Blake had installed a bookmarklet onto his browser, one he had titled – in a nod to Google’s informal motto – “Don’t be evil.” For those of you who aren’t web geeks (I had to remind myself as well), a bookmarklet is “designed to add one-click functionality to a browser or web page. When clicked, a bookmarklet performs some function, one of a wide variety such as a search query or data extraction.”
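For readers who have never built one: a bookmarklet is just a bookmark whose “URL” uses the javascript: scheme, so clicking it runs the code against whatever page is currently loaded. Here’s a minimal sketch of the general shape – this is my own hypothetical example (the “promo-box” class name is invented), not Ross’s actual code:

```typescript
// Hypothetical bookmarklet body: hide every element carrying a
// "promo-box" class on the current page.
const code =
  '(function(){' +
  'document.querySelectorAll(".promo-box")' +
  '.forEach(function(el){el.style.display="none";});' +
  '})()';

// URI-encode the body so it survives being stored as a bookmark URL.
const bookmarklet = "javascript:" + encodeURIComponent(code);
```

Saving that `javascript:` string as a bookmark gives you the one-click behavior described above; Ross’s version simply did far more sophisticated work inside that wrapper.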

When engaged, this “Don’t be evil” bookmarklet did indeed do one simple thing: It turned back the hands of time, and made Google work the way it did before the integration of Google+ earlier this month.

It was a very elegant hack, more thoughtful than the one or two I had seen before – those simply took all references to Google+ out of the index. This one went much further, weaving together a number of Google’s own tools – including its “rich snippet” webmaster tool and its own organic search listings – to re-order not only the search engine results, but also the promotional Google+ boxes on the right side of the results, as well as the “typeahead” results that now feature only Google+ accounts (see example below, the first a search on my name using “normal Google” and then one using the bookmarklet).

After Blake showed me his work, we had a lively discussion about the implications of Facebook actually releasing such a tool. I mean, it’s one thing for a lone hacktivist to do this, it’s quite another for a member of the Internet Big Five to publicly call Google out. Facebook would need to vet this with legal, with management (this clearly had to pass muster with Mark Zuckerberg), and, I was told, Facebook wanted to reach out to others – such as Twitter – and get their input as well.

Due to all this, I had to agree to keep Blake’s weekend hack private till Facebook figured out whether (and how) it would release Ross’s work.

Today, the hack goes public. It’s changed somewhat – it now resides at a site called “Focus On The User” and credit is given to engineers at Facebook, Twitter, and Myspace, but the basic implication is there: This is a tool meant to directly expose Google’s recent moves with Google+ as biased, hardcoded, and against Google’s core philosophy (which besides “don’t be evil,” has always been about “focusing on the user”).

Now, this wasn’t what I meant last week when I asked what a Facebook search engine might look like, but one can be very sure: this is certainly how Facebook and many others want Google to look once again.

From the site’s overview:

We wanted to see how much better social search could be for consumers if Google chose to use all of the information already in its index. We think the results speak for themselves. Specifically, we created a bookmarklet that uses Google’s own relevance measure—the ranking of their organic search results—to determine what social content should appear in the areas where Google+ results are currently hardcoded. That includes the box on the right; the typeahead; and the indent under the first result for brand searches like “Macy’s” or “New York Times”.

All of the information in this demo comes from Google itself, and all of the ranking decisions are made by Google’s own algorithms. No other services, APIs or proprietary data stores are accessed.
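The mechanism the overview describes – using Google’s own organic ranking to decide what fills the social boxes, instead of hardcoded Google+ content – can be sketched like this. This is a toy illustration of the idea, with all names and the host list invented by me:

```typescript
// An organic result is just a URL plus its rank position on the page.
interface OrganicResult { url: string; rank: number; }

// Illustrative list of social sites; the real tool's list may differ.
const SOCIAL_HOSTS = ["facebook.com", "twitter.com", "plus.google.com", "myspace.com"];

// Fill the "social box" with whichever social profile Google's own
// organic algorithm ranked highest, rather than a hardcoded one.
function pickSocialBox(results: OrganicResult[]): string | null {
  const social = results
    .filter(r => SOCIAL_HOSTS.some(h => r.url.includes(h)))
    .sort((a, b) => a.rank - b.rank);
  return social.length > 0 ? social[0].url : null;
}
```

So for a query like “AT&T,” if Google’s organic results rank the company’s Twitter profile above its Google+ page, the Twitter profile wins the box – which is exactly the point the demo makes.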

Facebook released a video explaining how the hack works, including some rather devastating examples (be sure to watch the AT&T example at minute seven, and a search for my name as well), and it has open sourced the codebase. The video teasingly invites Google to use the code should it care to (er…not gonna happen).

It’d be interesting if millions of people adopted the tool, but I don’t think that’s the point. A story such as this is tailor made for the Techmeme leaderboard, to be sure, and will no doubt be the talk of the Valley today. By tonight, the story will most likely go national, and that can’t help Google’s image. And I’m quite sure the folks at Facebook, Twitter, and others (think LinkedIn, Yahoo, etc.) are making sure word of this exemplar reaches the right folks at the Federal Trade Commission, the Department of Justice, Congress, and government agencies around the world.

Not to mention, people in the Valley do care, deeply, about where they work. There are scores of former Google execs now working at Twitter, Facebook, and others. Many are dismayed by Google’s recent moves, and believe that inside Google, plenty of folks aren’t sleeping well because of their beloved company’s single-minded focus on Google+. “Focus On The User” is a well-timed poke in the eye, a slap to the conscience of a company that has always claimed to be guided by higher principles, and an elegant hack, sure to become legend in the ongoing battle of the Big Five.

As I’ve said before, I’m planning on spending some time with folks at Google in the coming weeks. I’m eager to understand their point of view. Certainly they are playing a longer-term game here – and seem willing, at present, to take the criticism and not respond to the chorus of complaints. Should Google change that stance, I’ll let you know.

Related:

What Might A Facebook Search Engine Look Like?

Google+: Now Serving 90 Million. But…Where’s the Engagement Data!

Our Google+ Conundrum

It’s Not About Search Anymore, It’s About Deals

Hitler Is Pissed About Google+

Google Responds: No, That’s Not How Facebook Deal Went Down (Oh, And I Say: The Search Paradigm Is Broken)

Compete To Death, or Cooperate to Compete?

Twitter Statement on Google+ Integration with Google Search

Search, Plus Your World, As Long As It’s Our World

On “The Corporation,” the Film

By - January 20, 2012

If you read my Predictions for 2012, you’ll recall that #6 was “The Corporation” Becomes A Central Societal Question Mark.

We aren’t very far into the year, and signs of this coming true are all around. The “Occupy” movement seems to have found a central theme for its 2012 efforts: overturning “the corporation as a person.” Some legislators are supporting that concept.

We’ll see if this goes anywhere, but I wanted to note, as I neglected to do in my prediction post, the role that “The Corporation” played in my thinking. I finally watched this 2003 documentary over the holidays. Its promoters still maintain an ongoing community here, and it doesn’t take long to determine that this film has a very strong, classically liberal point of view about the role corporations play in our society.

If you can manage the film’s rather heavy-handed approach to the topic, you’ll learn a lot about how we got to the point we’re at with the Citizens United case. Obviously the film was made well before that case, but it certainly foreshadowed it. I recommend it to anyone who wants the backstory – with a healthy side of scare tactics – of the corporation’s rise in American society.

My next review will be Ray Kurzweil’s The Singularity Is Near, a 2005 book I finished a few weeks ago. I’m currently reading Steven Johnson’s Where Good Ideas Come From: The Natural History of Innovation, which is a pleasure.

Other books I’ve reviewed:

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

What Might A Facebook Search Engine Look Like?

By - January 16, 2012

Dialing in from the department of Pure Speculation…

As we all attempt to digest the implications of last week’s Google+ integration, I’ve also been thinking about Facebook’s next moves. There’s been plenty of speculation in the past that Facebook might compete with Google directly – by creating a full web search engine. After all, with the Open Graph and, in particular, all those Like buttons, Facebook is getting a pretty good proxy of pages across the web, and indexing those pages in some way might prove pretty useful.

But I don’t think Facebook will create a search engine, at least not in the way we think about search today. For “traditional” web search, Facebook can lean on its partner Microsoft, which has a very good product in Bing. I find it more interesting to think about what “search problem” Facebook might solve in the future that Google simply can’t.

And that problem could be the very one Google can’t currently solve for – the same problem that drove Google to integrate Google+ into its main search index: personalized search.

As I wrote over the past week, I believe the dominant search paradigm – that of crawling a free and open web, then displaying the best results for any particular query – has been broken by the rise of Facebook on the one hand, and the app economy on the other. Both of these developments are driven by personalization – the rise of “social.”

Both Facebook and the app economy are invisible to Google’s crawlers. To be fair, there are billions of Facebook pages in Google’s index, but it’s near impossible to “organize them and make them universally available” without Facebook’s secret sauce (its social graph and related logged in data). This is what those 2009 negotiations broke down over, after all.

The app economy, on the other hand, is just plain invisible to everyone. Sure, you can go to one of ten or so app stores and search for apps to use, but you sure can’t search apps the way you search, say, a web site. Why? First, the use case of apps, for the most part, is entirely personal, so apps have not been built to be “searchable.” I find this extremely frustrating – why wouldn’t I want to “Google” the hundreds of rides and runs I’ve logged on my GPS app, as one example?

Secondly, the app economy is invisible to Google because data use policies of the dominant app universe – Apple – make it nearly impossible to create a navigable link economy between apps, so developers simply don’t do it. And as we all know, without a navigable link economy, “traditional” search breaks down.

Now, this link economy may well be rebuilt in a way that can be crawled, through up and coming standards like HTML5 and Telehash. But it’s going to take a lot of time for the app world to migrate to these standards, and I don’t know that open standards like these will necessarily win. Not when there’s a platform that already exists that can tie them together.

What platform is that, you might ask? Why, Facebook, of course.

Stick with me here. Imagine a world where the majority of app builders integrate with Facebook’s Open Graph, instrumenting your personal data through Facebook such that your data becomes searchable. (If you think that’s crazy, remember how most major companies and app services have already fallen all over themselves to leverage Open Graph.) Then, all that data is hoovered into Facebook’s “search index” and integrated with your personal social graph. Facebook then builds an interface to all your app data, adds in your Facebook social graph data, and then perhaps tosses in a side of Bing so you can have the whole web as a backdrop, should you care to.

Voila – you’ve got yourself a truly personalized new kind of search engine. A Facebook search engine, one that searches your world, apps, Facebook and all.
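The merge described above can be sketched, very loosely, as a ranking that puts your personal sources ahead of the generic web backdrop. Every name here is invented for illustration – this is a thought experiment, not anything Facebook has built:

```typescript
// A search result, tagged by which universe it came from.
interface Item { source: "app" | "social" | "web"; text: string; }

// Hypothetical personalized search: your own app data first, then your
// social graph, with generic web results (the "side of Bing") as backdrop.
function searchPersonal(
  query: string,
  appData: Item[],
  social: Item[],
  web: Item[]
): Item[] {
  const match = (i: Item) => i.text.toLowerCase().includes(query.toLowerCase());
  // Personal sources outrank the generic web backdrop.
  return [...appData.filter(match), ...social.filter(match), ...web.filter(match)];
}
```

Under this sketch, a query for “run” would surface your own logged GPS runs and your friends’ posts before any generic web page about running – which is precisely what a crawler-based engine can’t do without access to that personal data.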

Stranger things will probably happen. What do you think?

Update: Facebook’s getting one step closer this week…


It’s Not About Search Anymore, It’s About Deals

By - January 14, 2012

As in, who gets the best deal, why didn’t that deal go down, how do I get a deal, what should the deal terms be?

This is of course in the air given the whole Google+ fracas, but it’s part of a larger framework I’m thinking through and hope to write about. On the issue of “deals,” however, a little sketching out loud seems worthwhile.

Go read this piece: Facebook+Spotify: An ‘Unfair, Insider, Anti-Competitive’ Relationship…

It’s a common lament: A small developer who feels boxed out by whoever got the sweet deal. In this case, it’s on Facebook, but we all know it happens inside the Apple store as well (whoever gets top billing, gets sales).  Closed ecosystems controlled by one company create this dynamic. There’s only so much real estate, and the owner of the land gets to determine the most profitable use of it.

Google now appears to be acting the same way, cutting Google+ a “deal” so to speak, giving it the best real estate for all manner of search queries. That’s not how search was supposed to work. Search was supposed to reflect the ongoing conversation happening across all aspects of the Internet. If you were that small developer, you worked hard to get your service noticed on the web, and as it picked up a following, search would notice, start raising your profile in search results, and a virtuous loop began. Is that concept now dead?

Search isn’t supposed to be about cutting a deal to get your company’s wares to the top of relevant searches. In my reporting over the past week, most of my source conversations have been about failed deals – between Google and Facebook, or Google and Twitter. But search is supposed to be about showing the best results to consumers based on objective (or at least defensible and understandable) parameters, parameters *unrelated to the search engine itself.*

With Google Search Plus Your World (shortened by many to SPYW, which is just laughably bad as an acronym), it’s rather hard to tell the two apart anymore. When I wrote last year that Google = Google+, I meant it from a brand perspective. I didn’t realize how literal it’s become. Because with SPYW, all I’m getting is Google+ at the top of my results. I know I can turn SPYW off, and I probably will. Or, I can bail on Google+ altogether. But there is a real conundrum in doing so – more on that in my next post.

Some are arguing that search is no longer about results anymore – that for years search has pretty much been about paid inclusion anyway (paid through SEO, or through ads, which increasingly don’t look like ads), and that now Google is focusing entirely on getting you an answer, surfacing it right there on the results page. Perhaps the “right answer” is best found through cutting deals.

But I hope not. Because for me, search is a journey, not an answer.

This SPYW story has raised so many questions, it’s rather hard to sort through them all. I guess I’ll just keep writing till I feel like the writing’s done…

Related:

Hitler Is Pissed About Google+

Google Responds: No, That’s Not How Facebook Deal Went Down (Oh, And I Say: The Search Paradigm Is Broken)

Compete To Death, or Cooperate to Compete?

Twitter Statement on Google+ Integration with Google Search

Search, Plus Your World, As Long As It’s Our World

Google Responds: No, That’s Not How Facebook Deal Went Down (Oh, And I Say: The Search Paradigm Is Broken)

By - January 13, 2012

I’ve just been sent an official response from Google to the updated version of my story posted yesterday (Compete To Death, or Cooperate to Compete?). In that story, I reported on 2009 negotiations over incorporation of Facebook data into Google search. I quoted a source familiar with the negotiations on the Facebook side, who told me “Senior executives at Google insisted that for technical reasons all information would need to be public and available to all,” and “The only reason Facebook has a Bing integration and not a Google integration is that Bing agreed to terms for protecting user privacy that Google would not.”

I’ve now had conversations with a source familiar with Google’s side of the story, and to say the company disagrees with how Facebook characterized the negotiations is to put it mildly. I’ve also spoken to my Facebook source, who has clarified some nuance as well. To get started, here’s the official, on the record statement, from Rachel Whetstone, SVP Global Communications and Public Affairs:

“We want to set the record straight. In 2009, we were negotiating with Facebook over access to its data, as has been reported. To claim that we couldn’t reach an agreement because Google wanted to make private data publicly available is simply untrue.”

My source familiar with Google’s side of the story goes further, and gave me more detail on why the deal went south, at least from Google’s point of view. According to this source, as part of the deal terms Facebook insisted that Google agree to not use publicly available Facebook information to build out a “social service.” The two sides had already agreed that Google would not use Facebook’s firehose (or private) data to build such a service, my source says.

So what does “publicly available” mean? Well, that’d be Facebook pages that any search engine can crawl – information on Facebook that people *want* search engines to know about. This is compared to the firehose data that was the core asset being discussed between the parties. This firehose data is what Google would need in order to surface personal Facebook pages relevant to you in the context of a search query. (So, for example, if you were my friend on Facebook, and you searched for “Battelle soccer” on Google, then with the proposed deal, you’d see pictures of my kids’ soccer games that I had posted to Facebook).

Apparently, Google believed that Facebook’s demand around public information could be interpreted as applying to how Google’s own search service was delivered, not to mention how it (or other products) might evolve. Interpretation is always where the devil is in these deals. Who’s to say, after all, that Google’s “social search” is not a “social service”? And Google Pages, Maps, etc. – those are arguably social in nature, or will be in the future.

Google balked at this language, and the deal fell apart. My Google source also disputes the claim that Google balked at being able to technically separate public from private data. Conversely, my Facebook source counters that the real issue of public vs. private had to do with Google’s refusal to honor changes in privacy settings over time – for example, if I deleted those soccer pictures, they should also be deleted from Google’s index. There’s a point where this all devolves to she said/he said, because the deal never happened, and to be honest, there are larger points to make.

So let’s start with this: If Facebook indeed demanded that Google not use publicly available Facebook data, it’s certainly understandable why Google wouldn’t agree to the deal. It may not seem obvious, but there are an awful lot of publicly available Facebook pages and data out there. Starbucks, for example, is more than happy to let anyone see its Facebook page, whether you’re logged in or not. And then there’s all that Facebook Open Graph data out on the public web – tons of sites show Facebook status updates, Like counts and so on in a public fashion. In short, asking Google not to leverage that data in anything that might constitute a “social service” is anathema to a company whose stated mission is to crawl all publicly available information, organize it, and make it universally available.

It’s one thing to ask that Google not use Facebook’s own social graph and private data to build new social services – after all, the social graph is Facebook’s crown jewels. But it’s quite another thing to ask Google to ignore other public information completely.

From Google’s point of view, Facebook was crippling future products and services that Google might create – tantamount to an insurance policy of sorts that Google wouldn’t become a strong competitor, at least not one that leverages public information from Facebook. Google balked. If Facebook’s demand could have been interpreted as also applying to Google’s search results, well, that’s a stone cold deal killer.

I certainly understand why Facebook might ask for what it did; it’s not crazy. Google might well have responded by narrowing the deal, saying “Fine, you don’t build a search engine, and we won’t build a social network. But we should have the right to create other kinds of social services.” As far as I know, Google didn’t choose to say that. (Microsoft apparently did.) And I think I know why: The two companies realized they were dancing on the head of a pin. Search = social, social = search. They couldn’t figure out a way to tease the two apart. Microsoft has cast its lot with Facebook; Google, not so much.

When high stakes deals fall apart, both sides usually claim the other is at fault, and that certainly seems to be the case here. It’s also the case with the Twitter deal, which I’ve gotten a fair amount of new information about as well. I hope to dig into that in another post. For now, I want to pull back a second and comment on what I think is really going on here, at least from the perspective of a longer view.

Our Cherished Search Paradigm Is Broken (But We Will Fix It….Eventually)

I think what we have here is a clear indication that the search paradigm we’ve operated under for a decade or so is broken. That paradigm stems from Google’s original letter to shareholders in 2004. Remember this line? “Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating.”

In many cases, it’s simply naive to claim Google is unbiased or objective. Google often favors its own properties over others, as Danny points out in Real-Life Examples Of How Google’s “Search Plus” Pushes Google+ Over Relevancy and others have also detailed. But there is a reason: if you’re going to show results from all other possible contenders, replete with their associated UI and functional bells and whistles (as Google does with its own Maps, Pages, Plus etc.), well, it’s nearly impossible now to determine which service is the right answer to a particular person’s query. Not to mention, you need to put a deal in place to get all the functionality of the service. Instead, Google has opted, in many cases, to go with their own stuff.

This is not a new idea, by the way. Yahoo’s been doing it this way from the beginning. The contentious issue is that biasing some results toward Google’s own products runs counter to Google’s founding philosophy.

I have a theory as to why all this is happening, and I don’t entirely blame Google. Back when search wasn’t personalized, Google could defensibly say that one service was better than another because it got more traffic, was linked to more (better PageRank), and so on. Back when everyone got the same results and the web was one homogenous glob of HTML, well, you could claim “this is the best result for the general population.” But personalized search has broken that framework – I lamented this back in 2008 with this post: Search Was Our Social Glue. But That Is Dissolving (more here).

With the rise of Facebook and the app economy, the problem of search has become terribly complicated. If you want to have results from Facebook in your search, well, that search service has to do a deal with Facebook. But what if you want results from your running app (I have hundreds of rides and runs logged on AllSportGPS, for example)? Or Instagram? Or Path, for that matter? Do they all have to do deals with Google and Bing? There are so many unconnected pieces of the Internet now (millions of apps, most of our own Facebook experiences, etc. etc.) that what’s a good personal result for one person is not necessarily good for another. If Google is to stay true to its original mission, it needs a new framework and a massive number of new signals – new glue – to put the pieces back together.

There are several ways to resolve this, and in another post, I hope to explore them (one of them, of course, is simply that everyone should just go through Facebook. That’s the vision of Open Graph). But for now, I’m just going to say this: The issues raised by this kerfuffle are far larger than Google vs. Facebook, or Google vs. Twitter. We are in the midst of a major search paradigm shift, and there will be far more tears before it gets resolved. But resolve it must, and resolve it will.