
Is Our Republic Lost?

February 13, 2012

Over the weekend I finished Larry Lessig’s most recent (and ambitious) book, Republic, Lost: How Money Corrupts Congress–and a Plan to Stop It. Amongst those of us who considered Lessig our foremost voice on issues of Internet policy, his abrupt pivot to focus on government corruption was both disorienting and disheartening: here was our best Internet thinker, now tilting at government windmills. I mean, fix government? Take the money out of politics? Better to treat all that as damage, and route around it, right? Isn’t that what the Internet is supposed to be all about?

Well, maybe. But after the wake-up call that was PIPA/SOPA, it’s become clear why Lessig decided to stop focusing on battles he felt he couldn’t win (reforming copyright law, for example), and instead aim his intellect at the root causes of why those battles were fruitless. As he writes in his preface:

I was driven to this shift when I became convinced that the questions I was addressing in the fields of copyright and Internet policy depended upon resolving the policy questions – the corruption – that I address (in Republic Lost).

Lessig, ever the lawyer at heart, presents his book as an argument, as well as a call to arms (more on that at the end). Early on he declares our country ruined, “poisoned” by an ineffective government, self-serving corporations, and an indifferent public. To be honest, it was hard to get through the first couple of chapters of Republic Lost without feeling like I was being lectured to on a subject I already acknowledged: Yes, we have a corrupt system, yes, lobbyists are in league with politicians to bend the law toward their clients’ bottom lines, and yes, we should really do something about it.

But Lessig does make a promise, and in the book he keeps it: To identify and detail the “root” of the problem, and offer a prescription (or four) to address it. And yes, that root is corruption, in particular the corruption of money, but Lessig takes pains to define a particular kind of corruption. Contrary to popular sentiment, Lessig argues, special interest money is not directly buying votes (after all, that is illegal). Instead, an intricate “gift economy” has developed in Washington, one that is carefully cultivated by all involved, and driven by the incessant need of politicians to raise money so as to ensure re-election.

Lessig calls this “dependency corruption” – politicians are dependent on major donors not only to be elected, but to maintain the lifestyle that attends being a US Congressperson. Lessig also points out how more than half of our representatives end up as lobbyists after serving – at salaries two to ten times those of a typical Congressperson (he also points out that we grossly underpay our representatives, compared to how they’d be remunerated for their talents in the private sector).

Lessig likens this dependency corruption to alcoholism – it “develops over time; it sets a pattern of interaction that builds upon itself; it develops a resistance to breaking that pattern; it feeds a need that some find easier to resist than others; satisfying that need creates its own reward; that reward makes giving up the dependency difficult; for some, it makes it impossible.”

In short, Lessig says Washington DC is full of addicts, and if we’re to fix anything – health care, energy policy, education, social security, financial markets – we first have to address our politicians’ addiction to money, and our economic system’s enablement of that addiction. Because, as Lessig demonstrates in several chapters devoted to broken food and energy markets, broken schools, and broken financial systems, the problem isn’t that we can’t fix the problem. The problem, Lessig argues, is that we’re paying attention to the wrong problem.

Lessig’s argument essentially concludes that we’ve created a system of government that rewards policy failure – the bigger the issue, the stronger the lobbyists on one or even both sides, forcing Congress into a position of moral hazard – it can ensure the most donations by threatening regulation one way or the other, collecting from both sides along the way. Lessig salts his argument with example after example of how the system fails at real reform due to the “money dance” each congressperson must perform.

It’s pretty depressing stuff. And yet – there are no truly evil characters here. In fact, Lessig makes quite the point of this: we face a corruption of “decent souls,” of “good people working in a corrupted system.”

Despite Lessig’s avowed liberal views (combined with his conservative, Reagan-era past), I could imagine that Republic Lost could as easily be embraced by Tea Party fanatics as by Occupy Wall Street organizers. He focuses chapters on how “so damn much money” defeats the ends of both the left and the right, for example. And at times the book reads like an indictment of the Obama administration – Lessig, like many of us, believed that Obama was truly going to change Washington, then watched aghast as the new administration executed the same political playbook as every other career politician.

In the final section of his book, Lessig offers several plans to force fundamental campaign finance reform – the kind of reform that the majority of us seem to want, but that never seems to actually happen. Lessig acknowledges how unlikely it is that Congress would vote itself out of a system to which it is addicted, and offers some political gymnastics that have almost no chance of working (running a candidate for President who vetoes everything until campaign finance reform is passed, then promises to quit, for example).

The plan that has gotten the most attention is the “Grant and Franklin Project” – a plan to finance all candidacies for Congressional office through public funds. He suggests that the first fifty dollars of any Federal tax revenue (per person per year) be retained to fund political campaigns, then allocated by each of us as a voucher of sorts. In addition, we’d all be able to commit another $100 of our own money to any candidate we choose. Uncommitted funds go to our parties (if we do not actively wish to use our voucher). Any candidate can tap these resources, but only if that candidate agrees to take only vouchers and $100 contributions (bye bye, corporate and PAC money). Lessig calculates the revenues of this plan would be well above the billions spent to elect politicians in our current system, and argues that the savings in terms of government pork would pay forward the investment many times over.
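To get a feel for the scale here, a back-of-the-envelope sketch of the voucher math (in TypeScript). The $50 and $100 figures are Lessig’s; the taxpayer count and opt-in rate are my own illustrative assumptions, not numbers from the book:

    // Rough sketch of the Grant and Franklin Project's potential funding pool.
    const VOUCHER = 50;  // first $50 of each person's federal taxes, returned as a voucher
    const TOPUP = 100;   // optional additional contribution per person

    function estimatePool(taxpayers: number, topupRate: number): number {
      // Every voucher lands somewhere (a candidate, or the voter's party by
      // default); some fraction of voters also commit the optional $100.
      return taxpayers * VOUCHER + taxpayers * topupRate * TOPUP;
    }

    // Assuming ~140 million federal taxpayers and a 10% opt-in on the top-up:
    console.log(`~$${(estimatePool(140_000_000, 0.1) / 1e9).toFixed(1)}B per cycle`); // ~$8.4B

Even with conservative assumptions, the pool lands comfortably in the billions – which is exactly the comparison Lessig is making.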

Lessig ends his book with a call to action – asking us to become “rootstrikers,” to get involved in bringing about the Grant and Franklin Project, or something like it (he goes into detail on a Constitutional convention as a means to the end, for example). And it’s here where I begin to lose the thread. On the one hand, I’m deeply frustrated by the problem Lessig outlines (I wrote about it here: On The Problem of Money, Politics, and SOPA), but I’m also suspicious of any new “group” that I need to join – I find “activist” organizations tend to tilt toward unsustainable rhetoric. I’m not an activist by nature, but then again, perhaps it’s not activism Lessig is asking for. Perhaps it’s simply active citizenship.

I could see myself getting behind that. How about you?

####

Other works I’ve reviewed:

Where Good Ideas Come From: A Natural History of Innovation by Steven Johnson (my review)

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (my review)

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)


Larry Page’s “Tidal Wave Moment”?

February 07, 2012

Who remembers the moment, back in 1995, when Bill Gates wrote his famous Internet Tidal Wave Memo? In it he rallied his entire organization to the cause of the Internet, calling the new platform an existential threat/opportunity for Microsoft’s entire business. In the memo Gates wrote:

“I assign the Internet the highest level of importance. In this memo I want to make clear that our focus on the Internet is crucial to every part of our business. The Internet is the most important single development to come along since the IBM PC was introduced in 1981.”

The memo runs more than 5300 words and includes highly detailed product plans across all of Microsoft. In retrospect, it probably wasn’t a genius move to be so transparent – the memo became public during the US Dept. of Justice action against Microsoft in the late 1990s.

It strikes me that Larry Page at Google could have written such a memo to all Googlers last year. Of course, Page and his advisors must have learned from Microsoft’s mistakes, and certainly don’t want a declarative memo floating around the vast clouds of Internet eternity. Bad things can happen from direct mandates such as those made by Gates – in the memo he mentions that Microsoft must “match and beat” Netscape, for example, words that came back to haunt him during the DOJ action.

Here’s what Page might have written to his staff in 2011, with just a few words shifted:

“I assign social networking the highest level of importance. In this memo I want to make clear that our focus on social networking is crucial to every part of our business. Social networking is the most important single development to come along since Google was introduced in 1998.”

I very much doubt Page wrote anywhere that Google must “match and beat” Facebook. And unlike Gates, he probably did not pen detailed memos about integrating Google+ into all of Google’s products (as Gates did – for pages – declaring that Microsoft must integrate the Internet into all of its core products.)

But it’s certainly not lost on any Googler how important “social” is to the company: all of their bonuses were tied to social last year.

So why am I bringing this up now? Well, I’ve got no news hook. I’m just doing research for the book, and came across the memo, and its tone and urgency struck a familiar note. The furor around Search Plus Your World has died down, but it left a bad taste in a lot of folks’ mouths. But put in the context of “existential threat,” it’s easier to understand why Google did what it did.

Unlike the Internet, which was a freely accessible resource that any company could incorporate into its products and services, to date “social” has been dominated by one company, a company that Google has been unable to work with. Imagine if, when Gates wrote his Tidal Wave memo, the “Internet” he spoke of was controlled entirely by, say, MCI, and that Microsoft was unable to secure a deal to get all that Internet goodness into its future products.

That seems to be where Google finds itself, at least by its own reckoning. To continue being a great search engine, it needs the identity and relationship data found, for the most part, behind Facebook’s walls.

I’ve written elsewhere about the breakdown of the open web, the move toward more “walled gardens of data,” and what that does to Google’s ability to execute its core business of search. And it’s not just social – readers have sent me tons of information predicting how mobile, in particular, will escape the traditional reach of Google’s spidering business model. I hope to pore over that information and post more here, but for now, it’s worth reading a bit of history to put Google’s moves into broader context.

Now All That’s Left Is To Write It


For posterity, if nothing else, here’s what my desk looks like at the moment. After a particularly enlightening whiteboard session with Steven Johnson late last week, then further musings on the back of bar menus and borrowed receipt-tape with my wife, and finally after waking up and scribbling notes in the middle of the night, I finally have a working outline of The Next Book. No, it’s not supposed to make much sense. Yet.

Now all I have to do is write it.

Do You Think The US Government Is Monitoring Social Media?

February 03, 2012

I had the news on in the background while performing morning ablutions. It was tuned to CBS This Morning – Charlie Rose has recently joined the lineup and my wife, a former news producer, favors both Rose and the Tiffany Network. But the piece that was running as I washed the sleep from my eyes was simply unbelievable.

It was about the two unfortunate British tourists detained by Homeland Security over jokes on Twitter about “destroying America” (a colloquialism for partying – think “tear up the town”) and “digging up Marilyn Monroe” whilst in Hollywood. DHS cuffed the poor kids and tossed them in a detention center with “inner city criminals,” according to reports, then sent them back home. Access denied. (I tweeted the story when it happened, then forgot about it.)

Silly stuff, but also serious – I mean, if DHS can’t tell a 140-character colloquialism from a real threat… (slap forehead now). CBS had managed to get an interview with the unfortunate couple, who were back in the UK and will most likely never be able to travel here again.

The interview wasn’t what woke me up this morning – it was what CBS’s “Terrorism Expert” had to say afterwards. Apparently Homeland Security claims it is NOT monitoring Twitter and other social media; instead, it got a “tip” about the tweets, and that’s why the couple was detained. The on-air “expert,” who used to run counter-terror for the LAPD and was an official at DHS as well, was asked point blank if the US Government was “monitoring social media.” He flatly denied it. (His comments, oddly, were cut out of the piece that’s now on the web and embedded above.)

I do not believe him. Do you? And if they really are not – why not? Shouldn’t they be? I was curious about your thoughts, so here’s a poll:


And then, here’s the next one. Regardless of whether you think it actually IS monitoring….



It’s Not Whether Google’s Threatened. It’s Asking Ourselves: What Commons Do We Wish For?

February 02, 2012

If Facebook’s IPO filing does anything besides mint a lot of millionaires, it will be to shine a rather unsettling light on a fact most of us would rather not acknowledge: The web as we know it is rather like our polar ice caps – under severe, long-term attack by forces of our own creation.

And if we lose the web, well, we lose more than funny cat videos and occasionally brilliant blog posts. We lose a commons, an ecosystem, a “tangled bank” where serendipity, dirt, and iterative trial and error drive open innovation. Google’s been the focus of most of this analysis (hell, I called Facebook an “existential threat” to Google on Bloomberg yesterday), but I’d like to pull back for a second.

This post has been brewing in me for a while, but I was moved to start writing after reading this piece in Time:

Is Google In Danger of Being Shut Out of the Changing Internet?

The short answer is Hell Yes. But while I’m a fan of Google (for the most part), to me the piece is focused too narrowly on what might happen to one company, rather than to the ecosystem which allowed that company to thrive. It does a good job of outlining the challenges Google faces, which are worth recounting (and expanding upon) as a proxy for the larger question I’m attempting to elucidate:

1. The “old” Internet is shrinking, and being replaced by walled gardens over which Google’s crawlers can’t climb. Sure, Google can crawl Facebook’s “public pages,” but those represent a tiny fraction of the “pages” on Facebook, and are not informed by the crucial signals of identity and relationship which give those pages meaning. Similarly, Google can crawl the “public pages” of Apple’s iTunes store on the web, but all the value creation in the mobile iOS appworld is behind the walls of Fortress Apple. Google can’t see that information, can’t crawl it, and can’t “make it universally available.” Same for Amazon with its Kindle universe, Microsoft’s Xbox and mobile worlds, and many others.

2. Google’s business model depends on the web remaining open, and given #1 above, that model is imperiled. It’s damn hard to change business models, but with Google+ and Android, the company is trying. The author of the Time piece is skeptical of Google’s chances of recreating the Open Web with these new tools, however.

He makes a good point. But to me, the real issue isn’t whether Google’s business model is under attack by forces outside its control. Rather, the question is far more existential in nature: What kind of a world do we want to live in?

I’m going to say that again, because it’s worth really considering: What kind of a world do we want to live in? As we increasingly leverage our lives through the world of digital platforms, what are the values we wish to hold in common? I wrote about this issue a month or so ago: On This Whole “Web Is Dead” Meme. In that piece I outlined a number of core values that I believe are held in common when it comes to what I call the “open” or “independent” web. They also bear repeating (I go into more detail in the post, should you care to read it):

– No gatekeepers. The web is decentralized. Anyone can start a web site. No one has the authority (in a democracy, anyway) to stop you from putting up a shingle.

– An ethos of the commons. The web developed over time under an ethos of community development, and most of its core software and protocols are royalty free or open source (or both). There wasn’t early lockdown on what was and wasn’t allowed. This created chaos, shady operators, and plenty of dirt and dark alleys. But it also allowed extraordinary value to blossom in that roiling ecosystem.

– No preset rules about how data is used. If one site collects information from or about a user of its site, that site has the right to do other things with that data, assuming, again, that it’s doing things that benefit all parties concerned.

– Neutrality. No one site on the web is any more or less accessible than any other site. If it’s on the web, you can find it and visit it.

– Interoperability. Sites on the web share common protocols and principles, and determine independently how to work with each other. There is no centralized authority which decides who can work with whom, in what way.

I find it hard to argue with any of the points above as core values of how the Internet should work. And it is these values that created Google and allowed the company to become the world beater it has been these past ten or so years. But if you look at this list of values, and ask if Apple, Facebook, Amazon, and the thousands of app makers align with them, I am afraid the answer is mostly no. And that’s the bigger issue I’m pointing to: We’re slowly but surely creating an Internet that is abandoning its original values for…well, for something else that as yet is not well defined.

This is why I wrote Put Your Taproot Into the Independent Web. I’m not out to “save Google,” I’m focused on trying to understand what the Internet would look like if we don’t pay attention to our core shared values.

And it’s not fair to blame Apple, Facebook, Amazon, or app makers here. In conversations with various industry folks over the past few months, it’s become clear that there are more than business model issues stifling the growth of the open web. In no particular order, they are:

1. Engineering. It’s simply too hard to create super-great experiences on the open web. For many high value products and services, HTML and its associated scripting languages, including HTML5, are messy and incomplete, and not as fast, clean, and elegant as coding for iOS or the Facebook ecosystem. I’ve heard this over and over again. This means developers are drawn to the Apple universe first, web second. Value accrues where engineering efforts pay off in a more compelling user experience.

2. Mobility. The PC-based HTML web is hopelessly behind mobile in any number of ways. It has no eyes (camera), no ears (audio input), no sense of place (GPS/location data). Why would anyone want to invest in a web that’s deaf, dumb, blind, and stuck in one place?

3. Experience. The open web is full of spam, shady operators, and blatant falsehoods. Outside of a relatively small percentage of high quality sites, most of the web is chock full of popup ads and other interruptive come-ons. It’s nearly impossible to find signal in that noise, and the web is in danger of being overrun by all that crap. In the curated gardens of places like Apple and Facebook, the weeds are kept to a minimum, and the user experience is just…better.

So, does that mean the Internet is going to become a series of walled gardens, each subject to the whims of that garden’s liege?

I don’t think so. Scroll up and look at that set of values again. I see absolutely no reason why they cannot and should not be applied to how we live our lives inside the worlds of Apple, Facebook, Amazon, and the countless apps we have come to depend upon. But it requires a shift in our relationship to the Internet. It requires that we, as the co-creators of value through interactions, data, and sharing, take responsibility for ensuring that the Internet continues to be a commons.

I expect this will be less difficult than it sounds. It won’t take a political movement or a wholesale migration from Facebook to more open services. Instead, I believe in the open market of ideas, of companies and products and services which identify the problems I’ve outlined above and begin to address them through innovative new approaches. I believe in the Internet. Always have, and always will.

Related:

Predictions 2012 #4: Google’s Challenging Year

We Need An Identity Re-Aggregator (That We Control)

Set The Data Free, And Value Will Follow

A Report Card on Web 2 and the App Economy

The InterDependent Web

On This Whole “Web Is Dead” Meme

Where Good Ideas Come From: A Tangled Bank

January 31, 2012

After pushing my way through a number of difficult but important reads, it was a pleasure to rip through Steven Johnson’s Where Good Ideas Come From: A Natural History of Innovation. I consider Steven a friend and colleague, and that will color my review of his most recent work (it came out in paperback last Fall). In short, I really liked the book. There, now Steven will continue to accept my invitations to lunch…

Steven is the author of seven books, and I admire his approach to writing. He mixes story with essay, and has an elegant, spare style that I hope to emulate in my next book. If What We Hath Wrought is compared to his work, I’ll consider that a win.

Where Good Ideas Come From is an interesting, fast-paced read that outlines the kinds of environments which spawn world-changing ideas. In a sense, this book is the summary of “lessons learned” from several of Johnson’s previous books, which go deep into one really big idea – The Invention of Air, for example, or the discovery of a cure for cholera. It’s also a testament to another of Johnson’s obsessions – the modern city, which he points out is a far more likely seedbed of great ideas than the isolated suburb or cabin-on-a-lake-somewhere.

Johnson draws a parallel between great cities and the open web – both allow for many ideas to bump up against each other, breed, and create new forms. 

Some environments squelch new ideas; some environments seem to breed them effortlessly. The city and the Web have been such engines of innovation because, for complicated historical reasons, they are both environments that are powerfully suited for the creation, diffusion, and adoption of good ideas.

While more than a year old, Where Good Ideas Come From is an important and timely book, because the conclusions Johnson draws are instructive to the digital world we are building right now – will it be one that fosters what Zittrain calls generativity, or are we feeding ecosystems that are closed in nature? Johnson writes:

…openness and connectivity may, in the end, be more valuable to innovation than purely competitive mechanisms. Those patterns of innovation deserve recognition—in part because it’s intrinsically important to understand why good ideas emerge historically, and in part because by embracing these patterns we can build environments that do a better job of nurturing good ideas…

…If there is a single maxim that runs through this book’s arguments, it is that we are often better served by connecting ideas than we are by protecting them. ….when one looks at innovation in nature and in culture, environments that build walls around good ideas tend to be less innovative in the long run than more open-ended environments. Good ideas may not want to be free, but they do want to connect, fuse, recombine. They want to reinvent themselves by crossing conceptual borders. They want to complete each other as much as they want to compete.

I couldn’t help but think of the data and commercial restrictions imposed by Facebook and Apple as I read those words. As I’ve written over and over on this site, I’m dismayed by the world we’re building inside Apple’s “appworld,” on the one hand, and the trend toward planting our personal and corporate taproots too deeply in the soils of Facebook, on the other. Johnson surveys centuries of important, world changing ideas, often relating compelling personal narratives on the way to explaining how those ideas came to be not through closed, corporate R&D labs, but through unexpected collisions between passions, hobbies, coffee house conversations, and seeming coincidence. If you’re ever stuck, Johnson advises, go outside and bump into things for a while. I couldn’t agree more.

One concept Johnson elucidates is the “adjacent possible,” a theory attributed to biologist Stuart Kauffman. In short, the adjacent possible is the space inhabited by “what could be” based on what currently is. In biology and chemistry, for example, it’s the potential for various combinations of molecules to build self-replicating proteins. When that occurs, new adjacent possibilities open up, to the point of an explosion in life and order.

Johnson applies this theory to ideas, deftly demonstrating how Darwin’s fascination with the creation of coral reefs led – over years – to what is perhaps the most powerful idea of modernity – evolution. He concludes that while most of us understand Darwin’s great insight as mostly about “survival of the fittest,” perhaps its greatest insight is how it has “revealed the collaborative and connective forces at work in the natural world.” Darwin’s famous metaphor for this insight is the tangled bank:

It is interesting to contemplate a tangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent upon each other in so complex a manner, have all been produced by laws acting around us. . .

Johnson also extols the concept of “liquid networks,” where information flows freely between many minds, and of “slow hunches,” where ideas develop over long periods of time, as well as the importance of noise, serendipity, and error in the development of good ideas. He explores “exaptation” – the repurposing of one idea for another use – and the concept of “platforms” that allow each of these concepts, from liquid networks to serendipity and exaptation, to blossom (Twitter is cited as such a platform).

Johnson concludes:

Ideas rise in crowds, as Poincaré said. They rise in liquid networks where connection is valued more than protection. So if we want to build environments that generate good ideas—whether those environments are in schools or corporations or governments or our own personal lives—we need to keep that history in mind, and not fall back on the easy assumptions that competitive markets are the only reliable source of good ideas. Yes, the market has been a great engine of innovation. But so has the reef.

Amen, I say. I look forward to our great tech companies – Apple and Facebook amongst them – becoming more tangled bank than carefully pruned garden.

A nice endcap to the book is a survey Johnson took of great ideas across history. He places each idea on an XY grid according to whether it was generated by an individual or a network of individuals (the X axis), and in a commercial or non-commercial environment (the Y axis). The results are pretty clear: ideas thrive in “non-market/networked” environments.

Johnson's chart of major ideas emerging during the 19th and 20th centuries

This doesn’t mean those ideas don’t become the basis for commerce – quite the opposite in fact. But this is a book about how good ideas are created, not how they might be exploited. And we’d be well advised to pay attention to that as we consider how we organize our corporations, our governments, and ourselves – we have some stubborn problems to solve, and we’ll need a lot of good ideas if we’re going to solve them.

Highly recommended.

Next up on the reading list: Inside Apple: How America’s Most Admired–and Secretive–Company Really Works by Adam Lashinsky, and Republic, Lost: How Money Corrupts Congress–and a Plan to Stop It, by Larry Lessig.

####

Other works I’ve reviewed:

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (my review)

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

 

What Happens When Sharing Is Turned Off? People Don’t Dance.

January 30, 2012

One of only two photos to emerge from last night's Wilco concert, image Eric Henegen

Last night my wife and I did something quite rare – we went to a concert on a Sunday night, in San Francisco, with three other couples (Wilco, playing at The Warfield). If you don’t have kids and don’t live in the suburbs, you probably think we’re pretty lame, and I suppose compared to city dwellers, we most certainly are. But there you have it.

So why am I telling you about it? Because something odd happened at the show: Wilco enforced a “no smartphone” rule. Apparently lead singer Jeff Tweedy hates looking out at the audience and seeing folks waving lit phones back at him. Members of the Warfield staff told me they didn’t like the policy, but they enforced it – quite strictly, I might add. It created a weird vibe – folks didn’t even take out their phones for fear they might be kicked out for taking a picture of the concert. (A couple of intrepid souls did sneak a pic in, as you can see at left…)

And… no one danced, not till the very end, anyway. I’ve seen Wilco a few times, and I’ve never seen a more, well, motionless crowd. But more on that later.

Now, I have something of a history when it comes to smart phones and concerts. Back in 2008 I was a founding partner in a new kind of social music experiment we called “CrowdFire.” In my post explaining the idea, I wrote:

Over the course of several brainstorming sessions… an idea began to take shape based on a single insight: personal media is changing how we all experience music. (when I was at Bonnaroo in 2007), everyone there had a cell phone with a camera. Or a Flip. Or a digital camera. And when an amazing moment occurred, more folks held up their digital devices than they did lighters. At Bonnaroo, I took a picture that nails it for me – the image at left. A woman capturing an incredible personal memory of an incredible shared experience (in this case, it was Metallica literally blowing people’s minds), the three screens reflecting the integration of physical, personal, and shared experiences. That image informed our logo, as you can see (below).

So – where did all those experiences go (Searchblog readers, of course, know I’ve been thinking about this for a while)? What could be done with them if they were all put together in one place, at one time, turned into a great big feed by a smart platform that everyone could access? In short, what might happen if someone built a platform to let the crowd – the audience – upload their experiences of the music to a great big database, then mix, mash, and meld them into something utterly new?

Thanks to partners like Microsoft, Intel, SuperFly, Federated Media and scores of individuals, CrowdFire actually happened at Outside Lands, both in 2008 and in 2009. It was a massive effort – the first year literally broke AT&T’s network. But it was clear we were onto something. People want to capture and share the experience of being at a live concert, and the smart phone was clearly how they were now doing it.

It was the start of something – brainstorming with several of my friends prior to CrowdFire’s birth, we imagined a world where every shareable experience became data that could be recombined to create fungible alternate realities. Heady stuff, stuff that is still impossible, but I feel will eventually become our reality as we careen toward a future of big data and big platforms.

Since those early days, the idea of CrowdFire has certainly caught on. In early 2008, we had to build the whole platform from scratch, but now, folks use services like Instagram, Twitter, Facebook, and Foursquare to share their experiences. Many artists share back, sending out photos and tweets from on stage. Most major festivals and promoters have some kind of fan photo/input service that they promote as well. CrowdFire was a great idea, and maybe, had I not been overwhelmed with running FM, we might have turned it into a real company/service that could have integrated all this output and created something big in the world. But it was a bit ahead of its time.

What has happened since that first Outside Lands is that at every concert I’ve attended, I’ve noticed the crowd’s increasing connection to their smart phones – taking pictures, group texting, tweeting, and sharing the moments with their extended networks across any number of social services. It’s hard to find an experience more social than a big concert, and the thousands of constantly lit smartphone screens are a testament to that fact, as are the constant streams of photos and status updates coming out of nearly every show I’ve seen, or followed enviously online.

Which brings me back to last night. I was unaware of the policy, so as Wilco opened at the sold-out Warfield, something felt off to me. Here were two thousand San Francisco hipsters, all turned attentively toward the stage – but most of them had their hands in their pockets! As the band went into the impossible-not-to-move-to “Art of Almost” and “I Might,” I started wondering what was up – why weren’t people at least swaying?! The music was extraordinary, the sound system perfectly tuned. But everyone seemed very intent on…well…being intent. They stared forward, hands in pocket, nodded their heads a bit, but no one danced. It was a rather odd vibe. It was as if the crowd had been admonished to not be too … expressive.

Then it hit me. Nobody had their phone out. I turned to a security guard and asked why no one was holding up a phone. That’s when I learned of Wilco’s policy.

It seemed to me that the rule had the unintended consequence of muting the crowd’s ability to connect to the joy of the moment. Odd, that. We’re so connected to these devices and their ability to reflect our own sense of self that when we’re deprived of them, we feel somehow less…human.

My first reaction was “Well, this sucks,” but on second thought, I got why Tweedy wanted his audience to focus on the experience in the room, instead of watching and sharing it through the screens of their smartphones. By the encore, many people were dancing – they had loosened up. But in the end, I’m not sure I agree with Wilco – they’re fighting the wrong battle (and losing extremely valuable word of mouth in the process, but that’s another post).

There are essentially two main reasons to hold a phone up at a show. First, to capture a memory for yourself, a reminder of the moment you’re enjoying. And second, to share that moment with someone – to express your emotions socially. Both seem perfectly legitimate to me. (I’m not down with doing email or taking a call during a show, I’ll admit).

But the smart phone isn’t a perfect device, as we all know. It forces the world into a tiny screen. It runs out of battery, bandwidth, and power. It distracts us from the world around us. There are too many steps – too much friction – between capturing the things we are experiencing right now and the sharing of those things with people we care about.

But I sense that the sea of smart phones lit up at concerts is a temporary phenomenon. The integration of technology, sharing, and social into our physical world, on the other hand, well that ain’t going away. In the future, it’s going to be much harder to enforce policies like Wilco’s, because the phone will be integrated into our clothing, our jewelry, our eyeglasses, and possibly even ourselves. When that happens – when I can take a picture through my glasses, preview it, then send it to Instagram using gestures from my fingers, or eyeblinks, or a wrinkle of my nose – when technology becomes truly magical – asking people to turn it off is going to be the equivalent of asking them not to dance – to not express their joy at being in the moment.

And why would anyone want to do that?

The Future of War (From Jan. 1993 to the Present)

January 24, 2012

(image is a shot of my copy of the first Wired magazine, signed by our founding team)

I just read this NYT piece on the United States’ approach to unmanned warfare: Do Drones Undermine Democracy? From it:

There is not a single new manned combat aircraft under research and development at any major Western aerospace company, and the Air Force is training more operators of unmanned aerial systems than fighter and bomber pilots combined. In 2011, unmanned systems carried out strikes from Afghanistan to Yemen. The most notable of these continuing operations is the not-so-covert war in Pakistan, where the United States has carried out more than 300 drone strikes since 2004.

Yet this operation has never been debated in Congress; more than seven years after it began, there has not even been a single vote for or against it. This campaign is not carried out by the Air Force; it is being conducted by the C.I.A. This shift affects everything from the strategy that guides it to the individuals who oversee it (civilian political appointees) and the lawyers who advise them (civilians rather than military officers).

It also affects how we and our politicians view such operations. President Obama’s decision to send a small, brave Navy Seal team into Pakistan for 40 minutes was described by one of his advisers as “the gutsiest call of any president in recent history.” Yet few even talk about the decision to carry out more than 300 drone strikes in the very same country.

Read the whole piece. Really, read it. If any article in the past year or so does a better job of displaying how what we’ve built with technology is changing the essence of our humanity, I’d like to read it.

For me, this was a pretty powerful reminder. Why? Because we put the very same idea on display as the very first cover story of Wired, nearly 20 years ago. Written by Bruce Sterling, whose star has only become brighter in the past two decades, it predicts the future of war with an eerie accuracy. In the article, Sterling describes “modern Nintendo training for modern Nintendo war.” Sure, if he were all-seeing, he might have said Xbox, but still…here are some quotes from nearly 20 years ago:

The omniscient eye of computer surveillance can now dwell on the extremes of battle like a CAT scan detailing a tumor in a human skull. This is virtual reality as a new way of knowledge: a new and terrible kind of transcendent military power.

…(Military planners) want a pool of contractors and a hefty cadre of trained civilian talent that they can draw from at need. They want professional Simulation Battle Masters. Simulation system operators. Simulation site managers. Logisticians. Software maintenance people. Digital cartographers. CAD-CAM designers. Graphic designers.

(Ed: Like my son playing Call of Duty?)

And it wouldn’t break their hearts if the American entertainment industry picked up on their interactive simulation network technology, or if some smart civilian started adapting these open-architecture, virtual-reality network protocols that the military just developed. The cable TV industry, say. Or telephone companies running Distributed Simulation on fiber-to-the-curb. Or maybe some far-sighted commercial computer-networking service. It’s what the military likes to call the “purple dragon” angle. Distributed Simulation technology doesn’t have to stop at tanks and aircraft, you see. Why not simulate something swell and nifty for civilian Joe and Jane Sixpack and the kids? Why not purple dragons?

(Ed: Skyrim, anyone?!)

Can governments really exercise national military power – kick ass, kill people – merely by using some big amps and some color monitors and some keyboards, and a bunch of other namby-pamby sci-fi “holodeck” stuff?

The answer is yes.

Say you are in an army attempting to resist the United States. You have big tanks around you, and ferocious artillery, and a gun in your hands. And you are on the march.

Then high-explosive metal begins to rain upon you from a clear sky. Everything around you that emits heat, everything around you with an engine in it, begins to spontaneously and violently explode. You do not see the eyes that see you. You cannot know where the explosives are coming from: sky-colored Stealths invisible to radar, offshore naval batteries miles away, whip-fast and whip-smart subsonic cruise missiles, or rapid-fire rocket batteries on low-flying attack helicopters just below your horizon. It doesn’t matter which of these weapons is destroying your army – you don’t know, and you won’t be told, either. You will just watch your army explode.

Eventually, it will dawn on you that the only reason you, yourself, are still alive, still standing there unpierced and unlacerated, is because you are being deliberately spared. That is when you will decide to surrender. And you will surrender. After you give up, you might come within actual physical sight of an American soldier.

Eventually you will be allowed to go home. To your home town. Where the ligaments of your nation’s infrastructure have been severed with terrible precision. You will have no bridges, no telephones, no power plants, no street lights, no traffic lights, no working runways, no computer networks, and no defense ministry, of course. You have aroused the wrath of the United States. You will be taking ferries in the dark for a long time.

Now imagine two armies, two strategically assisted, cyberspace-trained, post-industrial, panoptic ninja armies, going head-to-head. What on earth would that look like? A “conventional” war, a “non-nuclear” war, but a true War in the Age of Intelligent Machines, analyzed by nanoseconds to the last square micron.

Who would survive? And what would be left of them?

Who indeed.

The Singularity Is Weird

January 23, 2012

It’s been a while since I’ve posted a book review, but that doesn’t mean I’ve not been reading. I finished two tomes over the past couple of weeks, Ray Kurzweil’s The Singularity Is Near, and Steven Johnson’s Where Good Ideas Come From. I’ll focus on Kurzweil’s opus in this post.

Given what I hope to do in What We Hath Wrought, I simply had to read Singularity. I’ll admit I’ve been avoiding doing so (it’s nearly six years old now) mainly for one reason: The premise (as I understood it) kind of turns me off, and I’d heard from various folks in the industry that the book’s author was a bit, er, strident when it came to his points of view. I had read many reviews of the book (some mixed), and I figured I knew enough to get by.

I was wrong. The Singularity Is Near is not an easy book to read (it’s got a lot of deep and loosely connected science, and the writing could really use a few more passes by a structural editor), but it is an important one to read. As Kevin Kelly said in What Technology Wants, Kurzweil has written a book that will be cited over and over again as our culture attempts to sort out its future relationship to technology, policy, and yes, to God.

I think perhaps the “weirdness” vibe of Kurzweil’s work relates, in the end, to his rather messianic tone – he’s not afraid to call himself a “Singulatarian” and to claim this philosophy as his religion. I don’t know about you, but I’m wary of anyone who invents a new religion and then proclaims themselves its leader.

That’s not to say Kurzweil doesn’t have a point or two. The main argument of the book is that technology is moving far faster than we realize, and its exponential progress will surprise us all – within about thirty years, we’ll have the ability to not only compute most of the intractable problems of humanity, we’ll be able to transcend our bodies, download our minds, and reach immortality.

Or, a Christian might argue, we could just wait for the rapture. My problem with this book is that it feels about the same in terms of faith.

But then again, faith is one of those Very Hard Topics that most of us struggle with. And if you take this book at face value, it will force you to address that question. Which to me, makes it a worthy read.

For example, Kurzweil has faith that, as machines get smarter than humans, we’ll essentially merge with machines, creating a new form of humanity. Our current form is merely a step along the way to the next level of evolution – a level where we merge our technos with our bios, so to speak. Put another way, compared to what we’re about to become, we’re the equivalent of Homo erectus right about now.

It’s a rather compelling argument, but a bit hard to swallow, for many reasons. We’re rather used to evolution taking a very long time – hundreds of generations, at the very least. But Kurzweil is predicting all this will happen in the next one or two generations – and should that occur, I’m pretty sure far more minds will be blown than merged.

And Kurzweil has a knack for taking the provable tropes of technology – Moore’s Law, for example – and applying them to all manner of things, like human intelligence, biology, and, well, rocks (Kurzweil calculates the computing power of a rock in one passage). I’m in no way qualified to say whether it’s fair to directly apply lessons learned from technology’s rise to all things human, but I can say it feels a bit off. Like perhaps he’s missing a high order bit along the way.

Of course, that could just be me clinging to my narrow-minded and entitled sense of Humanity As It’s Currently Understood. Now that I’ve read Kurzweil’s book, I’m far more aware of my own limitations, philosophically as well as computationally. And for that, I’m genuinely grateful.

Other works I’ve reviewed:

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

Facebook Coalition To Google: Don’t Be Evil, Focus On The User


Last week I spent an afternoon down at Facebook, as I mentioned here. While at Facebook I met with Blake Ross, Director of Product (and well known in web circles as one of the creators of Firefox). Talk naturally turned to the implications of Google’s controversial integration of Google+ into its search results – a move that must both terrify (OMG, Google is gunning for us!) and delight (Holy cow, Google is breaking its core promise to its users!).

Turns out Ross had been quite busy the previous weekend, and he had a little surprise to show me. It was a simple hack, he said, some code he had thrown together in response to the whole Google+ tempest. But there was most certainly a gleam in his eye as he brought up a Chrome browser window (Google’s own product, he reminded me).

Blake had installed a bookmarklet onto his browser, one he had titled – in a nod to Google’s informal motto – “Don’t be evil.” For those of you who aren’t web geeks (I had to remind myself as well), a bookmarklet is “designed to add one-click functionality to a browser or web page. When clicked, a bookmarklet performs some function, one of a wide variety such as a search query or data extraction.”
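In practice, a bookmarklet is just a bookmark whose “address” is a javascript: URI, so clicking it runs a script against whatever page you’re viewing. A toy TypeScript-flavored sketch of the mechanism (mine, not Ross’s code – the selector is purely hypothetical):

    // Stored as a bookmark URL; clicking it runs against the current page.
    javascript:(function () {
      // Hide every element matching a hypothetical Google+ promo selector.
      document.querySelectorAll<HTMLElement>('.gplus-promo')
        .forEach(el => { el.style.display = 'none'; });
    })();

Ross’s actual hack, as described below, went well beyond hiding elements.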

When engaged, this “Don’t be evil” bookmarklet did indeed do one simple thing: It turned back the hands of time, and made Google work the way it did before the integration of Google+ earlier this month.

It was a very elegant hack, more thoughtful than the one or two I had seen before – those simply took all references to Google+ out of the index. This one went much further, weaving together a number of Google’s own tools – including its “rich snippet” webmaster tool and its own organic search listings – to re-order not only the search engine results, but also the promotional Google+ boxes on the right side of the results, as well as the “typeahead” results that now feature only Google+ accounts (see example below, the first a search on my name using “normal Google” and then one using the bookmarklet).

After Blake showed me his work, we had a lively discussion about the implications of Facebook actually releasing such a tool. I mean, it’s one thing for a lone hacktivist to do this, it’s quite another for a member of the Internet Big Five to publicly call Google out. Facebook would need to vet this with legal, with management (this clearly had to pass muster with Mark Zuckerberg), and, I was told, Facebook wanted to reach out to others – such as Twitter – and get their input as well.

Due to all this, I had to agree to keep Blake’s weekend hack private till Facebook figured out whether (and how) it would release Ross’s work.

Today, the hack goes public. It’s changed somewhat – it now resides at a site called “Focus On The User” and credit is given to engineers at Facebook, Twitter, and Myspace, but the basic implication is there: This is a tool meant to directly expose Google’s recent moves with Google+ as biased, hardcoded, and against Google’s core philosophy (which besides “don’t be evil,” has always been about “focusing on the user”).

Now, this wasn’t what I meant last week when I asked what a Facebook search engine might look like, but one can be very sure: this is certainly how Facebook and many others want Google to look once again.

From the site’s overview:

We wanted to see how much better social search could be for consumers if Google chose to use all of the information already in its index. We think the results speak for themselves. Specifically, we created a bookmarklet that uses Google’s own relevance measure—the ranking of their organic search results—to determine what social content should appear in the areas where Google+ results are currently hardcoded. That includes the box on the right; the typeahead; and the indent under the first result for brand searches like “Macy’s” or “New York Times”.

All of the information in this demo comes from Google itself, and all of the ranking decisions are made by Google’s own algorithms. No other services, APIs or proprietary data stores are accessed.
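The core trick can be sketched in a few lines – a simplified model with hypothetical names, not the actual Focus On The User source:

    // Sketch: let Google's own organic ranking pick the social profile for
    // the "people" box, rather than hardcoding Google+. All names here are
    // hypothetical; the real bookmarklet is far more involved.
    interface OrganicResult { url: string; rank: number; }

    const SOCIAL_HOSTS = ['facebook.com', 'twitter.com', 'plus.google.com', 'myspace.com'];

    function pickSocialBox(results: OrganicResult[]): OrganicResult | undefined {
      // The highest-ranked organic result pointing at any social network wins -
      // Google's own relevance signal, applied evenhandedly across networks.
      return results
        .filter(r => SOCIAL_HOSTS.some(h => new URL(r.url).hostname.endsWith(h)))
        .sort((a, b) => a.rank - b.rank)[0];
    }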

Facebook released a video explaining how the hack works, including some rather devastating examples (be sure to watch the AT&T example at minute seven, and a search for my name as well), and it has open-sourced the codebase. The video teasingly invites Google to use the code should it care to (er…not gonna happen).


It’d be interesting if millions of people adopted the tool, but I don’t think that’s the point. A story such as this is tailor-made for the Techmeme leaderboard, to be sure, and will no doubt be the talk of the Valley today. By tonight, the story most likely will go national, and that can’t help Google’s image. And I’m quite sure the folks at Facebook, Twitter, and others (think LinkedIn, Yahoo, etc.) are making sure word of this exemplar reaches the right folks at the Federal Trade Commission, the Department of Justice, Congress, and government agencies around the world.

Not to mention, people in the Valley do care, deeply, about where they work. There are scores of former Google execs now working at Twitter, Facebook, and others. Many are dismayed by Google’s recent moves, and believe that inside Google, plenty of folks aren’t sleeping well because of their beloved company’s single-minded focus on Google+. “Focus On The User” is a well-timed poke in the eye, a slap to the conscience of a company that has always claimed to be guided by higher principles, and an elegant hack, sure to become legend in the ongoing battle of the Big Five.

As I’ve said before, I’m planning on spending some time with folks at Google in the coming weeks. I’m eager to understand their point of view. Certainly they are playing a longer-term game here – and seem willing, at present, to take the criticism and not respond to the chorus of complaints. Should Google change that stance, I’ll let you know.

Related:

What Might A Facebook Search Engine Look Like?

Google+: Now Serving 90 Million. But…Where’s the Engagement Data!

Our Google+ Conundrum

It’s Not About Search Anymore, It’s About Deals

Hitler Is Pissed About Google+

Google Responds: No, That’s Not How Facebook Deal Went Down (Oh, And I Say: The Search Paradigm Is Broken)

Compete To Death, or Cooperate to Compete?

Twitter Statement on Google+ Integration with Google Search

Search, Plus Your World, As Long As It’s Our World