Random, But Interesting Archives | John Battelle's Search Blog

The Ecstasy of Telegraphy

By - February 14, 2012

My research manager turned up this gem in the course of answering a question I had about the popular response to the introduction of the telegraph in the US (a moment that informs the working title of my next book). What I find fascinating is how the invention incited an innate religious response (this editorial from a local Albany, NY newspaper is in no way unique). The logic goes something like this: Mankind has invented something that pushes the boundaries of our comprehension – we are now doing something that was once understood to be the province only of God. Therefore, we must remind ourselves that this invention, while seeming to contradict the supreme powers of God, in fact only reinforces His position in our world.

The logic may feel a bit tortured, but it’s consistent with a point I make every time I explain one of the core ideas of the book – that in the 200 years between the introduction of the telegraph (early 1840s) and when my children have kids of their own (roughly 30 years from now, or  early 2040s), mankind will have completed something of a pivot when it comes to our shared understanding of the relationship between technology and God. When Morse couldn’t decide what the first telegraph message should be, he settled on a Biblical quote quite consistent with the Albany Atlas and Argus’ editorial: What Hath God Wrought? The telegraph was such a massive shift in the possible, it was best to ascribe its power to God. Humans can’t handle this power.*

But in the intervening centuries, we’ve come to realize that God isn’t going to provide an operating manual for the power we’ve unlocked, and if we’re going to get our arms around it, it’s on us to do so. We can’t throw up our hands and hope for the best. We have to shoulder the responsibility of entering these new realms of power. That’s why I change Morse’s famous quote for my working title: What We Hath Wrought. Two centuries after that first electronic message pierced time and space, what will we have built?

That’s the question my book will explore, using the tools of anthropology and journalism, and a bit of luck along the way.

*Indeed, the story of Morse’s precursor Claude Chappe, the inventor of the “optical telegraph,” offers additional pathos to the narrative. Raised “in church service,” Chappe chose an entrepreneurial path, developing a series of signal towers across France in the late 1790s. His first test message declared a far more earthly intention: “If you succeed, you will bask in glory.” But Chappe died ingloriously: He threw himself down a well in despair at accusations his invention was stolen from the military. 


China Hacking: Here We Go

By - February 13, 2012

Waaaay back in January of this year, in my annual predictions, I offered a conjecture that seemed pretty orthogonal to my usual focus:

“China will be caught spying on US corporations, especially tech and commodity companies. Somewhat oddly, no one will (seem to) care.”

Well, I just got this WSJ news alert, which reports:

Using seven passwords stolen from top Nortel executives, including the chief executive, the hackers—who appeared to be working in China—penetrated Nortel’s computers at least as far back as 2000 and over the years downloaded technical papers, research-and-development reports, business plans, employee emails and other documents.

The hackers also hid spying software so deeply within some employees’ computers that it took investigators years to realize the pervasiveness of the problem.

Now, before I trumpet my prognosticative abilities too loudly, let’s see if … anybody cares. At all. And if you’re wondering why I even bothered to make such a prediction, well, it’s because I think it’s going to prove important….eventually.

Nearly 90% of the World Uses Mobile Phones

By -

In the normal course of research for the book, I wondered how quickly mobile phone use got to the 1 billion mark. I figured we’re well past that number now, but I had no idea how far past it we’ve blown. Like, six times past it. We hit 1 billion in the year 2000, and never looked back.

According to the ITU, nearly 90% of people in the world use mobile phones. Holy. Cow. By comparison, just 35% of us are using the Internet. That is going to change, and fast. Everyone needs a new phone after some period of time. And the next one they get is going to be connected. Just some Monday afternoon PowerPoint fodder for you all. Now back to work.
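Actually, one more bit of fodder before I go – here’s the rough arithmetic behind that “six times” remark. The world-population figure below is my own approximation, not an ITU number:

```python
# Rough sanity check on the mobile numbers above. The population figure is an
# approximation supplied for illustration, not an ITU statistic.
world_population = 7.0e9      # assumed: roughly 7 billion people in 2012
mobile_penetration = 0.90     # ITU: nearly 90% of the world uses mobile phones
internet_penetration = 0.35   # ITU: about 35% of the world uses the Internet

mobile_users = world_population * mobile_penetration
print(f"Mobile users: ~{mobile_users / 1e9:.1f} billion")             # ~6.3 billion
print(f"Multiple of the 1-billion mark: ~{mobile_users / 1e9:.0f}x")  # the 'six times'
print(f"Internet users: ~{world_population * internet_penetration / 1e9:.1f} billion")
```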

 

Is Our Republic Lost?

By -

Over the weekend I finished Larry Lessig’s most recent (and ambitious) book, Republic, Lost: How Money Corrupts Congress–and a Plan to Stop It. Amongst those of us who considered Lessig our foremost voice on issues of Internet policy, his abrupt pivot to focus on government corruption was both disorienting and disheartening: here was our best Internet thinker, now tilting at government windmills. I mean, fix government? Take the money out of politics? Better to treat all that as damage, and route around it, right? Isn’t that what the Internet is supposed to be all about?

Well, maybe. But after the wake-up call that was PIPA/SOPA, it’s become clear why Lessig decided to stop focusing on battles he felt he couldn’t win (reforming copyright law, for example), and instead aim his intellect at the root causes of why those battles were fruitless. As he writes in his preface:

I was driven to this shift when I became convinced that the questions I was addressing in the fields of copyright and Internet policy depended upon resolving the policy questions – the corruption – that I address (in Republic Lost).

Lessig, ever the lawyer at heart, presents his book as an argument, as well as a call to arms (more on that at the end). Early on he declares our country ruined, “poisoned” by an ineffective government, self-serving corporations, and an indifferent public. To be honest, it was hard to get through the first couple of chapters of Republic Lost without feeling like I was being lectured to on a subject I already acknowledged: Yes, we have a corrupt system, yes, lobbyists are in league with politicians to bend the law toward their clients’ bottom lines, and yes, we should really do something about it.

But Lessig does make a promise, and in the book he keeps it: To identify and detail the “root” of the problem, and offer a prescription (or four) to address it. And yes, that root is corruption, in particular the corruption of money, but Lessig takes pains to define a particular kind of corruption. Contrary to popular sentiment, Lessig argues, special interest money is not directly buying votes (after all, that is illegal). Instead, an intricate “gift economy” has developed in Washington, one that is carefully cultivated by all involved, and driven by the incessant need of politicians to raise money so as to ensure re-election.

Lessig calls this “dependency corruption” – politicians are dependent on major donors not only to be elected, but to live a lifestyle attendant on being a US Congressperson. Lessig also points out how more than half of our representatives end up as lobbyists after serving – at salaries two to ten times those of a typical Congressperson (he also points out that we grossly underpay our representatives, compared to how they’d be remunerated for their talents in the private sector).

Lessig likens this dependency corruption to alcoholism – it “develops over time; it sets a pattern of interaction that builds upon itself; it develops a resistance to breaking that pattern; it feeds a need that some find easier to resist than others; satisfying that need creates its own reward; that reward makes giving up the dependency difficult; for some, it makes it impossible.”

In short, Lessig says Washington DC is full of addicts, and if we’re to fix anything – health care, energy policy, education, social security, financial markets – we first have to address our politicians’ addiction to money, and our economic system’s enablement of that addiction. Because, as Lessig demonstrates in several chapters devoted to broken food and energy markets, broken schools, and broken financial systems, the problem isn’t that we can’t fix the problem. The problem, Lessig argues, is that we’re paying attention to the wrong problem.

Lessig’s argument essentially concludes that we’ve created a system of government that rewards policy failure – the bigger the issue, the stronger the lobbyists on one or even both sides, forcing Congress into a position of moral hazard: it can ensure the most donations by threatening regulation one way or the other, thereby collecting from both sides. Lessig salts his argument with example after example of how the system fails at real reform due to the “money dance” each congressperson must perform.

It’s pretty depressing stuff. And yet – there are no truly evil characters here. In fact, Lessig makes quite the point of this: we face a corruption of “decent souls,” of “good people working in a corrupted system.”

Despite Lessig’s avowed liberal views (combined with his conservative, Reagan-era past), I could imagine that  Republic Lost could as easily be embraced by Tea Party fanatics as by Occupy Wall Street organizers. He focuses chapters on how “so damn much money” defeats the ends of both the left and the right, for example. And at times the book reads like an indictment of the Obama administration – Lessig, like many of us, believed that Obama was truly going to change Washington, then watched aghast as the new administration executed the same political playbook as every other career politician.

In the final section of his book, Lessig offers several plans to force fundamental campaign finance reform – the kind of reform that the majority of us seem to want, but that never seems to actually happen. Lessig acknowledges how unlikely it is that Congress would vote itself out of a system to which it is addicted, and offers some political gymnastics that have almost no chance of working (running a candidate for President who vetoes everything until campaign finance reform is passed, then promises to quit, for example).

The plan that has gotten the most attention is the “Grant and Franklin Project” – a plan to finance all candidacies for Congressional office through public funds. He suggests that the first fifty dollars of any Federal tax revenue (per person per year) be retained to fund political campaigns, then allocated by each of us as a voucher of sorts. In addition, we’d all be able to commit another $100 of our own money to any candidate we choose. Uncommitted funds go to our parties (if we do not actively wish to use our voucher). Any candidate can tap these resources, but only if that candidate agrees to take only vouchers and $100 contributions (bye-bye, corporate and PAC money). Lessig calculates that the revenues of this plan would be well above the billions spent to elect politicians in our current system, and argues that the savings in terms of government pork would pay forward the investment many times over.
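To get a feel for why the arithmetic works out, here’s a rough back-of-the-envelope sketch. The $50 voucher and $100 voluntary cap are Lessig’s; the number of tax filers is my own loose approximation:

```python
# Loose, illustrative arithmetic for the Grant and Franklin Project.
# The filer count is an assumption on my part, not a figure from the book.
tax_filers = 140e6     # assumed: on the order of 140 million federal tax filers
voucher = 50           # the $50-per-person voucher Lessig proposes
max_voluntary = 100    # the optional extra each citizen could contribute

voucher_pool = tax_filers * voucher
print(f"Voucher pool alone: ~${voucher_pool / 1e9:.0f} billion per year")
# Roughly $7 billion before any voluntary $100 contributions -- the same order
# of magnitude as, or more than, what recent congressional cycles have cost.
```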

Lessig ends his book with a call to action – asking us to become “rootstrikers,” to get involved in bringing about the Grant and Franklin Project, or something like it (he goes into detail on a Constitutional convention as a means to the end, for example). And it’s here where I begin to lose the thread. On the one hand, I’m deeply frustrated by the problem Lessig outlines (I wrote about it here, in On The Problem of Money, Politics, and SOPA), but I’m also suspicious of any new “group” that I need to join – I find “activist” organizations tend to tilt toward unsustainable rhetoric. I’m not an activist by nature, but then again, perhaps it’s not activism Lessig is asking for. Perhaps it’s simply active citizenship.

I could see myself getting behind that. How about you?

####

Other works I’ve reviewed:

Where Good Ideas Come From: A Natural History of Innovation by Steven Johnson (my review)

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (my review)

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

Yahoo Visualizes Its Content CORE

By - February 09, 2012

Yahoo has always been proud of the algorithms that drive its choice of personalized content, but it’s hard to grok exactly what they do behind the scenes to make the magic happen. Today the company released a visualization of its “C.O.R.E.” (Content Optimization and Relevance Engine) technology, and the result is pretty cool. From a release sent to me by Yahoo:

 

  • C.O.R.E. (Content Optimization and Relevance Engine) is a suite of technologies developed by Yahoo! Labs to surface the stories most interesting to you, based on your reading behavior over time.
  • Every hour C.O.R.E. processes 1.2 terabytes of data in order to learn how a user’s behaviors and interests influence the likelihood of clicking on a specific article. And, every day, C.O.R.E. personalizes 2.2 billion pieces of content for Yahoo! users.
  • Since optimizing with C.O.R.E., Yahoo!’s Homepage click-through rate has increased 300%.
  • Yahoo!’s personalization approach is a clever mix of scientific algorithms and human judgment, as editors have control to override C.O.R.E. at any time, to ensure certain stories are seen.
  • Initially developed within Yahoo! Labs, C.O.R.E. has become a vital tool used throughout the day by editors across the company to bring our users personalized news, first.

The visualization lets you see stories through filters of gender, age, and interest. The image above, for example, shows a male in my age range interested in business and finance. Well worth playing around with, and a very good example of what I call “dependent web” content.
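To make that a bit more concrete, here’s a minimal sketch of the kind of click-likelihood scoring a system like C.O.R.E. performs. Everything in it – the features, the weights, the logistic scoring – is my own illustrative guess; Yahoo hasn’t published the actual implementation:

```python
import math

# Illustrative only: a toy relevance scorer in the spirit of C.O.R.E.
# The features, weights, and model form are assumptions, not Yahoo's code.
def click_probability(reader_interests, article_topics, weights):
    """Estimate how likely a reader is to click an article, based on how
    strongly its topics overlap with the reader's observed interests."""
    score = sum(weights.get(topic, 0.0) * reader_interests.get(topic, 0.0)
                for topic in article_topics)
    return 1.0 / (1.0 + math.exp(-score))  # squash to a 0..1 probability

# Hypothetical reader who clicks finance stories far more often than sports.
reader = {"finance": 2.0, "sports": 0.2}
weights = {"finance": 1.5, "sports": 1.0, "celebrity": 0.8}

articles = {
    "Markets rally after jobs report": ["finance"],
    "Playoff preview: who's hot": ["sports"],
}

ranked = sorted(articles,
                key=lambda title: click_probability(reader, articles[title], weights),
                reverse=True)
print(ranked)  # the finance story ranks first for this reader
```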

More information on Yahoo’s blog here.

Larry Page’s “Tidal Wave Moment”?

By - February 07, 2012

Who remembers the moment, back in 1995, when Bill Gates wrote his famous Internet Tidal Wave Memo? In it he rallied his entire organization to the cause of the Internet, calling the new platform an existential threat/opportunity for Microsoft’s entire business. In the memo Gates wrote:

“I assign the Internet the highest level of importance. In this memo I want to make clear that our focus on the Internet is crucial to every part of our business. The Internet is the most important single development to come along since the IBM PC was introduced in 1981.”

The memo runs more than 5300 words and includes highly detailed product plans across all of Microsoft. In retrospect, it probably wasn’t a genius move to be so transparent – the memo became public during the US Dept. of Justice action against Microsoft in the late 1990s.

It strikes me that Larry Page at Google could have written such a memo to all Googlers last year. Of course, Page and his advisors must have learned from Microsoft’s mistakes, and certainly don’t want a declarative memo floating around the vast clouds of Internet eternity. Bad things can happen from direct mandates such as those made by Gates – in the memo he mentions that Microsoft must “match and beat” Netscape, for example, words that came back to haunt him during the DOJ action.

Here’s what Page might have written to his staff in 2011, with just a few words shifted:

“I assign social networking the highest level of importance. In this memo I want to make clear that our focus on social networking is crucial to every part of our business. Social networking is the most important single development to come along since Google was introduced in 1998.”

I very much doubt Page wrote anywhere that Google must “match and beat” Facebook. And unlike Gates, he probably did not pen detailed memos about integrating Google+ into all of Google’s products (as Gates did – for pages – declaring that Microsoft must integrate the Internet into all of its core products.)

But it’s certainly not lost on any Googler how important “social” is to the company: all of their bonuses were tied to social last year.

So why am I bringing this up now? Well, I’ve got no news hook. I’m just doing research for the book, and came across the memo, and its tone and urgency struck a familiar note. The furor around Search Plus Your World has died down, but it left a bad taste in a lot of folks’ mouths. But put in the context of “existential threat,” it’s easier to understand why Google did what it did.

Unlike the Internet, which was a freely accessible resource that any company could incorporate into its products and services, to date “social” has been dominated by one company, a company that Google has been unable to work with. Imagine if, when Gates wrote his Tidal Wave memo, the “Internet” he spoke of was controlled entirely by, say, MCI, and that Microsoft was unable to secure a deal to get all that Internet goodness into its future products.

That seems to be where Google finds itself, at least by its own reckoning. To continue being a great search engine, it needs the identity and relationship data found, for the most part, behind Facebook’s walls.

I’ve written elsewhere about the breakdown of the open web, the move toward more “walled gardens of data,” and what that does to Google’s ability to execute its core business of search. And it’s not just social – readers have sent me tons of information predicting how mobile, in particular, will escape the traditional reaches of Google’s spidering business model. I hope to pore over that information and post more here, but for now, it’s worth reading a bit of history to put Google’s moves into broader context.

Do You Think The US Government Is Monitoring Social Media?

By - February 03, 2012

I had the news on in the background while performing morning ablutions. It was tuned to CBS This Morning – Charlie Rose has recently joined the lineup and my wife, a former news producer, favors both Rose and the Tiffany Network. But the piece that was running as I washed the sleep from my eyes was simply unbelievable.

It was about the two unfortunate British tourists detained by Homeland Security over jokes on Twitter about “destroying America” (a colloquialism for partying – think “tear up the town”) and “digging up Marilyn Monroe” whilst in Hollywood. DHS cuffed the poor kids and tossed them in a detention center with “inner city criminals,” according to reports, then sent them back home. Access denied. (I tweeted the story when it happened, then forgot about it.)

Silly stuff, but also serious – I mean, if DHS can’t tell a 140-character colloquialism from a real threat… (slap forehead now). CBS had managed to get an interview with the unfortunate couple, who were back in the UK and will most likely never be able to travel here again.

The interview wasn’t what woke me up this morning – it was what CBS’s “Terrorism Expert” had to say afterwards. Apparently Homeland Security claims it is NOT monitoring Twitter and other social media; instead, it got a “tip” about the tweets, and that’s why the couple was detained. The on-air “expert,” who used to run counter-terror for the LAPD and was an official at DHS as well, was asked point blank if the US Government was “monitoring social media.” He flatly denied it. (His comments, oddly, were cut out of the piece that’s now on the web and embedded above).

I do not believe him. Do you? And if they really are not – why not? Shouldn’t they be? I was curious about your thoughts, so here’s a poll:


And then, here’s the next one. Regardless of whether you think it actually IS monitoring….



In Which I Officially Declare RSS Is Truly Alive And Well.

By - February 02, 2012

I promise, for at least 18 months, to not bring this topic up again. But I do feel the need to report to all you RSS lovin’ freaks out there that the combined interactions on my two posts – 680 and still counting –  have exceeded the reach of my RSS feed (which clocked in at a miserable 664 the day I posted the first missive).

And as I said in my original post:

If I get more comments and tweets on this post than I have “reach” by Google Feedburner status, well, that’s enough for me to pronounce RSS Alive and Well (by my own metric of nodding along, of course). If it’s less than 664, I’m sorry, RSS is Well And Truly Dead. And it’s all your fault.

For those of you who don’t know what on earth I’m talking about, but care enough to click, here are the two posts:

Once Again, RSS Is Dead. But ONLY YOU Can Save It!

RSS Update: Not Dead, But On The Watch List

OK, now move along. Nothing to see here. No web standards have died. Happy Happy! Joy Joy!

Where Good Ideas Come From: A Tangled Bank

By - January 31, 2012

After pushing my way through a number of difficult but important reads, it was a pleasure to rip through Steven Johnson’s Where Good Ideas Come From: A Natural History of Innovation. I consider Steven a friend and colleague, and that will color my review of his most recent work (it came out in paperback last Fall). In short, I really liked the book. There, now Steven will continue to accept my invitations to lunch…

Steven is the author of seven books, and I admire his approach to writing. He mixes story with essay, and has an elegant, spare style that I hope to emulate in my next book. If What We Hath Wrought is compared to his work, I’ll consider that a win.

Where Good Ideas Come From is an interesting, fast-paced read that outlines the kinds of environments that spawn world-changing ideas. In a sense, this book is the summary of “lessons learned” from several of Johnson’s previous books, which go deep into one really big idea – The Invention of Air, for example, or the tracing of a cholera epidemic’s source. It’s also a testament to another of Johnson’s obsessions – the modern city, which he points out is a far more likely seedbed of great ideas than the isolated suburb or cabin-on-a-lake-somewhere.

Johnson draws a parallel between great cities and the open web – both allow for many ideas to bump up against each other, breed, and create new forms. 

Some environments squelch new ideas; some environments seem to breed them effortlessly. The city and the Web have been such engines of innovation because, for complicated historical reasons, they are both environments that are powerfully suited for the creation, diffusion, and adoption of good ideas.

While more than a year old, Where Good Ideas Come From is an important and timely book, because the conclusions Johnson draws are instructive for the digital world we are building right now – will it be one that fosters what Zittrain calls generativity, or are we feeding ecosystems that are closed in nature? Johnson writes:

…openness and connectivity may, in the end, be more valuable to innovation than purely competitive mechanisms. Those patterns of innovation deserve recognition—in part because it’s intrinsically important to understand why good ideas emerge historically, and in part because by embracing these patterns we can build environments that do a better job of nurturing good ideas…

…If there is a single maxim that runs through this book’s arguments, it is that we are often better served by connecting ideas than we are by protecting them. …When one looks at innovation in nature and in culture, environments that build walls around good ideas tend to be less innovative in the long run than more open-ended environments. Good ideas may not want to be free, but they do want to connect, fuse, recombine. They want to reinvent themselves by crossing conceptual borders. They want to complete each other as much as they want to compete.

I couldn’t help but think of the data and commercial restrictions imposed by Facebook and Apple as I read those words. As I’ve written over and over on this site, I’m dismayed by the world we’re building inside Apple’s “appworld,” on the one hand, and the trend toward planting our personal and corporate taproots too deeply in the soils of Facebook, on the other. Johnson surveys centuries of important, world-changing ideas, often relating compelling personal narratives on the way to explaining how those ideas came to be not through closed, corporate R&D labs, but through unexpected collisions between passions, hobbies, coffee house conversations, and seeming coincidence. If you’re ever stuck, Johnson advises, go outside and bump into things for a while. I couldn’t agree more.

One concept Johnson elucidates is the “adjacent possible,” a theory attributed to biologist Stuart Kauffman. In short, the adjacent possible is the space inhabited by “what could be” based on what currently is. In biology and chemistry, for example, it’s the potential for various combinations of molecules to build self-replicating proteins. When that occurs, new adjacent possibilities open up, to the point of an explosion in life and order.

Johnson applies this theory to ideas, deftly demonstrating how Darwin’s fascination with the creation of coral reefs led – over years – to what is perhaps the most powerful idea of modernity – evolution. He concludes that while most of us understand Darwin’s great insight as mostly about “survival of the fittest,” perhaps its greatest insight is how it has “revealed the collaborative and connective forces at work in the natural world.” Darwin’s famous metaphor for this insight is the tangled bank:

It is interesting to contemplate a tangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent upon each other in so complex a manner, have all been produced by laws acting around us. . .

Johnson also extols the concept of “liquid networks,” where information flows freely between many minds, and of “slow hunches,” where ideas develop over long periods of time, as well as the importance of noise, serendipity, and error in the development of good ideas. He explores “exaptation” – the repurposing of one idea for another use – and the concept of “platforms” that allow each of these concepts – from liquid networks to serendipity and exaptation – to blossom (Twitter is cited as such a platform).

Johnson concludes:

Ideas rise in crowds, as Poincaré said. They rise in liquid networks where connection is valued more than protection. So if we want to build environments that generate good ideas—whether those environments are in schools or corporations or governments or our own personal lives—we need to keep that history in mind, and not fall back on the easy assumptions that competitive markets are the only reliable source of good ideas. Yes, the market has been a great engine of innovation. But so has the reef.

Amen, I say. I look forward to our great tech companies – Apple and Facebook amongst them – becoming more tangled bank than carefully pruned garden.

A nice endcap to the book is a survey Johnson took of great ideas across history. He places each idea on an XY grid according to whether it was generated by an individual or a network of individuals (the X axis) and whether it emerged in a commercial or non-commercial environment (the Y axis). The results are pretty clear: ideas thrive in “non-market/networked” environments.

Johnson's chart of major ideas emerging during the 19th and 20th centuries

This doesn’t mean those ideas don’t become the basis for commerce – quite the opposite in fact. But this is a book about how good ideas are created, not how they might be exploited. And we’d be well advised to pay attention to that as we consider how we organize our corporations, our governments, and ourselves – we have some stubborn problems to solve, and we’ll need a lot of good ideas if we’re going to solve them.

Highly recommended.

Next up on the reading list: Inside Apple: How America’s Most Admired–and Secretive–Company Really Works by Adam Lashinsky, and Republic, Lost: How Money Corrupts Congress–and a Plan to Stop It, by Larry Lessig.

####

Other works I’ve reviewed:

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (my review)

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

 

What Happens When Sharing Is Turned Off? People Don’t Dance.

By - January 30, 2012

One of only two photos to emerge from last night's Wilco concert (image: Eric Henegen)

Last night my wife and I did something quite rare – we went to a concert on a Sunday night, in San Francisco, with three other couples (Wilco, playing at The Warfield). If you don’t have kids and don’t live in the suburbs, you probably think we’re pretty lame, and I suppose compared to city dwellers, we most certainly are. But there you have it.

So why am I telling you about it? Because something odd happened at the show: Wilco enforced a “no smartphone” rule. Apparently lead singer Jeff Tweedy hates looking out at the audience and seeing folks waving lit phones back at him. Members of the Warfield staff told me they didn’t like the policy, but they enforced it  – quite strictly, I might add. It created a weird vibe – folks didn’t even take out their phones for fear they might be kicked out for taking a picture of the concert. (A couple of intrepid souls did sneak a pic in, as you can see at left…)

And… no one danced, not till the very end, anyway. I’ve seen Wilco a few times, and I’ve never seen a more, well, motionless crowd. But more on that later.

Now, I have something of a history when it comes to smart phones and concerts. Back in 2008 I was a founding partner in a new kind of social music experiment we called “CrowdFire.” In my post explaining the idea, I wrote:

Over the course of several brainstorming sessions… an idea began to take shape based on a single insight: personal media is changing how we all experience music. (When I was at Bonnaroo in 2007), everyone there had a cell phone with a camera. Or a Flip. Or a digital camera. And when an amazing moment occurred, more folks held up their digital devices than they did lighters. At Bonnaroo, I took a picture that nails it for me – the image at left. A woman capturing an incredible personal memory of an incredible shared experience (in this case, it was Metallica literally blowing people’s minds), the three screens reflecting the integration of physical, personal, and shared experiences. That image informed our logo, as you can see (below).

So – where did all those experiences go (Searchblog readers, of course, know I’ve been thinking about this for a while)? What could be done with them if they were all put together in one place, at one time, turned into a great big feed by a smart platform that everyone could access? In short, what might happen if someone built a platform to let the crowd – the audience – upload their experiences of the music to a great big database, then mix, mash, and meld them into something utterly new?

Thanks to partners like Microsoft, Intel, SuperFly, Federated Media and scores of individuals, CrowdFire actually happened at Outside Lands, both in 2008 and in 2009. It was a massive effort – the first year literally broke AT&T’s network. But it was clear we were onto something. People want to capture and share the experience of being at a live concert, and the smart phone was clearly how they were now doing it.

It was the start of something – brainstorming with several of my friends prior to CrowdFire’s birth, we imagined a world where every shareable experience became data that could be recombined to create fungible alternate realities. Heady stuff, stuff that is still impossible, but that I feel will eventually become our reality as we careen toward a future of big data and big platforms.

Since those early days, the idea of CrowdFire has certainly caught on. In early 2008, we had to build the whole platform from scratch, but now, folks use services like Instagram, Twitter, Facebook, and Foursquare to share their experiences. Many artists share back, sending out photos and tweets from on stage. Most major festivals and promoters have some kind of fan photo/input service that they promote as well. CrowdFire was a great idea, and maybe, had I not been overwhelmed with running FM, we might have turned it into a real company/service that could have integrated all this output and created something big in the world. But it was a bit ahead of its time.

What has happened since that first Outside Lands is that at every concert I’ve attended, I’ve noticed the crowd’s increasing connection to their smart phones – taking pictures, group texting, tweeting, and sharing the moments with their extended networks across any number of social services. It’s hard to find an experience more social than a big concert, and the thousands of constantly lit smartphone screens are a testament to that fact, as are the constant streams of photos and status updates coming out of nearly every show I’ve seen, or followed enviously online.

Which brings me back to last night. I was unaware of the policy, so as Wilco opened at the sold-out Warfield, something felt off to me. Here were two thousand San Francisco hipsters, all turned attentively toward the stage – but most of them had their hands in their pockets! As the band went into the impossible-not-to-move-to “Art of Almost” and “I Might,” I started wondering what was up – why weren’t people at least swaying?! The music was extraordinary, the sound system perfectly tuned. But everyone seemed very intent on…well…being intent. They stared forward, hands in pocket, nodded their heads a bit, but no one danced. It was a rather odd vibe. It was as if the crowd had been admonished to not be too … expressive.

Then it hit me. Nobody had their phone out. I turned to a security guard and asked why no one was holding up a phone. That’s when I learned of Wilco’s policy.

It seemed to me that the rule had the unintended consequence of muting the crowd’s ability to connect to the joy of the moment. Odd, that. We’re so connected to these devices and their ability to reflect our own sense of self that when we’re deprived of them, we feel somehow less…human.

My first reaction was “Well, this sucks,” but on second thought, I got why Tweedy wanted his audience to focus on the experience in the room, instead of watching and sharing it through the screens of their smartphones. By the encore, many people were dancing – they had loosened up. But in the end, I’m not sure I agree with Wilco – they’re fighting the wrong battle (and losing extremely valuable word of mouth in the process, but that’s another post).

There are essentially two main reasons to hold a phone up at a show. First, to capture a memory for yourself, a reminder of the moment you’re enjoying. And second, to share that moment with someone – to express your emotions socially. Both seem perfectly legitimate to me. (I’m not down with doing email or taking a call during a show, I’ll admit).

But the smart phone isn’t a perfect device, as we all know. It forces the world into a tiny screen. It runs out of battery, bandwidth, and power. It distracts us from the world around us. There are too many steps – too much friction – between capturing the things we are experiencing right now and the sharing of those things with people we care about.

But I sense that the sea of smart phones lit up at concerts is a temporary phenomenon. The integration of technology, sharing, and social into our physical world, on the other hand, well that ain’t going away. In the future, it’s going to be much harder to enforce policies like Wilco’s, because the phone will be integrated into our clothing, our jewelry, our eyeglasses, and possibly even ourselves. When that happens – when I can take a picture through my glasses, preview it, then send it to Instagram using gestures from my fingers, or eyeblinks, or a wrinkle of my nose – when technology becomes truly magical – asking people to turn it off is going to be the equivalent of asking them not to dance – to not express their joy at being in the moment.

And why would anyone want to do that?