The Web As Platform Archives | John Battelle's Search Blog

Musings On “Streams” and the Future of Magazines

By - August 17, 2012

I’ve run into a number of folks these past few days who read my piece last week: The State of Digital Media: Passion, Goat Rodeos, and Unicorn Exits…. Some of you have asked me to say a bit more about the economic issues facing media startups. I didn’t go too deep into them in that piece, but as I was answering one fellow in email, I realized I hadn’t explained how complicated they really are, particularly if you want to make new forms of publications. I’ll get into that in the second part of this post, but first, I wanted to address a few articles that have touched on a portion of the issue, in particular The Pretty New Web and the Future of “Native” Advertising (by Choire Sicha) and What happens to advertising in a world of streams? (by Mathew Ingram).

Bridging the Stream

Both these posts tackle the emerging world of “stream”-driven content, painting it as the opposite of the format we’ve pretty much used for the past 20 years – “page”-based content (like this page, for example). An established, at-scale business model exists for page-driven content, and it’s called display advertising. And anyone who’s been reading this site knows that display advertising is under pressure from two sides: first, the rise of massive platforms that harvest web pages and monetize them in ways that don’t pay the creators (Facebook, Twitter, Pinterest), and second, the dramatic growth of programmatic buying platforms that do pay creators, but at amounts too low to support great content (second-generation ad networks called DSPs, backed by agencies and their marketing clients).

Sicha and Ingram note that “stream”-based models – the latest to get attention is Medium, from Twitter co-founders Biz Stone and Ev Williams – eschew display advertising. Platforms like Twitter and Facebook are focused on stream-based advertising (Facebook got to its initial billions in display, but is pivoting to Sponsored Stories, Twitter has always been about its Promoted Products). Everyone expects similar in-stream products from Pinterest and Tumblr. Stream-based advertising products are “native” in nature – which is to say, advertising that acts much like the content it supports.

But as I’ve advised in the past, those platforms simply don’t work as home bases for people who want to make a living from creating great publications. Nor, to my mind, are they particularly good media experiences – the way a great site or a great print publication can be. For now, the good old-fashioned page-driven web is where folks like The Awl and GigaOm execute their product and collect their money. Display is their model, and that model is under pressure. What to do? This is a question that matters, a lot, because there are literally millions of sites that currently run on the display model, like it or not.

Well, I don’t think it’s as hard as it might seem. While folks are pretty freaked out about the decline of display, I’m a bit more patient. We’re in the middle of a shift, and it’s not as radical as some might think. We need “native advertising” for the independent web – and it turns out, we’ve already got it in the form of new, integrated content units that fit into the flow of the page-driven web (see image of FMP’s “native conversationalist suite,” above), and, of course, content and conversational marketing, which we’ve been doing since 2006. The issue we now have to tackle is scale (the ability to buy native ads across the web efficiently and in large numbers) and data (the ability to buy these ads with excellent targeting, performance metrics, and application of first- and third-party data). That’s going to take time, but it’s already underway. The technology and efficiencies of programmatic buying will, over time, marry with “native” ads, driving higher value for great content.  (More on this in another post).

And by the way, “traditional” display isn’t going to go away. It’s just going to get far more efficient and valuable as the data gets better and better, programmatic begins to climb up the value curve, and the units evolve to better complement new approaches to content presentation across all instances of the web (including apps and big platforms).

So far, I’ve been talking only about publishing on the “traditional web,” for lack of a better phrase. Nearly all web publications are driven by the display model, which is in turn driven by page views. But we all know the web is shifting, thanks to mobile devices and the walled gardens they erect. The new landscape of the web is far more complicated, and new products must emerge. To wit….

It’s A Tough Time To Launch A Magazine, Which Is Why It’s A Great Time…

Quick: Name me a digital-only publication that’s blown you away, the way the paper-based Wired did in 1992 (well, at least for some of you), or maybe Boing Boing did when you first found it online. I don’t mean a cool new website (there’s been a ton of those), but a magazine in the sense of a branded package of curated, unique content, one that really speaks to you, one that is an event each time it comes out.*

As much as I love scores of wonderful sites across the web, most of them are driven by the daily grind of the display/pageview hamster wheel. They create 20, 30, 40 “content snacks” a day, and I miss far more than I consume. My media habits when it comes to these sites are rather like a hummingbird’s. I can’t think of a single “publication” in the digital space that resonates the way magazines used to for me – where I stop time for a while, and really soak in the essence of the publication’s experience. (For purely selfish reasons, if you can think of one, please note it in the comments!)

I think there’s a reason there’s a paucity of digital magazines, and it has a lot to do with the current, fractured state of digital publishing. In short, if you want to create such a product, the current ecosystem makes it nearly impossible to do so. I think this is changing, but so far, not fast enough.

Just for kicks, let’s say you want to start the equivalent of a “new publication” in the Internet space. Let’s further state that you want it to be relatively cutting edge – i.e., you want it to be available everywhere your customer might be, and take advantage of the digital environment where it lives. That means editions in the Apple iTunes store, Amazon’s Kindle/Android newsstand, and Google’s Play (Android) store, for all those Android smartphones and tablets storming the market. And of course, you want to exist on the web (with spiffy HTML5, natch). Oh, and it’d be nice if you could also have a great version of it exist on Facebook, no?

Naturally, you want to be able to give the consumer of your publication a consistent, platform-agnostic experience across all those environments. If your reader starts engaging with your publication on, say, an iPad, but moves to her work PC later in the day, your publication should be aware of what she’s been doing and the environment she’s now in, and then serve up the right content, ads, and such based on that data. Kind of the way Netflix works (hey, they solved it for movies!), or Amazon’s Kindle readers (books!) across various platforms.**

Ready to get to work making this happen?!

OK. Well, let’s use the “old” model of magazine publishing as a starting point to model your costs.

In that model, you spent about 35% of your operating costs on “audience development” – paying for circulation and newsstand costs, as well as the costs of selling your product to advertisers (assuming you are going to both charge a sub fee and include ads, which most “traditional” publications do).

Another 20-40% (depending on how much you care about your product) is spent on actually creating your content. You know, paying editors, designers, writers, videographers, etc.

Add in about 25% to make and ship the physical product, and the remainder is “G&A” – paying for management staff.

As you can see, the variable cost here is in “creating your content,” and if you’re a passionate creator of media, you want to spend every dollar you can creating a great product, naturally.
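If you like seeing the arithmetic laid out, here’s a quick sketch of that traditional cost structure in a few lines of Python. The 30% figure for content creation is just a midpoint of the 20-40% range above – an illustrative assumption, not a reported number:

```python
# Illustrative breakdown of a traditional magazine's operating costs,
# using the rough percentages described above.
costs = {
    "audience_development": 0.35,    # circulation, newsstand, ad-sales costs
    "content_creation": 0.30,        # editors, designers, writers (20-40% range)
    "print_and_distribution": 0.25,  # making and shipping the physical product
}
# Whatever is left over is G&A -- management staff.
costs["general_and_admin"] = 1.0 - sum(costs.values())

for bucket, share in costs.items():
    print(f"{bucket:>22}: {share:.0%}")
```

Run it and G&A comes out to roughly 10% – which squares with how thin the management layer tends to be at lean publications.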

The problem is, if you’re making digital media these days, the costs of packaging that content have skyrocketed. Imagine making a magazine that has to be natively integrated into half a dozen or more different newsstand formats. Instead of one consistent, beautiful page layout, you have to make six of them – one for each device and distributor, each native to the environment. You don’t have to pay six times over for the content, but you do have to pay a lot more for your design, production, tech, and distribution resources.  You want your content to shine everywhere it might be consumed, right?! Blam, your fixed costs of making the product just went uneconomic!

The same problem applies to marketers – they have to make not one ad, but up to six, if they want their ad to travel everywhere the publisher’s content goes, and work in ways that take advantage of each platform. Trust me, they don’t want to deal with that. Not to mention, not many advertisers want to buy ads inside “digital magazines” these days. It’s an unproven medium, so far. (However, as I argued in my 2006-7 series on Conversational Media, the best magazine ads are in fact truly native.)

Which brings us to the other side of the ledger: revenues. As I said before, traditional publications have two sources of revenues, in the main: subscriptions (paid circulation) and advertising.

As we all know, the industry has historically punted on getting anyone to pay for content on the Internet, but that’s changing – people pay for Netflix, the Wall St. Journal, Spotify, various apps, etc. I think folks will pay for quality content if it’s truly valuable, so let’s pretend for the purposes of this example that your new publication plans to be in the “valuable” category.

If you want to sell your publication on the Big Guys’ platforms, you have to play by their rules, which means you turn over 30% of your circulation revenues. That’s a hefty chunk of revenue to lose before you even begin to pay for other costs! You can keep all the revenues from folks who buy your publication on the web,  but if they want to enjoy it on their iPad or Kindle via a native application, well, you have to deal with Apple and Amazon. Google’s Play store takes a smaller cut, but it takes a cut nonetheless.

Just for argument’s sake, let’s say that you cancel out that 30% tax with what you used to call “audience development costs” for traditional publications. You’re pretty much even, right? Nope. You now have cross-platform inconsistencies to work out, and those are going to cost you money to manage. What if your customer has more than one device, or wants to engage with your app on Facebook? Sorry, you’re kind of screwed. There’s no easy way to rationalize your customer experience across all those platforms and devices in a way that makes business sense. So many gatekeepers, so many business rules, so many tech platforms….
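To put rough numbers on that “pretty much even” claim, here’s a back-of-the-envelope sketch. The $20 subscription price, and treating audience development as a flat 35% of each subscription dollar, are both illustrative assumptions, not real figures:

```python
def net_per_subscriber(price, platform_cut=0.0, audience_dev_cost=0.0):
    """Revenue kept per subscriber after a store cut and/or traditional
    audience-development spend (both expressed as fractions of price)."""
    return price * (1.0 - platform_cut - audience_dev_cost)

annual_sub = 20.00  # hypothetical subscription price

# Old model: ~35% of operating costs went to audience development;
# treat it here, loosely, as a share of each subscription dollar.
old_model = net_per_subscriber(annual_sub, audience_dev_cost=0.35)

# New model: the platform keeps 30% off the top.
new_model = net_per_subscriber(annual_sub, platform_cut=0.30)

# Roughly a wash -- before you pay for any of the cross-platform plumbing.
print(round(old_model, 2), round(new_model, 2))
```

The catch, of course, is that the comparison only holds if the cross-platform headaches cost you nothing, which they very much do not.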

You’ll have to account for the costs of managing a data platform that keeps track of all your customers (including your advertisers) and ensures they have a consistent experience across platforms. And you’ll have to build that yourself, sorry. The only folks who’ve figured that out are Very Big (Amazon, Netflix, etc.), and they’re not sharing how they do it. It’s doable, but it’s gonna be expensive if you want to roll it yourself. I’ve heard that some new publishing startups are trying to do just that, and I wish them godspeed.

To cope with this particular mess, some publishers, like The Daily, have decided to go with just one platform (Apple), and cross their fingers and hope it works out. Others, like the Times, have pushed themselves across several, and expended heroic resources trying to tie it all together (without totally succeeding, in the main). But remember, we’re talking about a new publishing startup here, not the New York Times or News Corp. If those guys are struggling to make it work, well, what are the chances a startup media company is going to succeed?

Then there are the revenues associated with selling advertising. As I pointed out earlier, the traditional web display model is under transitional pressure, and anyway, you want your content to work everywhere, not just the web. If you want your advertisers to be everywhere your content is, you’ll have to figure out a way to get their ads natively into all those half dozen or so platforms (oh, and you’ll need to report performance metrics too, sorry). So far, that’s also an unsolved problem (and one your advertisers won’t pay you to solve). That’s going to limit your ability to sell ads, and increase your costs of serving them.

OK, I’m going to stop, because if you’re an aspiring publisher, I may have given you a fit of the blues.

But cheer up. Because I really do believe these issues will be solved. So far, we’ve written off magazines as dying, because we can’t figure out how to replicate their core value proposition in the digital world. But I’ve got a strong sense this is changing. Crazy publishing entrepreneurs, and even the big players in media, will sooner rather than later drive solutions that resolve our current dilemma. We’ll develop ads that travel with content, content management systems that allow us to automatically and natively drive our creations into the big platforms, and sensible business rules with the Big Guys that allow independent, groundbreaking publications to flourish again.

It’s going to take time, patience, innovation, and pressure, but we’ll get there. In fact, getting there is going to be a great journey, one we’re already well into. So tell me – what’s your favorite digital “publication” and why? Do you read “traditional magazines” on your tablet or online? And what companies do you think are innovating in this area?

—-

*Some have argued that the era of a branded publication created by a dedicated team of content creators is over. I utterly disagree, but that’s another post. 

**The fact that these two “old media” formats have mostly solved their digital distribution issues, whilst magazines have not, is vexing. A movie is a movie, a book is a book – you can read it or watch it online in nearly the same way that you can offline.  Movies, TV shows, and books can easily flow, with very little new formatting, into any digital space. But a magazine clearly is a different beast. We don’t want to just flip through a PDF of our favorite magazine (or do we? Do you?). Something seems off, doesn’t it? We want more…clearly, the magazine holds some magic that so far, we’ve not unlocked. What a wonderful problem to think about….


The State of Digital Media: Passion, Goat Rodeos, and Unicorn Exits….

By - August 09, 2012

Earlier in the week I was interviewed by a sharp producer from an Internet-based media company. That company, a relatively well-known startup in industry circles, will be launching a new site soon, and is making a documentary about the process. Our conversation put a fine point on scores of similar meetings and calls I’ve had with major media company execs, content startup CEOs, and product and business leaders at well-known online content destinations.

When I call a producer “sharp,” I mean that he asked interesting questions that crystallized some thoughts that have been bouncing around my head recently. The main focus of our discussion was the challenges of launching new media products in the current environment, and afterwards, it struck me I might write a few words on the subject, as it has been much on my mind – and given my history as both an entrepreneur and author in this space, I very much doubt it will ever stop being on my mind. So here are a couple of highlights:

* We have a false economy of valuation driving many startups in the content business. Once a year or so, an Internet media site is sold for an extraordinary amount of money, relative to traditional metrics of valuation. Examples include The Huffington Post, which sold for a reported 10X annual revenues, and, just this past week, Bleacher Report, which sold for even more than that ($200 million or so on revenues that, from what I understand, were less than $20mm a year).

Such lofty multiples (typical media businesses  – yes, even Internet media businesses – trade at 1.2 to 3X revenues) can make Internet media entrepreneurs starry-eyed. They may have unrealistic expectations of their company’s value, leading to poor decision making about both product and business issues. The truth is, truly passionate media creators don’t get into the media business to make huge gains from spectacular unicorn exits. When it happens, we certainly all cheer (and perhaps secretly hope it happens to us). But the fact is, we make media because we don’t know what else to do with ourselves. It’s how we’re wired, so to speak. (There is another type of media entrepreneur who is far more mercenary in nature, but I’m not speaking of those types now).

Let me explain why HuffPo and Bleacher Report were sold for so much money: They happened to be in the right media segment, at the right time, while growing at the right rate, just as a large media-driven entity was struggling with a strategic problem that threatened a core part of its business. And that particular site happened to solve for that particular problem at that particular time. These major “strategic buys” occur quite rarely (though smaller, less pivotal strategic buys happen all the time – just for far lower multiples).

Media companies don’t like to pay more than their spreadsheets normally dictate. But if word comes from Time Warner’s CEO or its board that “ESPN is kicking our ass in sports” and “do something about it, pronto,” well, that’s when lightning might just strike. As for the Huffington Post, let’s be clear: AOL was a huge media business that faced a massive problem with audience retention, thanks to its declining dial-up business. The HuffPo brought a large and growing audience, not to mention some serious social media and content-platform chops. Right place, right time, right product, right team.

Now, both these businesses were leaders in their fields; they redefined news and sports coverage. And that’s why the acquirer with the major strategic problem bought them – they were the best at what they did. They met a large media company that had lost its way, and magic ensued. I applaud them both.

But if you’re building a content business in the hopes the same lightning is going to strike you, well, I too salute you. But that’s not really a plan for building a lasting media brand. At some point, you’re going to have to come to grips with the reality that making media is what you do for a living. Making media companies that you hope to sell is not a lot of fun for anyone who cares deeply about making media.

*The current distribution and production landscape for media companies is an utter goat rodeo. Speaking of no fun, man, let’s talk about what we in the media business call “distribution” – i.e., how we get our product to you, the consumer of our work. To illustrate what a total mess digital distribution has become, allow me to create a simple chart, based on the medium in which you might choose to create your media product. Note that I whipped this up in the past half hour, so it won’t be complete. But I think it makes my point:

And my point is this: If you are starting a digital media business today, you face a fractured, shifting, messy, business-rule-landmine-laden horrorshow. Your fantasy is that you can make one perfect version of your media product, and deliver it across all those tablets, Kindles, smartphones, PCs, Macs, and so on. The truth is, harmonizing your product (and, even more importantly, your consumer’s experience and your monetization) across all those platforms is currently impossible. Compare that to starting a website in 2002, or launching a magazine in 1992 (that’s when we launched Wired). The business rules were established, and you could focus, in the main, on one thing: producing great content. (Speaking of Wired, it had a great exit, as historians may recall. But again, it was an exit driven by the strategic need of a bigger company: the media business went for a typical multiple, despite how “hot” the brand was. What got the unicorn valuation was Wired’s HotWired business, specifically, its search share….)

We’re in a messy transition phase right now, where the focus can’t only be on content, it also has to be on the *how* of distribution, production, and business terms. And that’s retarding growth and innovation in media businesses.

But I have hope. There’s a massive business opportunity inside this mess, one that I’m investigating, and I know others are as well. More on that as it develops. Meanwhile, a maxim: Most media businesses fail, always have, and always will. And most folks who make media already know this, which means they are close to batshit crazy anyway. But over and over and over, we keep making content, regardless of how ridiculous the landscape might be. And that, I am sure, will never change.

Here We Go Again: The Gray Market in Twitter and Facebook

By - August 07, 2012

So, casually reading through this Fast Company story about sexy female Twitter bots, I come across this astounding, unsubstantiated claim:

My goal was to draw a straight line from a Twitter bot to the real, live person whose face the bot had stolen. In the daily bot wars–the one Twitter fights every day, causing constant fluctuations in follower counts even as brands’ followers remain up to 48% bot–these women are the most visible and yet least acknowledged victims…

There it was, tossed in casually, almost as if it was a simple cost of doing business – nearly half of the followers of major brands could well be “bots.”

The article focuses on finding a pretty woman whose image had been hijacked, sure, but what I found most interesting (but sadly unsurprising) was how it pointed to a site that promises a thousand followers to anyone who pays…wait for it…about $17. Yes, the site is real. And no, you shouldn’t be surprised, in the least, that such services exist.

It has always been so.

Back when I was reporting for The Search, I explored the gray market that had sprung up around Google (and still flourishes, despite Google’s disputed attempts to beat it back). Fact is, wherever there is money to be made, and ignorance or desperation exists in some measure, shysters will flourish. And a further fact is this: Marketers, faced with CMO-level directives to “increase my follower/friend counts,” will turn to the gray market. Just as they did back in the early 2000s, when the directive was “make me rank higher in search.”

Earlier this week I got an email from a fellow who has been using Facebook to market his products. He was utterly convinced that nearly all the clicks he’s received on his ad were fake – bots, he thought, that were programmed to make his campaigns look as if they were performing well. He was further convinced that Facebook was running a scam – running bot networks to drive performance metrics. I reminded him that Facebook was a public company run by people I believed were well intentioned, intelligent people who knew that such behavior, if discovered, would ruin both their reputation as well as that of the company.

Instead, I suggested, he might look to third parties he might be working with – or, hell, he might just be the victim of a drive-by shooting – poorly coded bots that just click on ad campaigns, regardless of whose they might be.

In short, I very much doubt Facebook (or Twitter) are actively driving fraudulent behavior on their networks. In fact, they have legions of folks devoted to foiling such efforts. Yet there is absolutely no doubt that an entire, vibrant ecosystem is very much engaged in gaming these services. And just like Google had at the dawn of search marketing, Twitter and Facebook have a very – er – complicated relationship with these fraudsters. On the one hand, the gray hats are undermining the true value of these social networks. But on the other, well, they seem to help important customers hit their Key Performance Indicators, driving very real money into company coffers, either directly or indirectly.

I distinctly recall a conversation with a top Google official in 2005, who – off the record – defended AdSense-splattered domain-squatters as “providing a service to folks who typed the wrong thing into the address bar.” Uh huh.

As long as marketers are obsessed with hollow metrics like follower counts, Likes, and unengaged “plays,” this ecosystem will thrive.

What truly matters, of course, is engagement that can be measured beyond the actions of bots. It is coming. But not before millions of dollars are siphoned off by the opportunists who have always lived on the Internet’s gray edge.

First, Software Eats the World, Then, The Mirror World Emerges

By - July 18, 2012

David Gelernter of Yale (image: Edge.org)

A month or so ago I had the pleasure of sitting down with Valley legend Marc Andreessen, in the main for the purpose of an interview for my slowly-developing-but-still-moving-forward book. At that point, I had not begun re-reading David Gelernter’s 1991 classic Mirror Worlds: or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean.

Man, I wish I had, because I could have asked Marc if it was his life-goal to turn David’s predictions into reality. Marc is well known for many things, but his recent mantra that “Software Is Eating the World” (Wall St. Journal paid link, more recent overview here) has become nearly everyone’s favorite Go-To Big Valley Trend. And for good reason – the idea seductively resonates on many different levels, and forms the backbone of not just Andreessen’s investment thesis, but of much of the current ferment in our startup-driven industry.

A bit of background: Andreessen’s core argument is that nearly every industry in the world is being driven by or turned into software in one way or another. In some places, this process is deeply underway: The entertainment business is almost all software now, for example, and the attendant disruption has created extraordinary value for savvy investors in companies like Amazon, Netflix, and Apple. Further, Marc points out that the largest company in direct marketing these days is a software company: Google. His  thesis extends to transportation (think Uber but also FedEx, which runs on software), retail (besides Amazon, Walmart is a data machine),  healthcare (huge data opportunity, as yet unrealized), energy (same), and even defense. From his Journal article:

The modern combat soldier is embedded in a web of software that provides intelligence, communications, logistics and weapons guidance. Software-powered drones launch airstrikes without putting human pilots at risk. Intelligence agencies do large-scale data mining with software to uncover and track potential terrorist plots.

That quote reminds me of Wired’s first cover story, in 1993, about the future of war. But in 1991, two years before even that watershed moment (well, for me anyway), Yale scholar Gelernter published Mirror Worlds, and in it he predicted that we’d be putting the entire “universe in a shoebox” via software.  Early in the book, Gelernter posits the concept of the Mirror World, which might best be described as a more benign version of The Matrix, specific to any given task, place, or institution. He lays out how such worlds will come to be, and declares that the technology already exists for such worlds to be created. “The software revolution hasn’t begun yet; but it will soon,” he promises.

As we become infinite shadows of data, I sense Gelernter is right, and VCs like Andreessen and the entrepreneurs they are backing are leading the charge. I’ll be reviewing Mirror Worlds later in the summer – I’m spending time with Gelernter at his home in New Haven next month – but for now, I wanted to just note how far we’ve come, and invite all of you, if you are fans of his work, to help me ask Gelernter intelligent questions about how his original thesis has morphed in two decades.

It seems to me that if true “mirror worlds” are going to emerge, the first step will have to be “software eating the world” – i.e., we’ll have to infect our entire physical realities with software, such that those realities emanate real-time, useful data. That seems to be happening apace. And the implications of how we go about architecting such systems are massive.

One of my favorite passages from Mirror Worlds, for what it’s worth:

The intellectual content, the social implications of these software gizmos make them far too important to be left in the hands of the computer sciencearchy…..Public policy will be forced to come to grips with the implications. So will every thinking person: A software revolution will change the way society’s business is conducted, and it will change the intellectual landscape.

Indeed!

How Not To Post A Comment

By - June 30, 2012

Recently my site has been hit with a ton of “manual spam” – folks who are paid to post short comments in the hope they’ll appear and drive PageRank back to various sites (or perhaps just increase their or their clients’ visibility). It’s not hard to kill these comments, though it’s a bit of an irritant when they pile up. I don’t really mind, because their full-blown amateur-hour earnestness is pretty entertaining. Besides leaving chuckleworthy comments like “Facebook now 100 billion company there big really now”, the spammers also leave behind their user handles, which are simply priceless. Enjoy:


Google’s Transparency Report: A Good And Troubling Thing

By - June 19, 2012

A couple of days ago Google released its latest “Transparency Report,” part of the company’s ongoing commitment to disclose requests by individuals, corporations, and governments to change what users see in search results and other Google properties such as YouTube.

The press coverage of Google’s report was copious – far more than the prior two years, and for good reason. This week’s disclosure included Google’s bi-annual report of government takedown requests (corporate and individual requests are updated in near real time). The news was not comforting.

As the Atlantic wrote:

The stories Google tells to accompany the broad-brush numbers (found in the “annotations” section and its blog) paint a picture to accompany those numbers that Google calls “alarming” — noting, in particular, that some of the requests for removal of political speech come from “Western democracies not typically associated with censorship.”

The number of takedown requests from governments is on the rise – up about 100% year over year for the US alone. Part of this, perhaps, can be explained by what might be called a “catch-up effect” – governments are coming to terms with the pervasive power of digital information, and finally getting their heads around trying to control it, much as governments have attempted to control more analog forms of information like newspapers, television stations, and books.

But as we know, digital information is very, very different. It’s one thing to try to control the press, it’s quite another to do the same with the blog postings, YouTube videos, Twitter feeds, and emails of an entire citizenry. Given the explosion of arguably illegal or simply embarrassing information available to Google’s crawlers (cough, cough, Wikileaks), I’m rather surprised that worldwide government takedown requests haven’t grown at an exponential rate.

But to me, the rise of government takedown requests isn’t nearly as interesting as the role Google and other companies play in all of this. As I’ve written elsewhere, it seems that as we move our public selves into the digital sphere, we are also moving our trust from the institutions of government to the institution of the corporation. For example, our offline identity is established by a government ID like a driver’s license. Online, many of us view Facebook as our identity service. Prior to email, our private correspondence was secured by a government institution called the postal service. Today, we trust AOL, Microsoft, Yahoo, Facebook, or Gmail with our private utterances. When documents were analog, they were protected by government laws against unreasonable search and seizure. When they live in the cloud…the ground is shifting. I could go on, but I think you get my point.

As we move ourselves into the realm of digital information, a realm mediated by private corporations, those corporations naturally become the focus of government attention. I find Google’s Transparency Report to be a refreshing response to this government embrace – but it’s an exercise that almost no other corporation completes (Twitter has a record of disclosing, but on a case-by-case basis). Where is Amazon’s Transparency Report? Yahoo’s? Microsoft’s? And of course, the biggest question in terms of scale and personal information – where is Facebook’s? Oh, and of course, where is Apple’s?

Put another way: If we are shifting our trust from the government to the corporation, who’s watching the corporations? With government, we’ve at least got clear legal recourse – in the United States, we’ve got the Constitution, the Freedom of Information Act, and a deep legal history protecting the role of the press – what Edmund Burke reportedly called the Fourth Estate. With corporations, we’re on far less comforting ground – most of us have agreed to Terms of Service we’ve never read, much less studied in sixth grade civics class.

As the Atlantic concludes:

Google is trying to make these decisions responsibly, and the outcome, as detailed in the report, is reason to have confidence in Google as an arbiter of these things if, as is the case, Google is going to be the arbiter of these issues. But unlike a US Court, we don’t see the transcripts of oral arguments, or the detailed reasoning of a judge. …The Transparency Report sheds more light on the governments Google deals with than with its own internal processes for making judgments about compliance….Google’s Transparency Report is the work of a company that is grappling with its power and trying to show its work.

I applaud Google’s efforts here, but I’m wary of placing such an important public trust in the hands of private corporations alone. Google is a powerful company, with access to a wide swath of the world’s information. But with the rise of walled gardens like iOS and Facebook, an increasing amount of our information doesn’t touch Google’s servers. We literally are in the dark about how this data is being accessed by governments around the world.

Google is setting an example I hope all corporations with access to our data will follow. So far, however, most companies don’t. And that should give all of us pause, and it should be the basis of an ongoing conversation about the role of government in our digital lives.

On Small, Intimate Data

By - May 29, 2012

Part of the research I am doing for the book involves trying to get my head around the concept of “Big Data,” given the premise that we are in a fundamental shift to a digitally driven society. Big data, as you all know, is super hot – Facebook derives its value because of all that big data it has on you and me, Google is probably the original consumer-facing big data company (though Amazon might take issue with that), Microsoft is betting the farm on data in the cloud, Splunk just had a hot IPO because it’s a Big Data play, and so on.

But I’m starting to wonder if Big Data is the right metaphor for all of us as we continue this journey toward a digitally enhanced future. It feels so – impersonal – Big Data is something that is done to us or without regard for us as individuals. We need a metaphor that is more about the person, and less about the machine. At the very least, it should start with us, no?

Elsewhere I’ve written about the intersection of data and the platform for that data – expect a lot more from me on this subject in the future. But in short, I am unconvinced that the current architecture we’ve adopted is ideal – where all “our” data, along with the data created by that data’s co-mingling with other data, lives in “cloud” platforms controlled by large corporations whose terms and values we may or may not agree with (or even pay attention to, though some interesting folks are starting to). And the grammar and vocabulary now seeping into our culture are equally mundane and bereft of the subject’s true potential – the creation, sharing, and intermingling of data is perhaps the most important development of our generation, in terms of the potential good it can create in the world.

At Web 2 last year a significant theme arose around the idea of “You Are the Platform,” driven by people and companies like Chris Poole, Mozilla, Singly, and many others. I think this is an under-appreciated and important idea for our industry, and it centers around, to torture a phrase, the idea of “small” rather than Big Data. To me, small means limited, intimate, and actionable by individuals. It’s small in the same sense that the original web was “small pieces loosely joined” (and the web itself was “big.”)  It’s intimate in that it’s data that matters a lot to each of us, and that we share with much the same kind of social parameters that might constrain a story at an intimate dinner gathering, or a presentation at a business meeting. And should we choose to share a small amount of intimate data with “the cloud,” it’s important that the cloud understand the nature of that data as distinct from its masses of “Big Data.”

An undeveloped idea, to be sure, but I wanted to sketch this out today before I leave for a week of travel.

Facebook’s Real Question: What’s the “Native Model”?

By - May 23, 2012

 

The headlines about Facebook’s IPO – along with questions about its business model – are now officially cringeworthy. It’s an ongoing, rolling study in how society digests important news about our industry, and it’s far from played out. But we seem at an interesting tipping point in perception, and now seemed a good time to weigh in with a few words on the subject.

Prior to Facebook’s IPO, I drafted a post about its core business model (targeted display advertising), but decided not to publish it. The main thrust of my post is below, but I want to explain why I didn’t post right away, and provide you all with something of a “tick tock” of what’s happened over the past few days.

The truth is, I didn’t post last week because I didn’t feel like piling on to what was becoming a media frenzy. Less than 24 hours before the biggest Internet IPO in history, the negative stories questioning Facebook’s core revenue model were coming fast and furious. My piece wasn’t negative, per se, its intention was to be thoughtful. And in the face of a media scrum, I often pull back until the dust settles. (There’s a media business in there somewhere, but I digress).

I figured I’d wait till Monday. Things would have settled down by then…

Well, that didn’t happen. Compared to Google’s IPO, which was controversial for very different reasons (they ran a “modified auction,” remember?), the Facebook IPO is quickly becoming the biggest story in tech so far this year. And unfortunately for the good people at Facebook, it’s not a positive one.

The starting gun of Facebook’s IPO woes was the news that GM planned to pull its $10 million spend – but would continue to invest around $30 million in maintaining its Facebook “presence.” Interestingly, that $30 million was not going to Facebook, but rather to GM’s agency and other partners. I’m not sure how that $30 million is spent – that’s a lot of cheddar to have a presence anywhere (you could build about 15 Instagrams with that kind of money, for example). But most have speculated it goes to staffing social media experts and working with companies like Buddy Media, buying “likes” through third party ad networks, and maintaining a burgeoning amount of content to feed GM’s myriad and increasingly sophisticated presence on the site.

Now, some folks have said the reason GM pulled its ads was that the auto giant failed to understand how to market on Facebook – but if that’s true, I’m not sure it’s entirely GM’s fault. Regardless, since the original WSJ piece came out, a raft of pieces questioning Facebook’s money machine have appeared, and they mostly say the same thing. Here’s last week’s New York Times, for example (titled Ahead of Facebook I.P.O., a Skeptical Madison Ave):

“It’s one of the most powerful branding mechanisms in the world, but it’s not an advertising mechanism,” said Martin Sorrell, chief executive of WPP, the giant advertising agency.

“Facebook’s a silo,” said Darren Herman, the chief digital media officer at the Media Kitchen, an agency that helps clients on Facebook. “It is very hard to understand the efficacy of what a Facebook like, or fan or follow is worth.”

It seems, just ahead of the IPO, folks were realizing that Facebook doesn’t work like Google, or the web at large. It’s a service layered on top of the Web, and it has its own rules, its own ecosystem, and its own “native advertising platform.” In the run up to the IPO, a lot of folks began questioning whether that platform stands the test of time.

I’ll have more thoughts on that below, after a quick review of the past few days in FacebookLand.

What Just Happened?!

As I outlined above, Facebook faced a building storyline about the efficacy of  its core revenue model, right before the opening bell. Not a good start, but then again, not unusual for a company going public.

One of the inevitabilities of negative news about a company is that it begets more negativity – people start to look for patterns that might prove the initial bad news was just the tip of an iceberg. When word came out last week that demand for the stock was so high that insiders planned to sell even more shares at the open, many industry folks I spoke to began to wonder if the “greater fool” theory was kicking in. In other words, these people wondered, if the bankers and early investors in Facebook were increasing the number of shares they were selling at the outset, perhaps they knew something the general public didn’t – maybe they thought that $38 was as high as the stock was going to get, at least for a while.

Clearly, those industry folks were talking to more than just me. The press started questioning the increase. As Bloomberg reported at the time: “…insiders’ decision to pare holdings further may heighten some investors’ concern over Facebook’s earnings growth, said Greenwood Capital’s Walter Todd.”

That quote would prove prescient.

As Facebook opened trading last Friday, the stock instantly shot up – always taken as a good sign – but then it began to sink. Were it not for significant supportive buying by the offering’s lead banker, the stock would have closed below its offering price, an embarrassing signal that the offering was poorly handled. Facebook closed its first day of trading up marginally – not exactly the rocketship many expected (one crowdsourced site predicted it would soar to $54, for example).

Then things got really bad. Over the weekend, officials at NASDAQ, the exchange where Facebook debuted, admitted they bungled the stock’s opening trades due to the massive demand, citing technical and other issues. Monday, the Wall Street Journal, among many others, questioned Morgan Stanley’s support of the stock. To make matters worse, the stock slid to around $34 by the end of the day.  A frenzy of media coverage erupted – including a number of extraordinary allegations, first made late Monday evening, around insider information provided verbally to institutional investors but not disclosed to the public. That information included concerns that Facebook’s ad revenues were not growing as quickly as first thought, and that mobile usage, where Facebook’s monetization is weak, was exploding, exposing another hole in the company’s revenue model.

In other words, what my industry sources suspected might have been true  – that insiders knew something, and decided to get out when the getting was good – may have been what really happened. True or not, such a story taints the offering considerably.

Predictably, those allegations have spawned calls for investigations by regulatory authorities, as well as lawsuits and subpoenas from individual investors and the state of Massachusetts. On Tuesday, the stock sank again, closing near $31 – $7 below its offering price and more than $10 off its high point on opening day.

Not exactly a honeymoon for new public company CEO Mark Zuckerberg, who got married last Sunday to his college sweetheart. Today’s early trading must provide at least some comfort – Facebook is trading a bit up, in the $32 range, a price that many financial news outlets reported as the number most sophisticated investors felt was correct in the first place.

Is the worst of it over for Facebook’s IPO? I have no idea. But the core of the issue is what’s most interesting to me.

Stepping Back: What’s This Really All About?

Facebook is  a very large, very profitable company and I am sure it will find its feet. I’m not a stock analyst, and I’m not going to try to predict whether or not the company is properly valued at any price.

But I do have a few thoughts about the underlying questions that are driving this whole fracas – Facebook’s revenue model.

Facebook makes 82% of its money by selling targeted display advertising – boxes on the top and right side of the site (it’s recently added ads at logout, and in newsfeeds). Not a particularly unique model on its face, but certainly unique underneath: Because Facebook knows so much about each person on its service, it can target in ways Google and others can only dream about. Over the years, Facebook has added new advertising products based on the unique identity, interest, and relationship data it owns: Advertisers can incorporate the fact that a friend of a friend “likes” a product, for example. Or they can incorporate their own marketing content into their ads, a practice known as “conversational marketing” that I’ve been on about for seven or so years (for more on that, see my post Conversational Marketing Is Hot – Again. Thanks Facebook!).

But as many have pointed out, Facebook’s approach to advertising has a problem: People don’t (yet) come to Facebook with the intention of consuming quality content (as they do with media sites), or finding an answer to a question (as they do at Google search). Yet Facebook’s ad system combines both those models – it employs a display ad unit (the foundation of brand-driven media sites) as well as a sophisticated ad-buying platform that’d be familiar to anyone who’s ever used Google AdWords.

I’m not sure how many advertisers use Facebook, but it’s probably a fair guess to say the number approaches or exceeds the hundreds of thousands. That’s about how many used Overture and Google a decade ago. The big question is simply this: Do those Facebook ads work as well as, or better than, other approaches? If the answer is yes, the question of valuation is rather moot. If the answer is no…Facebook’s got some work to do.

No such question hung over Google upon its stock debut. AdWords worked. People came to search with clear intent, and if you, as an advertiser, could match your product or service to that intent, you won. You’d put as much money as you could into the Google machine, because profit came out the other side. It was an entirely new model for advertising.

I think it’s fair to say the same is not yet true for Facebook’s native advertising solution. And that’s really what Facebook Ads are: the biggest example of a platform-specific “native advertising” play since Google AdWords broke out ten years ago.

But it’s not clear that Facebook’s ad platform works better than any number of other alternatives. For brand advertisers, those large “rising star” units, replete with video capabilities and rich contextual placements, are a damn good option, and increasingly affordable. And if an advertiser wants to message at the point of intent, well, that’s what Google (and Bing) are for.

It’s astonishing how quickly Facebook has gotten to $4 billion in revenue – but at the end of the day, marketers must justify their spend. Sure, it makes sense to engage on a platform where nearly a billion people spend hours each month. But the question remains – how do you engage, and whom do you pay for that engagement? Facebook is huge, and terribly successful at engaging its users. But what GM seems to have realized is that it can engage all day long on Facebook, without having to pay Facebook for the privilege of doing so. Perhaps the question can be rephrased this way: Has Facebook figured out how to deliver marketers long-term value creation? The jury seems out on that one.

Now that Facebook is public, it will face relentless pressure to convince that jury, which now demonstrates its vote via a real time stock price. That pressure could spawn new and potentially more intrusive ad units, and/or new approaches to monetization we’ve yet to see – including, as I predicted in January, a web-wide display network driven by Facebook data.

As Chris Dixon wrote earlier in the month:

The key question when trying to value Facebook’s stock is: can they find another business model that generates significantly more revenue per user without hurting the user experience?

A good question, and one I can only imagine folks at Facebook are pondering at the moment. Currently, Facebook’s ads are, in the main, stuck in a model that doesn’t feel truly native to how people actually use the service. Can Facebook come up with a better solution? Integration of ad units into newsfeeds is one approach that bears watching (it reminds me of Twitter’s approach, for example), but I’m not sure that’s enough to feed a $4 billion beast.

These questions are fascinating to consider – in particular in light of the “native monetization” craze sweeping other platforms like Tumblr, Twitter, Pinterest, and others. As I’ve argued elsewhere, unique approaches to marketing work only if they prove a return on total investment, including the cost of creating, optimizing, and supporting those native ad units when compared to other marketing approaches. Facebook clearly has the heft, and now the cash, to spend considerable resources to prove its approach. I can’t wait to see what happens next.

Larry Lessig on Facebook, Apple, and the Future of “Code”

By - May 09, 2012

Larry Lessig is an accomplished author, lawyer, and professor, and until recently, was one of the leading active public intellectuals in the Internet space. But as I wrote in my review of his last book (Is Our Republic Lost?), in the past few years Lessig has changed his focus from Internet law to reforming our federal government.

But that doesn’t mean Lessig has stopped thinking about our industry, as the dialog below will attest. Our conversation came about last month after I finished reading Code and Other Laws of Cyberspace, Version 2. The original book, written in 1999, is still considered an authoritative text on how the code of computing platforms interacts with our legal and social codes. In 2006, Lessig “crowdsourced” an update to his book, and released it as “Version 2.0.” I’d never read the updated work (and honestly didn’t remember the details of the first book), so finally, six years later, I dove in again.

It’s a worthy dive, but not an easy one. Lessig is a lawyer by nature, and his argument is laid out like proofs in a case. Narrative is sparse, and structure sometimes trumps writing style. But his essential point – that the Internet is not some open “wild west” destined to always be free of regulation – is soundly made. In fact, Lessig argues, the Internet is quite possibly the most regulable technology ever invented, and if we don’t realize that fact, and protect ourselves from it, we’re in for some serious pain down the road. And for Lessig, the government isn’t the only potential regulator. Instead, Lessig argues, commercial interests may become the most pervasive regulators on the Internet.

Indeed, during the seven years between Code’s first version and its second, much had occurred to prove Lessig’s point. But even as Lessig was putting the finishing touches on the second version of his manuscript, a new force was erupting from the open web: Facebook. And a year after that, the iPhone redefined the Internet once again.

In Code, Lessig enumerates several examples of how online services create explicit codes of control – including the early AOL, Second Life, and many others. He takes the reader through important lessons in understanding regulation as more than just governmental – explaining normative (social), market (commercial), and code-based (technological) regulation. He warns that once we commit our lives to commercial services that hold our identity, a major breach of security will most likely force the government into enacting overzealous and anti-constitutional measures (think 9/11 and the Patriot Act). He makes a case for the proactive creation of an intelligent identity layer for the Internet, one that might offer just the right amount of information for the task at hand. In 2006, such an identity layer was a controversial idea – no one wanted the government, for example, to control identity on the web.

But for reasons we’re still parsing as a culture, in the six years since the publication of Code v2, nearly 1 billion of us have become comfortable with Facebook as our de facto identity, and hundreds of millions of us have become inhabitants of Apple’s iOS.

Instead of going into more detail on the book (as I have in many other reviews), I thought I’d reach out to Lessig and ask him about this turn of events. Below is a lightly edited transcript of our dialog. I think you’ll find it provocative.

As to the book: If you consider yourself active in the core issues of the Internet industry, do yourself a favor and read it. It’s worth your time.

Q: After reading your updated Code v2, which among many other things discusses how easily the Internet might become far more regulated than it once was, I found myself scribbling one word in the margins over and over again. That word was “Facebook.”

You and your community updated your 1999 classic in 2006, a year or two before Facebook broke out, and several years before it became the force it is now. In Code you cover the regulatory architectures of places where people gather online, including MUDS, AOL, and the then-hot darling known as Second Life. But the word Facebook isn’t in the text.

What do you make of Facebook, given the framework of Code v2?

Lessig: If I were writing Code v3, there’d be a chapter — right after I explained the way (1) code regulates, and (2) commerce will use code to regulate — titled: “See, e.g., Facebook.” For it strikes me that no phenomenon since 2006 better demonstrates precisely the dynamic I was trying to describe. The platform is dominant, and built into the platform are a million ways in which behavior is regulated. And among those million ways are 10 million instances of code being used to give to Facebook a kind of value that without code couldn’t be realized. Hundreds of millions from across the world live “in” Facebook. It structures and regulates their lives while there more directly than any government does. There are of course limits to what Facebook can do. But the limits depend upon what users see. And Facebook has not yet committed itself to the kind of transparency that should give people confidence. Nor has it tied itself to the earlier and enabling values of the internet, whether open source or free culture.

Q: Jonathan Zittrain wrote his book two years after Code v2, and warned of non-generative systems that might destroy the original values of the Internet. Since then, Apple iOS (the “iWorld”) and Facebook have blossomed, and show no signs of slowing down. Do you believe we’re in a pendulum swing, or are you more pessimistic – that consumers are voting with their dollars, devices, and data for a more closed ecosystem?

Lessig: The trend JZ identified is profound and accelerating, and most of us who celebrate the “free and open” net are simply in denial. Facebook now lives oblivious to the values of open source software, or free culture. Apple has fully normalized the iNannyState. And unless Google’s Android demonstrates how open can coexist with secure, I fear the push away from our past will only continue. And then when our i9/11 event happens — meaning simply a significant and destructive cyber event, not necessarily tied to any particular terrorist group — the political will to return to control will be almost irresistible.

The tragedy in all this is that it doesn’t have to be this way. If we could push to a better identity layer in the net, we could get both better privacy and better security. But neither side in this extremist’s battle is willing to take the first step towards this obvious solution. And so in the end I fear the extremists I like least will win.

Q: You seem profoundly disappointed in our industry. What can folks who want to make a change do?

Lessig: Not at all. The industry is doing what industry does best — doing well, given the rules as they are. What industry is never good at (and is sometimes quite evil at) is identifying the best mix of rules. Government is supposed to do something with that. Our problem is that we have today such a fundamentally dysfunctional government that we don’t even recognize the idea that it might have a useful role here. So we get stuck in these policy-dead-ends, with enormous gains to both sides left on the table.

And that’s only to speak about the hard problems — which security in the Net is. Much worse (and more frustrating) are the easy problems which the government also can’t solve, not because the answer isn’t clear (again, these are the easy problems) but because the incumbents are so effective at blocking the answer that makes more sense so as to preserve the answer that makes them more dollars. Think about the “copyright wars” — practically every sane soul is now focused on a resolution of that war that is almost precisely what the disinterested souls were arguing a dozen years ago (editor’s note: abolishing DRM). Yet the short-termism of the industry wouldn’t allow those answers a dozen years ago, so we have had a completely useless war which has benefited no one (save the lawyers-as-soldiers in that war). We’ve lost a decade of competitive innovation in ways to spur and spread content that would ultimately benefit creators, because the dinosaurs owned the lobbyists.

—-

I could have gone on for some time with Lessig, but I wanted to stop there, and invite your questions in the comments section. Lessig is pretty busy with his current work, which focuses on those lobbyists and the culture of money in Congress, but if he can find the time, he’ll respond to your questions in the comments below, or to me in email, and I’ll update the post.

###

Other works I’ve reviewed: 

You Are Not A Gadget by Jaron Lanier (review)

Wikileaks And the Age of Transparency  by Micah Sifry (review)

Republic Lost by Larry Lessig (review)

Where Good Ideas Come From: A Natural History of Innovation by Steven Johnson (my review)

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (my review)

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

A Coachella “Fail-ble”: Do We Hold Spectrum in Common?

By - April 18, 2012

Neon Indian at Coachella last weekend.

 

Last weekend I had the distinct pleasure of taking two days off the grid and heading to a music festival called Coachella. Now, when I say “off the grid,” I mean time away from my normal work life (yes, I tend to work a bit on the weekends), and my normal family life (I usually reserve the balance of weekends for family; this was the first couple of days “alone” I’ve had in more than a year).

What I most certainly did not want to be was off the information grid – the data lifeline that all of us so presumptively leverage through our digital devices. But for the entire time I was at the festival, unfortunately, that’s exactly what happened – to me, and to most of the 85,000 or so other people trying to use their smartphones while at the show.

I’m not writing this post to blame AT&T (my carrier), or Verizon, or the producers of Coachella, though each has some part to play in the failure that occurred last weekend (and most likely will occur again this weekend, when Coachella produces its second of two festival weekends). Rather, I’m deeply interested in how this story came about, why it matters, and what, if anything, can be done about it.

First, let’s set some assumptions. When tens of thousands of young people (the average age of a Coachella fan is in the mid to low 20s) gather in any one place in the United States, it’s a safe bet these things are true:

– Nearly everyone has a smartphone in their possession.

– Nearly everyone plans on using that smartphone to connect with friends at the show, as well as to record, share, and amplify the experience they are having while at the event.

– Nearly everyone knows that service at large events is awful, yet they hope their phone will work, at least some of the time. Perhaps a cash-rich sponsor will pay to bring in extra bandwidth, or maybe the promoter will spring for it out of the profit from ticket sales. Regardless, they expect some service delays, and plan on using low-bandwidth texting services more than they’d like to.

– Nearly everyone leaves a show like Coachella unhappy with their service provider, and unable to truly express themselves in ways they wished they could. Those ways might include, in no particular order: Communicating with friends so as to meet up (“See you at the Outdoor stage, right side middle, for Grace Potter!”), tweeting or Facebooking a message to followers (“Neon Indian is killing it right now!”), checking in on Foursquare or any other location service so as to gain value in a social game (or in my case, to create digital breadcrumbs to remind me who I was once in my dotage), uploading photos to any number of social photo services like Instagram, or using new, music-specific apps like TastemakerX on a whim (“I’d like to buy 100 shares of Yuck, those guys just blew me away!”). Oh, and it’d be nice to make a phone call home if you need to.

But for the most part, I and all my friends were unable to do any of these things at Coachella last weekend, at least not in real time. I felt as if I was drinking from a very thin, very clogged cocktail straw. Data service was simply nonexistent onsite. Texts came in, but more often than not they were timeshifted: I’d get ten texts delivered some 20 minutes after they were sent. And phone service was about as good as it is on Sand Hill Road – spotty, prone to drops, and often just not available. I did manage to get some data service while at the show, but that was because I found a press tent and logged onto the local wifi network there, or I “tricked” my phone into thinking it was logging onto the network for the first time (by turning “airplane mode” off and on over and over again).

This all left me wondering – what if? What if there was an open pipe, both up and down, that could handle all that traffic? What if everyone who came to the show knew that pipe would be open, and work? What kind of value would have been created had that been the case? How much more data would have populated the world, how much richer would literally millions of people’s lives been for seeing the joyful expressions of their friends as they engaged in a wonderful experience? How much more learning might have countless startups gathered, had they been able to truly capture the real time intentions of their customers at such an event?

In short, how much have we lost as a society because we’ve failed to solve our own bandwidth problems?

I know, it’s just a rock festival, and jeez Battelle, shut off your phone and just dance, right? OK, I get that, and trust me, I did dance, a lot. But I also like to take a minute here or there to connect to the people I love, or who follow me, and share with them my passions and my excitement. We are becoming a digital society; to pretend otherwise is to ignore reality. And with very few exceptions, it was just not possible to intermingle the digital and the physical at Coachella. (I did hear reports that folks with Verizon were having better luck, but that’s probably because there were fewer Verizon iPhones than AT&T ones. And think about that language – “luck”?!)

Way back in 2008, when the iPhone was new and Instagram was a gleam in Kevin Systrom’s eye, I was involved in creating a service called CrowdFire. It was a way for fans at a festival (the first was Outside Lands) to share photos, tweets, and texts in a location- and event-specific way. I’ve always rued our decision not to spin CrowdFire out as a separate company, but regardless, my main memory of the service is how crippled it was by bandwidth failure. The situation there was actually better than at Coachella, but not by much. So in four years, we’ve managed to go backwards on this problem.

Of course, the amount of data we’re using has exploded, so credit to the carriers for doing their best to keep up. But can they get to the promised land? I wonder, at least under the current system of economic incentives we’ve adopted in the United States. Sure, there will always be traffic jams, but have we really thought through the best way to build “the Internet in the sky”?

Put another way, do we not hold the ability to share who we are, our very digital reflections, as a commons to which all of us should have equal access?

As I was driving to the festival last Saturday, I engaged in a conversation with one of my fellow passengers about this subject. What do we, as a society, hold in commons, and where do digital services fit in, if at all?

Well, we were driving to Coachella on city roads, held in commons through municipalities, for one. And we then got on Interstate 10 for a few miles, which is held in commons by federal agencies in conjunction with local governments. So it’s pretty clear we have, as a society, made the decision that the infrastructure for the transport of atoms – whether they be cars and the humans in them, or trucks and the commercial goods within them – is held in a public commons. Sure, we hit some traffic, but it wasn’t that bad, and there were ways to route around it.

What else do we hold in a commons? We ticked off the list of stuff upon which we depend – the transportation of water and power to our homes and our businesses, for example. Those certainly are (mostly) held in the public commons as well.

So it’s pretty clear that over the course of time, we’ve decided that when it comes to moving ourselves around, and making sure we have power and water, we’re OK with the government managing the infrastructure. But what of bits? What of “ourselves” as expressed digitally?

For the “hardwired” Internet – the place that gave us the Web, Google, Facebook, et al – we built upon what was arguably a public commons infrastructure. Thanks to government regulation and social norms, the hard-wired Internet was architected to be open to all, with a commercial imperative that ensured bandwidth issues were addressed in a reasonable fashion (Cisco, Comcast, etc.).

But with wireless, we’ve taken what is a public asset – radio spectrum – and we’ve licensed it to private companies under a thicket of regulatory oversight. And without laying blame – there’s probably plenty of it to go around – we’ve proceeded to make a mess of it. What we have here, it seems to me, is a failure. Is it a market failure – the kind that usually precedes government action? I’m not sure that’s the case. But it’s a fail, nevertheless. I’d like to get smarter on this issue, even though the prospect of it makes my head hurt.

As I wrote yesterday, I recently spent some time in Washington DC, and sat down with the Obama administration’s point person on that question, FCC Chair Julius Genachowski. As I expected, the issue of spectrum allocation is extraordinarily complicated, and it’s unlikely we’ll find a way out of the “Coachella Fail-ble” anytime soon. But there is hope. Technological disruption is one way – watch the “white spaces,” for instance. And in a world where marketing claims of being “the fastest” spur customer switching, our carriers are madly scrambling to upgrade their networks. Yet in the US, wireless speeds remain far below those of countries in Europe and Asia.

I plan on finding out more as I report, but I may as well ask you, my smarter readers: Why is this the case? And does it have anything to do with what those other countries consider to be held in “digital commons”?

I’ll readily admit I’m simply a journeyman asking questions here, not a firebrand looking to lay blame. I understand this is a complicated topic, but it’s one for which I’d love your input and guidance.