Recently my site has been hit with a ton of “manual spam” – folks who are paid to post short comments in the hope they’ll appear and drive PageRank back to various sites (or perhaps just increase their or their clients’ visibility). It’s not hard to kill these comments, though it’s a bit of an irritant when they pile up. I don’t really mind, because their full-blown amateur-hour earnestness is pretty entertaining. Besides leaving chuckleworthy comments like “Facebook now 100 billion company there big really now”, the spammers also leave behind their user handles, which are simply priceless. Enjoy:
A couple of days ago Google released its latest “Transparency Report,” part of the company’s ongoing commitment to disclose requests by individuals, corporations, and governments to change what users see in search results and other Google properties such as YouTube.
The press coverage of Google’s report was copious – far more than in the prior two years, and for good reason. This week’s disclosure included Google’s twice-yearly report of government takedown requests (corporate and individual requests are updated in near real time). The news was not comforting.
As the Atlantic wrote:
The stories Google tells to accompany the broad-brush numbers (found in the “annotations” section and its blog) paint a picture that Google calls “alarming” — noting, in particular, that some of the requests for removal of political speech come from “Western democracies not typically associated with censorship.”
The number of takedown requests from governments is on the rise – up about 100% year over year for the US alone. Part of this, perhaps, can be explained by what might be called a “catch-up effect” – governments are coming to terms with the pervasive power of digital information, and finally getting their heads around trying to control it, much as they have long attempted to control more analog forms of information like newspapers, television stations, and books.
But as we know, digital information is very, very different. It’s one thing to try to control the press, it’s quite another to do the same with the blog postings, YouTube videos, Twitter feeds, and emails of an entire citizenry. Given the explosion of arguably illegal or simply embarrassing information available to Google’s crawlers (cough, cough, Wikileaks), I’m rather surprised that worldwide government takedown requests haven’t grown at an exponential rate.
But to me, the rise of government takedown requests isn’t nearly as interesting as the role Google and other companies play in all of this. As I’ve written elsewhere, it seems that as we move our public selves into the digital sphere, we are also moving our trust from the institutions of government to the institution of the corporation. For example, our offline identity is established by a government ID like a driver’s license. Online, many of us view Facebook as our identity service. Prior to email, our private correspondence was secured by a government institution called the postal service. Today, we trust AOL, Microsoft, Yahoo, Facebook, or Gmail with our private utterances. When documents were analog, they were protected by government laws against unreasonable search and seizure. When they live in the cloud…the ground is shifting. I could go on, but I think you get my point.
As we move ourselves into the realm of digital information, a realm mediated by private corporations, those corporations naturally become the focus of government attention. I find Google’s Transparency Report to be a refreshing response to this government embrace – but it’s an exercise almost no other corporation undertakes (Twitter has a record of disclosing, but only on a case-by-case basis). Where is Amazon’s Transparency Report? Yahoo’s? Microsoft’s? And of course, the biggest question in terms of scale and personal information – where is Facebook’s? Oh, and of course, where is Apple’s?
Put another way: If we are shifting our trust from the government to the corporation, who’s watching the corporations? With government, we’ve at least got clear legal recourse – in the United States, we’ve got the Constitution, the Freedom of Information Act, and a deep legal history protecting the role of the press as the Fourth Estate. With corporations, we’re on far less comforting ground – most of us have agreed to Terms of Service we’ve never read, much less studied in sixth grade civics class.
As the Atlantic concludes:
Google is trying to make these decisions responsibly, and the outcome, as detailed in the report, is reason to have confidence in Google as an arbiter of these things if, as is the case, Google is going to be the arbiter of these issues. But unlike a US Court, we don’t see the transcripts of oral arguments, or the detailed reasoning of a judge. …The Transparency Report sheds more light on the governments Google deals with than with its own internal processes for making judgments about compliance….Google’s Transparency Report is the work of a company that is grappling with its power and trying to show its work.
I applaud Google’s efforts here, but I’m wary of placing such an important public trust in the hands of private corporations alone. Google is a powerful company, with access to a wide swath of the world’s information. But with the rise of walled gardens like iOS and Facebook, an increasing amount of our information never touches Google’s servers. We are effectively in the dark about how that data is being accessed by governments around the world.
Google is setting an example I hope all corporations with access to our data will follow. So far, however, most companies don’t. And that should give all of us pause, and it should be the basis of an ongoing conversation about the role of government in our digital lives.
Part of the research I am doing for the book involves trying to get my head around the concept of “Big Data,” given the premise that we are in a fundamental shift to a digitally driven society. Big Data, as you all know, is super hot – Facebook derives its value from all the data it has on you and me, Google is probably the original consumer-facing Big Data company (though Amazon might take issue with that), Microsoft is betting the farm on data in the cloud, Splunk just had a hot IPO because it’s a Big Data play, and so on.
But I’m starting to wonder if Big Data is the right metaphor for all of us as we continue this journey toward a digitally enhanced future. It feels so impersonal – Big Data is something done to us, or without regard for us as individuals. We need a metaphor that is more about the person, and less about the machine. At the very least, it should start with us, no?
Elsewhere I’ve written about the intersection of data and the platform for that data – expect a lot more from me on this subject in the future. But in short, I am unconvinced that the current architecture we’ve adopted is ideal – one where all “our” data, along with the data created by that data’s co-mingling with other data, lives in “cloud” platforms controlled by large corporations whose terms and values we may or may not agree with (or even pay attention to, though some interesting folks are starting to). And the grammar and vocabulary now seeping into our culture are equally mundane, bereft of the subject’s true potential – the creation, sharing, and intermingling of data is perhaps the most important development of our generation, in terms of the potential good it can create in the world.
At Web 2 last year a significant theme arose around the idea of “You Are the Platform,” driven by people and companies like Chris Poole, Mozilla, Singly, and many others. I think this is an under-appreciated and important idea for our industry, and it centers around, to torture a phrase, the idea of “small” rather than Big Data. To me, small means limited, intimate, and actionable by individuals. It’s small in the same sense that the original web was “small pieces loosely joined” (and the web itself was “big.”) It’s intimate in that it’s data that matters a lot to each of us, and that we share with much the same kind of social parameters that might constrain a story at an intimate dinner gathering, or a presentation at a business meeting. And should we choose to share a small amount of intimate data with “the cloud,” it’s important that the cloud understand the nature of that data as distinct from its masses of “Big Data.”
An undeveloped idea, to be sure, but I wanted to sketch this out today before I leave for a week of travel.
The headlines about Facebook’s IPO – along with questions about its business model – are now officially cringeworthy. It’s an ongoing, rolling study in how society digests important news about our industry, and it’s far from played out. But we seem to be at an interesting tipping point in perception, and now seems a good time to weigh in with a few words on the subject.
Prior to Facebook’s IPO, I drafted a post about its core business model (targeted display advertising), but decided not to publish it. The main thrust of my post is below, but I want to explain why I didn’t post right away, and provide you all with something of a “tick-tock” of what’s happened over the past few days.
The truth is, I didn’t post last week because I didn’t feel like piling on to what was becoming a media frenzy. Less than 24 hours before the biggest Internet IPO in history, the negative stories questioning Facebook’s core revenue model were coming fast and furious. My piece wasn’t negative, per se; its intention was to be thoughtful. And in the face of a media scrum, I often pull back until the dust settles. (There’s a media business in there somewhere, but I digress.)
I figured I’d wait till Monday. Things would have settled down by then…
Well, that didn’t happen. Compared to Google’s IPO, which was controversial for very different reasons (they ran a “modified auction,” remember?), the Facebook IPO is quickly becoming the biggest story in tech so far this year. And unfortunately for the good people at Facebook, it’s not a positive one.
The starting gun of Facebook’s IPO woes was the news that GM planned to pull its $10 million ad spend – but would continue to invest around $30 million in maintaining its Facebook “presence.” Interestingly, that $30 million was not going to Facebook, but rather to GM’s agency and other partners. I’m not sure how that $30 million is spent – that’s a lot of cheddar to have a presence anywhere (you could build about 15 Instagrams with that kind of money, for example). But most have speculated it goes to staffing social media experts, working with companies like Buddy Media, buying “likes” through third-party ad networks, and maintaining a burgeoning amount of content to feed GM’s myriad and increasingly sophisticated presences on the site.
Now, some folks have said the reason GM pulled its ads was that the auto giant failed to understand how to market on Facebook – but if that’s true, I’m not sure it’s entirely GM’s fault. Regardless, since the original WSJ piece came out, a raft of pieces questioning Facebook’s money machine have appeared, and they mostly say the same thing. Here’s last week’s New York Times, for example (titled Ahead of Facebook I.P.O., a Skeptical Madison Ave.):
“It’s one of the most powerful branding mechanisms in the world, but it’s not an advertising mechanism,” said Martin Sorrell, chief executive of WPP, the giant advertising agency.
“Facebook’s a silo,” said Darren Herman, the chief digital media officer at the Media Kitchen, an agency that helps clients on Facebook. “It is very hard to understand the efficacy of what a Facebook like, or fan or follow is worth.”
It seems, just ahead of the IPO, folks were realizing that Facebook doesn’t work like Google, or the web at large. It’s a service layered on top of the Web, and it has its own rules, its own ecosystem, and its own “native advertising platform.” In the run-up to the IPO, a lot of folks began questioning whether that platform stands the test of time.
I’ll have more thoughts on that below, after a quick review of the past few days in FacebookLand.
What Just Happened?!
As I outlined above, Facebook faced a building storyline about the efficacy of its core revenue model, right before the opening bell. Not a good start, but then again, not unusual for a company going public.
One of the inevitabilities of negative news about a company is that it begets more negativity – people start to look for patterns that might prove that the initial bad news was just the tip of an iceberg. When word came out last week that demand for the stock was so high that insiders planned to sell even more shares at the open, many industry folks I spoke to began to wonder if the “greater fool” theory was kicking in. In other words, these people wondered, if the bankers and early investors in Facebook were increasing the number of shares they were selling at the outset, perhaps they knew something the general public didn’t – maybe they thought that $38 was as high as the stock was going to get – at least for a while.
Clearly, those industry folks were talking to more than just me. The press started questioning the increase. As Bloomberg reported at the time: “…insiders’ decision to pare holdings further may heighten some investors’ concern over Facebook’s earnings growth, said Greenwood Capital’s Walter Todd.”
That quote would prove prescient.
As Facebook opened trading last Friday, the stock instantly shot up – always taken as a good sign – but then it began to sink. Were it not for significant supportive buying by the offering’s lead banker, the stock would have closed below its opening price, an embarrassing signal that the offering was poorly handled. Facebook closed its first day of trading up marginally – not exactly the rocketship that many expected (a crowdsourced site predicted it would soar to $54, for example).
Then things got really bad. Over the weekend, officials at NASDAQ, the exchange where Facebook debuted, admitted they bungled the stock’s opening trades due to the massive demand, citing technical and other issues. Monday, the Wall Street Journal, among many others, questioned Morgan Stanley’s support of the stock. To make matters worse, the stock slid to around $34 by the end of the day. A frenzy of media coverage erupted – including a number of extraordinary allegations, first made late Monday evening, around insider information provided verbally to institutional investors but not disclosed to the public. That information included concerns that Facebook’s ad revenues were not growing as quickly as first thought, and that mobile usage, where Facebook’s monetization is weak, was exploding, exposing another hole in the company’s revenue model.
In other words, what my industry sources suspected might have been true – that insiders knew something, and decided to get out when the getting was good – may have been what really happened. True or not, such a story taints the offering considerably.
Predictably, those allegations have spawned calls for investigations by regulatory authorities, as well as lawsuits and subpoenas from individual investors and the state of Massachusetts. On Tuesday, the stock sank again, closing near $31 – $7 off its opening price and more than $10 off its high point on opening day.
Not exactly a honeymoon for new public company CEO Mark Zuckerberg, who got married last Sunday to his college sweetheart. Today’s early trading must provide at least some comfort – Facebook is trading a bit up, in the $32 range, a price that many financial news outlets reported as the number most sophisticated investors felt was correct in the first place.
Is the worst of it over for Facebook’s IPO? I have no idea. But the core of the issue is what’s most interesting to me.
Stepping Back: What’s This Really All About?
Facebook is a very large, very profitable company and I am sure it will find its feet. I’m not a stock analyst, and I’m not going to try to predict whether or not the company is properly valued at any price.
But I do have a few thoughts about the underlying question driving this whole fracas: Facebook’s revenue model.
Facebook makes 82% of its money by selling targeted display advertising – boxes on the top and right side of the site (it’s recently added ads at logout, and in newsfeeds). Not a particularly novel model on its face, but certainly unique underneath: Because Facebook knows so much about each person on its service, it can target in ways Google and others can only dream about. Over the years, Facebook has added new advertising products based on the unique identity, interest, and relationship data it owns: Advertisers can incorporate the fact that a friend of a friend “likes” a product, for example. Or they can incorporate their own marketing content into their ads, a practice known as “conversational marketing” that I’ve been on about for seven or so years (for more on that, see my post Conversational Marketing Is Hot – Again. Thanks Facebook!).
But as many have pointed out, Facebook’s approach to advertising has a problem: People don’t (yet) come to Facebook intending to consume quality content (as they do with media sites), or to find an answer to a question (as they do with Google search). Yet Facebook’s ad system combines both those models – it employs a display ad unit (the foundation of brand-driven media sites) as well as a sophisticated ad-buying platform that’d be familiar to anyone who’s ever used Google AdWords.
I’m not sure how many advertisers use Facebook, but it’s probably a fair guess that the number approaches, or crosses into, the hundreds of thousands. That’s about how many used Overture and Google a decade ago. The big question is simply this: Do those Facebook ads work as well as, or better than, other approaches? If the answer is yes, the question of valuation is rather moot. If the answer is no…Facebook’s got some work to do.
No such question hung over Google upon its stock debut. AdWords worked. People came to search with clear intent, and if you, as an advertiser, could match your product or service to that intent, you won. You’d put as much money as you could into the Google machine, because profit came out the other side. It was an entirely new model for advertising.
I think it’s fair to say the same is not yet true for Facebook’s native advertising solution. And that’s really what Facebook Ads are: the biggest example of a platform-specific “native advertising” play since Google AdWords broke out ten years ago.
But it’s not clear that Facebook’s ad platform works better than any number of other alternatives. For brand advertisers, those large “rising star” units, replete with video capabilities and rich contextual placements, are a damn good option, and increasingly affordable. And if an advertiser wants to message at the point of intent, well, that’s what Google (and Bing) are for.
It’s astonishing how quickly Facebook has gotten to $4 billion in revenue – but at the end of the day, marketers must justify their spend. Sure, it makes sense to engage on a platform where nearly a billion people spend hours each month. But the question remains – how do you engage, and whom do you pay for that engagement? Facebook is huge, and terribly successful at engaging its users. But what GM seems to have realized is that it can engage all day long on Facebook without having to pay Facebook for the privilege of doing so. Perhaps the question can be rephrased this way: Has Facebook figured out how to deliver marketers long-term value creation? The jury seems to be out on that one.
Now that Facebook is public, it will face relentless pressure to convince that jury, which now registers its vote via a real-time stock price. That pressure could force new and potentially more intrusive ad units, and/or approaches to monetization we’ve yet to see – including, as I predicted in January, a web-wide display network driven by Facebook data.
As Chris Dixon wrote earlier in the month:
The key question when trying to value Facebook’s stock is: can they find another business model that generates significantly more revenue per user without hurting the user experience?
A good question, and one I can only imagine folks at Facebook are pondering at the moment. Currently, Facebook’s ads are, for the most part, stuck in a model that doesn’t feel truly native to how people actually use the service. Can Facebook come up with a better solution? Integration of ad units into newsfeeds is one approach that bears watching (it reminds me of Twitter’s approach, for example), but I’m not sure that’s enough to feed a $4 billion beast.
These questions are fascinating to consider – particularly in light of the “native monetization” craze sweeping platforms like Tumblr, Twitter, and Pinterest. As I’ve argued elsewhere, unique approaches to marketing work only if they prove a return on total investment – including the cost of creating, optimizing, and supporting those native ad units – when compared to other marketing approaches. Facebook clearly has the heft, and now the cash, to spend considerable resources proving its approach. I can’t wait to see what happens next.
Larry Lessig is an accomplished author, lawyer, and professor, and until recently, was one of the leading active public intellectuals in the Internet space. But as I wrote in my review of his last book (Is Our Republic Lost?), in the past few years Lessig has changed his focus from Internet law to reforming our federal government.
But that doesn’t mean Lessig has stopped thinking about our industry, as the dialog below will attest. Our conversation came about last month after I finished reading Code and Other Laws of Cyberspace, Version 2. The original book, written in 1999, is still considered an authoritative text on how the code of computing platforms interacts with our legal and social codes. In 2006, Lessig “crowdsourced” an update to his book, and released it as “Version 2.0.” I’d never read the updated work (and honestly didn’t remember the details of the first book), so finally, six years later, I dove in again.
It’s a worthy dive, but not an easy one. Lessig is a lawyer by nature, and his argument is laid out like proofs in a case. Narrative is sparse, and structure sometimes trumps writing style. But his essential point – that the Internet is not some open “wild west” destined to always be free of regulation – is soundly made. In fact, Lessig argues, the Internet is quite possibly the most regulable technology ever invented, and if we don’t realize that fact, and protect ourselves from it, we’re in for some serious pain down the road. And for Lessig, the government isn’t the only potential regulator: commercial interests, he argues, may become the most pervasive regulators on the Internet.
Indeed, during the seven years between Code’s first version and its second, much had occurred to prove Lessig’s point. But even as Lessig was putting the finishing touches on the second version of his manuscript, a new force was erupting from the open web: Facebook. And a year after that, the iPhone redefined the Internet once again.
In Code, Lessig enumerates several examples of how online services create explicit codes of control – including the early AOL, Second Life, and many others. He takes the reader through important lessons in understanding regulation as more than just governmental – explaining normative (social), market (commercial), and code-based (technological) regulation. He warns that once we commit our lives to commercial services that hold our identity, a major breach of security will most likely force the government into enacting overzealous and anti-constitutional measures (think 9/11 and the Patriot Act). He makes a case for the proactive creation of an intelligent identity layer for the Internet, one that might offer just the right amount of information for the task at hand. In 2006, such an identity layer was a controversial idea – no one wanted the government, for example, to control identity on the web.
But for reasons we’re still parsing as a culture, in the six years since the publication of Code v2, nearly 1 billion of us have become comfortable with Facebook as our de facto identity, and hundreds of millions of us have become inhabitants of Apple’s iOS.
Instead of going into more detail on the book (as I have in many other reviews), I thought I’d reach out to Lessig and ask him about this turn of events. Below is a lightly edited transcript of our dialog. I think you’ll find it provocative.
As to the book: If you consider yourself active in the core issues of the Internet industry, do yourself a favor and read it. It’s worth your time.
Q: After reading your updated Code v2, which among many other things discusses how easily the Internet might become far more regulated than it once was, I found myself scribbling one word in the margins over and over again. That word was “Facebook.”
You and your community updated your 1999 classic in 2006, a year or two before Facebook broke out, and several years before it became the force it is now. In Code you cover the regulatory architectures of places where people gather online, including MUDs, AOL, and the then-hot darling known as Second Life. But the word Facebook isn’t in the text.
What do you make of Facebook, given the framework of Code v2?
Lessig: If I were writing Code v3, there’d be a chapter — right after I explained the way (1) code regulates, and (2) commerce will use code to regulate — titled: “See, e.g., Facebook.” For it strikes me that no phenomenon since 2006 better demonstrates precisely the dynamic I was trying to describe. The platform is dominant, and built into the platform are a million ways in which behavior is regulated. And among those million ways are 10 million instances of code being used to give Facebook a kind of value that couldn’t be realized without code. Hundreds of millions from across the world live “in” Facebook. More directly than any government, it structures and regulates their lives while they are there. There are of course limits to what Facebook can do. But the limits depend upon what users see. And Facebook has not yet committed itself to the kind of transparency that should give people confidence. Nor has it tied itself to the earlier and enabling values of the internet, whether open source or free culture.
Q: Jonathan Zittrain wrote his book two years after Code v2, and warned of non-generative systems that might destroy the original values of the Internet. Since then, Apple iOS (the “iWorld”) and Facebook have blossomed, and show no signs of slowing down. Do you believe we’re in a pendulum swing, or are you more pessimistic – that consumers are voting with their dollars, devices, and data for a more closed ecosystem?
Lessig: The trend JZ identified is profound and accelerating, and most of us who celebrate the “free and open” net are simply in denial. Facebook now lives oblivious to the values of open source software, or free culture. Apple has fully normalized the iNannyState. And unless Google’s Android demonstrates how open can coexist with secure, I fear the push away from our past will only continue. And then when our i9/11 event happens — meaning simply a significant and destructive cyber event, not necessarily tied to any particular terrorist group — the political will to return to control will be almost irresistible.
The tragedy in all this is that it doesn’t have to be this way. If we could push to a better identity layer in the net, we could get both better privacy and better security. But neither side in this extremist’s battle is willing to take the first step towards this obvious solution. And so in the end I fear the extremists I like least will win.
Q: You seem profoundly disappointed in our industry. What can folks who want to make a change do?
Lessig: Not at all. The industry is doing what industry does best — doing well, given the rules as they are. What industry is never good at (and is sometimes quite evil at) is identifying the best mix of rules. Government is supposed to do something with that. Our problem is that we have today such a fundamentally dysfunctional government that we don’t even recognize the idea that it might have a useful role here. So we get stuck in these policy-dead-ends, with enormous gains to both sides left on the table.
And that’s only to speak about the hard problems — which security in the Net is. Much worse (and more frustrating) are the easy problems which the government also can’t solve, not because the answer isn’t clear (again, these are the easy problems) but because the incumbents are so effective at blocking the answer that makes more sense so as to preserve the answer that makes them more dollars. Think about the “copyright wars” — practically every sane soul is now focused on a resolution of that war that is almost precisely what the disinterested souls were arguing a dozen years ago (editor’s note: abolishing DRM). Yet the short-termism of the industry wouldn’t allow those answers a dozen years ago, so we have had a completely useless war which has benefited no one (save the lawyers-as-soldiers in that war). We’ve lost a decade of competitive innovation in ways to spur and spread content that would ultimately benefit creators, because the dinosaurs owned the lobbyists.
I could have gone on for some time with Lessig, but I wanted to stop there, and invite your questions in the comments section. Lessig is pretty busy with his current work, which focuses on those lobbyists and the culture of money in Congress, but if he can find the time, he’ll respond to your questions in the comments below, or to me in email, and I’ll update the post.
Last weekend I had the distinct pleasure of taking two days off the grid and heading to a music festival called Coachella. Now, when I say “off the grid,” I mean time away from my normal work life (yes, I tend to work a bit on the weekends) and my normal family life (I usually reserve the balance of weekends for family; this was the first couple of days “alone” I’ve had in more than a year).
What I most certainly did not want to be was off the information grid – the data lifeline that all of us so presumptively leverage through our digital devices. But for the entire time I was at the festival, unfortunately, that’s exactly what happened – to me, and to most of the 85,000 or so other people trying to use their smartphones while at the show.
I’m not writing this post to blame AT&T (my carrier), or Verizon, or the producers of Coachella, though each has some part to play in the failure that occurred last weekend (and most likely will occur again this weekend, when Coachella produces the second of its two festival weekends). Rather, I’m deeply interested in how this story came about, why it matters, and what, if anything, can be done about it.
First, let’s set some assumptions. When tens of thousands of young people (the average age of a Coachella fan is in the low to mid 20s) gather in any one place in the United States, it’s a safe bet these things are true:
- Nearly everyone has a smartphone in their possession.
- Nearly everyone plans on using that smartphone to connect with friends at the show, as well as to record, share, and amplify the experience they are having while at the event.
- Nearly everyone knows that service at large events is awful, yet they hope their phone will work, at least some of the time. Perhaps a cash-rich sponsor will pay to bring in extra bandwidth, or maybe the promoter will spring for it out of the profit from ticket sales. Regardless, they expect some service delays, and plan on using low-bandwidth texting services more than they’d like to.
- Nearly everyone leaves a show like Coachella unhappy with their service provider, and unable to truly express themselves in ways they wished they could. Those ways might include, in no particular order: communicating with friends so as to meet up (“See you at the Outdoor stage, right side middle, for Grace Potter!”), tweeting or Facebooking a message to followers (“Neon Indian is killing it right now!”), checking in on Foursquare or any other location service so as to gain value in a social game (or in my case, to create digital breadcrumbs to remind me of who I was once I hit my dotage), uploading photos to any number of social photo services like Instagram, or using new, music-specific apps like TastemakerX on a whim (“I’d like to buy 100 shares of Yuck, those guys just blew me away!”). Oh, and it’d be nice to make a phone call home if you need to.
But for the most part, I and all my friends were unable to do any of these things at Coachella last weekend, at least not in real time. I felt as if I was drinking from a very thin, very clogged cocktail straw. Data service was simply nonexistent onsite. Texts came in, but more often than not they were time-shifted: I’d get ten texts delivered some 20 minutes after they were sent. And phone service was about as good as it is on Sand Hill Road – spotty, prone to drops, and often just not available. I did manage to get some data service while at the show, but that was because I found a press tent and logged onto the local wifi network there, or “tricked” my phone into thinking it was logging onto the network for the first time (by turning “airplane mode” off and on over and over again).
This all left me wondering – what if? What if there were an open pipe, both up and down, that could handle all that traffic? What if everyone who came to the show knew that pipe would be open, and work? What kind of value would have been created had that been the case? How much more data would have populated the world? How much richer would literally millions of people’s lives have been for seeing the joyful expressions of their friends as they engaged in a wonderful experience? How much more learning might countless startups have gathered, had they been able to truly capture the real-time intentions of their customers at such an event?
In short, how much have we lost as a society because we’ve failed to solve our own bandwidth problems?
I know, it’s just a rock festival, and jeez Battelle, shut off your phone and just dance, right? OK, I get that, and trust me, I did dance, a lot. But I also like to take a minute here or there to connect to the people I love, or who follow me, and share with them my passions and my excitement. We are becoming a digital society; to pretend otherwise is to ignore reality. And with very few exceptions, it was just not possible to intermingle the digital and the physical at Coachella. (I did hear reports that folks with Verizon were having better luck, but that’s probably because there were fewer Verizon iPhones than AT&T ones. And think about that language – “luck”?!)
Way back in 2008, when the iPhone was new and Instagram was a gleam in Kevin Systrom’s eye, I was involved in creating a service called CrowdFire. It was a way for fans at a festival (the first was Outside Lands) to share photos, tweets, and texts in a location- and event-specific way. I’ve always rued our decision not to spin CrowdFire out as a separate company, but regardless, my main memory of the service is how crippled it was by bandwidth failure. The connectivity there was actually better than at Coachella, but not by much. So in four years, we’ve managed to go backwards on this problem.
Of course, the amount of data we’re using has exploded, so credit to the carriers for doing their best to keep up. But can they get to the promised land? I wonder whether they can, at least under the current system of economic incentives we’ve adopted in the United States. Sure, there will always be traffic jams, but have we really thought through the best approach to how we execute “the Internet in the sky”?
Put another way, do we not hold the ability to share who we are, our very digital reflections, as a commons to which all of us should have equal access?
As I was driving to the festival last Saturday, I engaged in a conversation with one of my fellow passengers about this subject. What do we, as a society, hold in commons, and where do digital services fit in, if at all?
Well, we were driving to Coachella on city roads, held in commons through municipalities, for one. And we then got on Interstate 10 for a few miles, which is held in commons by federal agencies in conjunction with local governments. So it’s pretty clear we have, as a society, made the decision that the infrastructure for the transport of atoms – whether they be cars and the humans in them, or trucks and the commercial goods within them – is held in a public commons. Sure, we hit some traffic, but it wasn’t that bad, and there were ways to route around it.
What else do we hold in a commons? We ticked off the list of stuff upon which we depend – the transportation of water and power to our homes and our businesses, for example. Those certainly are (mostly) held in the public commons as well.
So it’s pretty clear that over the course of time, we’ve decided that when it comes to moving ourselves around, and making sure we have power and water, we’re OK with the government managing the infrastructure. But what of bits? What of “ourselves” as expressed digitally?
For the “hardwired” Internet – the place that gave us the Web, Google, Facebook, et al. – we built upon what was arguably a publicly held, common infrastructure. Thanks to government and social normative regulation, the hard-wired Internet was architected to be open to all, with a commercial imperative that ensured bandwidth issues were addressed in a reasonable fashion (Cisco, Comcast, etc.).
But with wireless, we’ve taken what is a public asset – radio spectrum – and licensed it to private companies under a thicket of regulatory oversight. And without laying blame – there’s probably plenty of it to go around – we’ve proceeded to make a mess of it. What we have here, it seems to me, is a failure. Is it a market failure – the kind that usually precedes government action? I’m not sure that’s the case. But it’s a failure, nevertheless. I’d like to get smarter on this issue, even though the prospect of it makes my head hurt.
As I wrote yesterday, I recently spent some time in Washington DC, and sat down with the Obama administration’s point person on that question, FCC Chair Julius Genachowski. As I expected, the issue of spectrum allocation is extraordinarily complicated, and it’s unlikely we’ll find a way out of the “Coachella Fail-ble” anytime soon. But there is hope. Technological disruption is one way out – watch the “white spaces,” for instance. And in a world where marketing claims of being “the fastest” spur customer switching, our carriers are madly scrambling to upgrade their networks. Yet in the US, wireless speeds remain far below those of countries in Europe and Asia.
I plan on finding out more as I report, but I may as well ask you, my smarter readers: Why is this the case? And does it have anything to do with what those other countries consider to be held in “digital commons”?
I’ll readily admit I’m simply a journeyman asking questions here, not a firebrand looking to lay blame. I understand this is a complicated topic, but it’s one for which I’d love your input and guidance.
A few weeks ago I ventured to our nation’s capital to steep in its culture a bit, and get some firsthand reporting done for the book. I met with a dozen or so folks, including several scholars, the heads of the FCC and FTC, and senior folks in the Departments of Commerce and State. I also spoke to a lobbyist from the Internet industry, as well as people from the various “think tanks” that populate the city. It was my first such trip, but it certainly won’t be my last.
Each of the conversations was specific to the person I was interviewing, but I did employ one device to tie them together – I asked each person the same set of questions toward the end of the conversation. And as I was on the plane home, I wrote myself a little reminder to post about the most interesting set of answers I got, which was to this simple question: What doesn’t the Valley understand about Washington?
It’s not a secret that the Valley, as a whole, has an ambivalent attitude toward DC. Until recently, the prevailing philosophy has trended libertarian – just stay out of the way, please, and let us do what we do best. Just about every startup CEO I’ve ever known – including myself – ignores Washington in the early years of a company’s lifecycle. Government is treated like plumbing – it’s dirty, it costs too much, it’s preferably someone else’s job, and it’s ignored until it stops working the way we want it to.
SOPA and PIPA were the classic example of the plumbing going out – and the Internet’s response to them was the topic of many of my conversations last month. Sure, “we” managed to stop some stupid legislation from passing, but the fact is, we almost missed it, and Lord knows what else we’re missing due to our refusal to truly engage with the instrument of our shared governance.
To be fair, in the past few years a number of major Internet companies have gotten very serious about joining the conversation in DC – Google is perhaps the most serious of them all (I’m not counting Microsoft, which got pretty serious back in the late 1990s, when it faced a landmark antitrust suit). Now, one can argue that, like Microsoft before it, Google’s seriousness is due to how interested Washington has become in Google, but regardless, it was interesting to hear from source after source how much they respected Google for at least fully staffing a presence in DC.
Other large Internet companies also have offices in Washington, but from what I hear, they are not that effective beyond very narrow areas of interest. Two of the largest e-commerce companies in the world have a sum total of eight people in DC, I was told by a well-placed source. Eight people can’t get much done when you’re dealing with regulatory frameworks around fraud, intellectual property, international trade, infrastructure and spectrum policy, and countless other areas of regulation that matter to the Internet.
In short, and perhaps predictably, nearly everyone I spoke to in Washington told me that the Valley’s number one issue was its lack of engagement with the government. But the answers were far more varied and interesting than that simple statement. Here they are, without attribution, as most of my conversations were on background pending clearance of actual quotes for the book:
- The Valley doesn’t understand the threat that comes from Washington. Put another way, our industry figures it out too late. The Valley doesn’t understand how much skin it already has in the game. “When things are bent in the right direction here, it can be a really good thing,” one highly placed government source told me. Washington is “dismissed, and when it’s dismissed you neither realize the upside nor mitigate the downside.”
- When the Valley does engage, it does so too lightly, and too predictably. Larger Valley companies get an office on K Street (where the lobbyists live) and hire an ex-Congressperson to lobby on that company’s core issues. But “that’s not where the magic is,” one source told me. The real magic is for companies to use their own platforms to engage with their customers in authentic conversations that get the attention of lawmakers. This happened – albeit very late – with SOPA/PIPA, and it got everyone’s attention in Washington. Imagine if this were an ongoing conversation, and not a one-off “Chicken Little” scenario. Counter to what many believe about Washington, where money and lobbying connections are presumed to always win the day, “fact-based arguments matter, a lot,” one senior policymaker told me. “Fact-based debates occur here, every day. If you take yourself out of that conversation, it’s like going into litigation without a lawyer.” Internet companies are uniquely positioned to change how lawmakers “hear” their constituents, but have done very little to actually leverage that fact.
- The Valley is too obsessed with the issue of privacy, one scholar told me. Instead, it should look to regulations around whether or not harm is being done to consumers. This was an interesting insight – and perhaps a way to think about protecting our data and our identities. There is already a thicket of regulations and law around keeping consumers safe from the harmful effects of business practices. Perhaps we are paying attention to the wrong thing, this scholar suggested.
- The Valley assumes that bad legislation will be rooted out and defeated in the same way that SOPA and PIPA were. But that’s a faulty assumption. “The Valley is techno-deterministic, and presumes ‘we can engineer around it,’” one scholar told me. “They don’t realize they’ve already been blinkered – a subset of possible new technological possibilities has already been removed that they are not even aware of.” One example of this is the recent “white spaces” spectrum allocation, which, while promising avenues of new market opportunity, was severely hampered by forces in Washington far more powerful than the Internet industry (more on this in another post).
- The framework of “us vs. them” is unproductive and produces poor results. The prevailing mentality in the Valley, one well-connected scholar told me, is the “heroic techie versus the wicked regulator…Rather than just having libertarian abstractions about regulations versus freedom,” this source continued, “it’s important to realize that in every single debate there are… regulations that strike better or worse balances between competing values. You just have to engage enough to defend the good ones.”
Put another way, as another senior government official told me, “The Valley doesn’t understand there are good and decent people here who really want to get things done.”
If I were to sum up the message from all my conversations in Washington, it’d be this: We’re here because as a society, we decided we needed people to help manage values we hold in common. Increasingly, the Internet is how we express those values. So stop ignoring us and hoping we’ll go away, and start engaging with us more. Decidedly better results will occur if you do.
I don’t pretend that one trip to DC makes me an expert on the subject (it surely does not), but I left DC energized and wanting to engage more than I have in the past. I hope you’ll feel the same.
By around this time of year, most of you are used to hearing about this year’s Web 2 Summit theme, its initial lineup of speakers, and other related goings-on, like our annual VIP dinners or perhaps some crazy map I’ve dreamt up. It’s become a familiar ritual in early spring, and many of you have been asking what’s up with this year’s event, in particular given the success of both last year’s theme (The Data Frame) and its amazing lineup of speakers and attendees.
Truth is, we’re not going to do the Web 2 Summit this year, and I’m writing this post to explain why. For the most part, it has to do with my book, the subject of which was outlined in my previous post. As the person who focuses on the core product – the programming on the stage – I just could not pull off both writing a book and creating a pitch-perfect onstage program. It takes months and months of hard work to execute a conference like Web 2 (and not just by me). My partners at O’Reilly and UBM TechWeb are full to the brim with other conferences, and after months of discussions about how we might route around this problem, we all agreed there really wasn’t a way to do it. It’s not fun being the guy who stops the party, but in this case, I have to step up and take responsibility.
That’s not to say we won’t be back – we’re keeping our options open there. For now, the Web 2 Summit is on hiatus. Each of the partners will continue to produce conferences (I am doing five for FM this year alone, and have ideas about others in the works). We’re just letting the Web 2 Summit lie fallow for a year.
I want to note that the partnership the three of us have enjoyed these past eight years has been nothing short of extraordinary. It’s quite unusual for a three-way venture to work, much less thrive as Web 2 Summit has. I am deeply grateful to Tim O’Reilly, Tony Uphoff, and their teams. I also want to note that this decision has nothing to do with any debate or disagreement between us – it’s really due to my desire to focus my time on FM and my new book.
Taking this year off will give all of us a chance to reflect on what we’ve done, consider our options going forward, and then take action. Expect to hear from us again in the next few months, and thanks for being part of the Web 2 Summit community.
Over the past few months I’ve been developing a framework for the book I’ve been working on, and while I’ve been pretty quiet about the work, it’s time to lay it out and get some responses from you, the folks I most trust with keeping me on track.
I’ll admit the idea of putting all this out here makes me nervous – I’ve only discussed this with a few dozen folks, and now I’m going public with what I’ll admit is an unbaked cake. Anyone can criticize it now (or, I suppose, steal it), but then again, I did the very same thing with the core idea in my last book (The Database of Intentions, back in 2003), and that worked out just fine.
So here we go. The original promise of my next book is pretty simple: I’m trying to paint a picture of the kind of digital world we’ll likely live in one generation from now, based on a survey of where we are presently as a digital society. In a way, it’s a continuation and expansion of The Search – the database of intentions has expanded from search to nearly every corner of our world – we now live our lives leveraged over digital platforms and data. So what might that look like thirty years hence?
As the announcement last year stated:
WHAT WE HATH WROUGHT will give us a forecast of the interconnected world in 2040, then work backwards to explain how the personal, economic, political, and technological strands of this human narrative have evolved from the pivotal moment in which we find ourselves now.
That’s a pretty tall order. At first, I spent a lot of time trying to boil any number of oceans – figuring out who to talk to in politics, energy, healthcare, technology, and, well, just about every major field. It quickly became evident that I’d end up with a book a thousand miles wide and one inch deep – unless I got very lucky and stumbled upon a perfect narrative actor that tied it all up into one neat story. Last time Google provided me that actor, but given I’m writing a book about how the world might look in 30 years, I’m not holding my breath waiting for another perfect protagonist to step out of a time machine somewhere.
But what if those protagonists are already here? Allow me to explain…
For the past few months I’ve been stewing on how the hell to actually write this book I’ve promised everyone I would deliver. The manuscript is not actually due till early next year, but still, books take a lot of time. And every day that goes by without a clear framework is a day partially lost.
A couple of months ago, worried that I’d never figure this thing out (but knowing there had to be a way), I invited one of my favorite authors (and new Marin resident) Steven Johnson over to my house for a brainstorming session. I outlined where I was in my thinking, and posed to him my essential problem: I was trying to do too much, and needed to focus my work on a narrative that paid off the promise, but didn’t read like a textbook, or worse yet, like a piece of futurism. As I said to Steven, “If I write a book that has a scene where an alarm clock wakes you up on a ‘typical morning in 2045,’ please shoot me.”
It’s not that I don’t appreciate futurism – it’s just that I truly believe, as William Gibson famously put it, that the future is already here, it’s just unevenly distributed. If I could just figure out a way to report on that future, to apply the tools of journalism to the story of the future we’re creating, I’d come up with a book worth reading. Of course, it was this approach we took in the early years of Wired magazine. Our job, as my colleague Kevin Kelly put it, was to send writers off in search of where the future was erupting, with instructions to report back.
To find that future, we asked our writers (and editors) to look hard at the present, and find people, places or things that augured what might come next. Hence, issue one of Wired had articles about the future of war, education, entertainment, and sex, based on reporting done in the here and now. While we didn’t call it such, over the years we developed an “If-Then” approach to many of the stories we’d assign. We’d think out loud: “If every school had access to the Internet, then what might change about education?” Or, “If the government had the ability to track everything we do both offline and on, then what might our society look like?” The conditional “If” question followed with a realistic “Then” answer provided a good way to wrap our heads around a sometimes challenging subject (and for you programmers out there, we’d also consider the “ands” as well as the “elses.”)
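For the programmers in the audience, here’s a playful, purely illustrative sketch of that framing in Python. Everything in it – the function, the scenario text – is invented for this post, not drawn from Wired’s archives:

    # A tongue-in-cheek sketch of the "If-Then" assignment framework.
    # All scenario text here is invented for illustration.

    def frame_story(if_clause, then_question, and_clauses=(), else_question=None):
        """Turn a conditional premise into a reporting assignment."""
        premise = "If " + if_clause
        for extra in and_clauses:            # the "ands": compounding conditions
            premise += ", and " + extra
        assignment = premise + ", then " + then_question
        if else_question:                    # the "else": what if it never happens?
            assignment += " Else, " + else_question
        return assignment

    print(frame_story(
        "every school had access to the Internet",
        "what might change about education?",
        and_clauses=["every student carried a connected device"],
        else_question="what becomes of the schools left offline?",
    ))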
Next, we’d ask a reporter to go find out all he or she could about that scenario – to go in search of artifacts from the future which told a story of where things might be going. (Wired, in fact, later created the popular “Found: Artifacts from the Future” series in the pages of the magazine.)
As an early reader and contributor to Wired, Steven knew all this, and reminded me of it as we spoke that day at my house. What if, he asked me, the book was framed as a series of stories about “future antiquities” or “future relics” (I think he first dubbed them “Magic Coins”)? Could we find examples of things currently extant, which, if widely adopted over the next generation, would presage significant changes in the world we’ll be inhabiting? Why, indeed, yes we could. Immediately I thought of five or six, and since that day, many more have come to mind.
Now, I think it bears unpacking what I mean by “widely adopted.” To me, it means clearing a pretty high hurdle – by 2045 or so, more than a billion people would be regularly interacting with whatever the future antiquity might be. When you get a very large chunk of the population engaged in a particular behavior, that behavior has the ability to effect real change in our political, social, and cultural norms. Those are the kinds of artifacts I’m looking to find.
As a thought experiment, imagine I had given myself this assignment back in the early 1980s, when I was just starting my love affair with this story as a technology reporter (yes, there’s a symmetry here – that’s 30 years ago – one generation past). Had I gone off in search of digital artifacts that presaged the future, ones that I believed might be adopted by a billion or more people, I certainly would have started with the personal computer, which at that point was counted in the high hundreds of thousands in the US. And I also would have picked the Internet, which was being used, at that point, by only thousands of people. I’d have described the power of these two artifacts in the present day, imagined how technological and social change might develop as more and more people used them, and spoken to the early adopters, entrepreneurs, and thinkers of the day about what would happen if a billion or more people were using them on a regular basis.
Pushing the hypothetical a bit further, I imagine I’d have found the Dan Bricklins, Vint Cerfs, Ray Ozzies, and Bill Gateses of the day, and noticed that they hung out in universities, for the most part. I’d have noticed that they used their computers and online networks to communicate with each other, to share information, to search and discover things, and to create communities of interest. It was in those universities that the future was erupting 30 years ago, and had I been paying close attention, it’s plausible I might have declared email, search, and social networks – or at least “communities on the Internet” – artifacts of our digital future. And of course, I’d have noticed the new gadget just released called the mobile phone, and probably declared that important as well. If more than a billion people had a mobile phone by 2012, I’d have wondered, then what might our world look like?
I’m pretty sure I’d have gotten a lot wrong, but the essential framework – a way to think about finding and declaring the erupting future – seems a worthy endeavor. So I’ve decided to focus my work on doing just that. It helps that it combines two of my favorite approaches to thinking – anthropology and journalism. In essence, I’m going on a dig for future antiquities.
So what might some of today’s artifacts from the future be? I don’t pretend to have an exhaustive list, but I do have a good start. And while the “If-Then” framework could work for all sorts of artifacts, I’m looking for those that “ladder up” to significant societal change. To that end, I’ve begun exploring innovations in energy, finance, health, transportation, communications, commerce – not surprisingly, all subjects to which we have devoted impressive stone buildings in our capital city. (Hence my trip to DC last week.)
Here’s one example that might bring the concept home: The Fitbit. At present, there are about half a million of them in the world, as far as I can tell (I’m meeting with the company soon). But Fitbit-like devices are on the rise – Nike launched its FuelBand earlier this year, for example. And while the first generation of these devices may only appeal to early adopters, with trends in miniaturization, processing power, and data platforms, it’s not hard to imagine a time when billions of us are quantifying our movement, caloric intake and output, sleep patterns, and more, then sharing that data across wide cohorts so as to draw upon the benefits of pattern-recognizing algorithms to help us make better choices about our behavior.
If that were to happen, what then might be the impact on our healthcare systems? Our agricultural practices and policies? Our insurance industries? Our life expectancies? I’m not entirely sure, but it’d sure be fun to try to answer such questions.
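To make the Fitbit scenario a bit more concrete, here’s a minimal, hypothetical sketch in Python of the cohort idea. The step counts are invented, and a plain average stands in for the pattern-recognizing algorithms I describe above:

    # A minimal sketch of the cohort-comparison idea. The data is invented,
    # and a simple average stands in for real pattern recognition.

    from statistics import mean

    # Hypothetical daily step counts, keyed by anonymized user id.
    cohort_steps = {
        "user_a": [4200, 5100, 3900, 6100],
        "user_b": [10200, 9800, 11000, 9500],
        "user_c": [7300, 6800, 7900, 7100],
    }

    def nudge(user_id):
        """Compare one user's average to the cohort's and suggest a tweak."""
        user_avg = mean(cohort_steps[user_id])
        cohort_avg = mean(mean(days) for days in cohort_steps.values())
        if user_avg < cohort_avg:
            return f"{user_id}: {user_avg:.0f} steps/day vs. cohort {cohort_avg:.0f} - maybe walk to lunch?"
        return f"{user_id}: ahead of the cohort - keep it up."

    print(nudge("user_a"))

The point isn’t the math, of course – it’s that once billions of these readings exist, even trivial comparisons like this one become the raw material for the questions above.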
I won’t tip my hand as to my entire current list of Future Antiquities, but I certainly would welcome your ideas and input as to what they might be. I’d also like your input on the actual title of the book. “What We Hath Wrought” is a cool title, but perhaps it’s a bit…too heady. Some might even call it overwrought. What if I called the book “If-Then”? I’m thinking about doing just that. Let me know in comments, and as always, thanks for reading.
Early in Lessig’s “Code v2,” which at some point this week I hope to review in full, Lessig compares the early campus networks of two famous educational institutions. Lessig knew them well – in the mid 1990s, he taught at both Harvard and the University of Chicago. Like most universities, Harvard and Chicago provided Internet access to their students. But they took quite different approaches to doing so. True to its philosophy of free and anonymous speech, Chicago simply offered an open connection to its students – plug in anywhere on campus, and start using the net.
Harvard’s approach was the polar opposite, as Lessig explains:
At Harvard, the rules are different…. You cannot plug your machine to the Net at Harvard unless the machine is registered – licensed, approved, verified. Only members of the university community can register their machines. Once registered, all interactions with the network are monitored and identified to a particular machine. To join the network, users have to “sign” a user agreement. The agreement acknowledges this pervasive practice of monitoring. Anonymous speech on this network is not permitted – it is against the rules. Access can be controlled based on who you are, and interactions can be traced based on what you did.
In the preceding paragraph, change “Harvard” and “university” to “Facebook” and – there you have it. Facebook was the product of a Harvard mindset – and probably could never have come from a place like Chicago or Berkeley (where I taught).
I called up Harvard’s IT department to see if the policy had changed since Lessig’s experiences in the 1990s, or Mark Zuckerberg’s six or so years ago. The answer was no – machines must still be registered, and all actions across Harvard’s network are trackable.
There are many benefits associated with a “real names” identity policy, including personalized services and a far greater likelihood of civil discourse. But the reverse is also true: without the right to speak anonymously (or pseudonymously), dissent and exploration are often muted. And of course, there’s that tracking/monitoring/data issue as well…
In Code, Lessig goes on to predict that while the original Internet began with a very Chicago-like approach to the world, architectures of regulation and control will ultimately end up winning if we don’t pay close attention.
He wrote the original Code in 1999, and updated it in 2006. The word Facebook is not in either version of the text. I just thought it a curious anecdote worth sharing.