
Jaron Lanier: Something Doesn’t Smell Right

By - May 08, 2012

Jaron Lanier’s You Are Not A Gadget has been on my reading list for nearly two years, and if nothing else comes of this damn book I’m trying to write, it’ll be satisfying to say that I’ve made my way through any number of important works that, for one reason or another, I failed to read until now.

I met Jaron in the Wired days (that’d be 20 years ago) but I don’t know him well – as with Sherry Turkle and many others, I encountered him through my role as an editor, then followed his career with interest as he veered from fame as a virtual reality pioneer into his current role as chief critic of all things “Web 2.0.” Given my role in that “movement” – I co-founded the Web 2 conferences with Tim O’Reilly in 2004 – it’d be safe to assume that I disagree with most of what Lanier has to say.

I don’t. Not entirely, anyway. In fact, I came away, as I did with Turkle’s work, feeling a strange kinship with Lanier. But more on that in a moment.

In essence, You Are Not A Gadget is a series of arguments, some concise, others a bit shapeless, centering on one theme: Individual human beings are special, and always will be, and digital technology is not a replacement for our humanity. In particular, Lanier is deeply skeptical of any kind of machine-based mechanism that might be seen as replacing or diminishing our specialness – a diminishment that, over the past decade, he has seen happening everywhere.

Lanier is most eloquent when he describes, late in the book, what he believes humans to be: the result of a very long, very complicated interaction with reality (sure, irony alert given Lanier’s VR fame, but it makes sense when you read the book):

I believe humans are the result of billions of years of implicit, evolutionary study in the school of hard knocks. The cybernetic structure of a person has been refined by a very large, very long, and very deep encounter with physical reality.

Lanier worries we’re losing that sense of reality. From crowdsourcing and Wikipedia to the Singularity movement, he argues that we’re starting to embrace a technological philosophy that can only lead to loss. Early in the book, he writes:

“…certain specific, popular internet designs of the moment…tend to pull us into life patterns that gradually degrade the ways in which each of us exists as an individual. These unfortunate designs are more oriented toward treating people as relays in a global brain….(this) leads to all sorts of maladies….”

Lanier goes on to specific examples, including the online tracking associated with advertising; the concentration of power in the hands of the “lords of the clouds” such as Microsoft, Facebook, Google, and even Goldman Sachs; the loss of analog musical notation; the rise of locked-in, fragile, and impossibly complicated software programs; and ultimately, the demise of the middle class. It’s a potentially powerful argument, and one I wish Lanier had made more completely. Instead, after reading his book, I feel forewarned, but not quite forearmed.

Lanier singles out many of our shared colleagues – the leaders of the Web 2.0 movement – as hopelessly misguided, labeling them “cybernetic totalists” who believe technology will solve all problems, including that of understanding humanity and consciousness. He worries about the fragmentation of our online identity, and warns that Web 2 services – from blogs to Facebook – lead us to leave little pieces of ourselves everywhere, feeding a larger collective, but resulting in no true value to the individual.

If you read my recent piece On Thneeds and the “Death of Display”, this might sound familiar, but I’m not sure I’d be willing to go as far as Lanier does in claiming all this behavior of ours will end up impoverishing our culture forever. I tend to be an optimist; Lanier, less so. He rues the fact that the web never implemented Ted Nelson’s vision of true hypertext – where the creator is remunerated via linked micro-transactions, for example. I think there were good reasons this system didn’t initially win, but there’s no reason to think it never will.

Lanier, an accomplished musician – though admittedly not a very popular one – is convinced that popular culture has been destroyed by the Internet. He writes:

Pop culture has entered into a nostalgic malaise. Online culture is dominated by trivial mashups of the culture that existed before the onset of mashups, and by fandom responding to the dwindling outposts of centralized mass media. It is a culture of reaction without action.

As an avid music fan, I’m not convinced. But Lanier goes further:

Spirituality is committing suicide. Consciousness is attempting to will itself out of existence…the deep meaning of personhood is being reduced by illusions of bits.

Wow! That’s some powerful stuff. But after reading the book, I wasn’t convinced about that, either, though Lanier raises many interesting questions along the way. One of them boils down to the concept of smell – the one sense that we can’t represent digitally. In a section titled “What Makes Something Real Is That It Is Impossible to Represent It To Completion,” Lanier writes:

It’s easy to forget that the very idea of a digital expression involves a trade-off with metaphysical overtones. A physical oil painting cannot convey an image created in another medium; it is impossible to make an oil painting look just like an ink drawing, for instance, or vice versa. But a digital image of sufficient resolution can capture any kind of perceivable image—or at least that’s how you’ll think of it if you believe in bits too much. Of course, it isn’t really so. A digital image of an oil painting is forever a representation, not a real thing. A real painting is a bottomless mystery, like any other real thing. An oil painting changes with time; cracks appear on its face. It has texture, odor, and a sense of presence and history.

This really resonates with me. In particular, the part about the odor. Turns out, odor is a pretty interesting subject. Our sense of smell is inherently physical – actual physical molecules of matter are required to enter our bodies and “mate” with receptors in our nervous system in order for us to experience an odor:

Olfaction, like language, is built up from entries in a catalog, not from infinitely morphable patterns. …the world’s smells can’t be broken down into just a few numbers on a gradient; there is no “smell pixel.”

Lanier suspects – and I find the theory compelling – that olfaction is deeply embedded in what it means to be human. Certainly such a link presents a compelling thought experiment as we transition to a profoundly digital world. I am very interested in what it means for our culture that we are truly “becoming digital,” that we are casting shadows of data in nearly everything we do, and that we are struggling to understand, instrument, and respond socially to this shift. I’m also fascinated by the organizations attempting to leverage that data, from the Internet Big Five to the startups and behind-the-scenes players (Palantir, IBM, governments, financial institutions, etc.) who are profiting from and exploiting this fact.

But I don’t believe we’re in early lockdown mode, destined to digital serfdom. I still very much believe in the human spirit, and am convinced that if any company, government, or leader pushes too hard, we will “sniff them out,” and they will be routed around. Lanier is less complacent: he is warning that if we fail to wake up, we’re in for a very tough few decades, if not worse.

Lanier and I share any number of convictions, regardless. His prescriptions for how to ensure we don’t become “gadgets” might well have been the inspiration for my post Put Your Taproot Into the Independent Web, for example (he implores us to create, deeply, and not be lured into expressing ourselves solely in the templates of social networking sites). And he reminds readers that he loves the Internet, and pines, a bit, for the way it used to be, before Web 2 and Facebook (and, one must assume, Apple) rebuilt it into forms he now decries.

I pine a bit myself, but remain (perhaps foolishly) optimistic that the best of what we’ve created together will endure, even as we journey onward to discover new ways of valuing what it means to be a person. And I feel lucky to know that I can reach out to Jaron – and I have – to continue this conversation, and report the results of our dialog on this site, and in my own book.

Next up: A review (and dialog with the author) of Larry Lessig’s Code And Other Laws of Cyberspace, Version 2.

Other works I’ve reviewed:

Wikileaks And the Age of Transparency by Micah Sifry (review)

Republic Lost by Larry Lessig (review)

Where Good Ideas Come From: A Natural History of Innovation by Steven Johnson (my review)

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (my review)

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)


On Thneeds and the “Death of Display”

By - May 07, 2012

It’s all over the news these days: Display advertising is dead. Or, put more accurately, the world of “boxes and rectangles” is dead. No one pays attention to banner ads, the reasoning goes, and the model never really worked in the first place (except for direct response). Brand marketers are demanding more for their money, and “standard display” is simply not delivering. After nearly 20 years*, it’s time to bury the banner, and move on to….

…well, to something else. Mostly, if you believe the valuations these days, to big platforms that have their own proprietary ad systems.

All over the industry, you’ll find celebration of new advertising-driven platforms that have eschewed the “boxes and rectangles” model. Twitter makes money off its native “promoted” suite of marketing tools. Tumblr just this week rolled out a similar offering. Pinterest recently hired Facebook’s original monetization wizard to create its own advertising model, separate from standard display. And of course there’s Facebook, which has gone so far as to call its new products “Featured Stories” (as opposed to “Ads” – which is what they are). Lastly, we mustn’t forget the granddaddy of native advertising platforms, Google, whose search ads redefined the playing field more than a decade ago (although AdSense, it must be said, is very much in the “standard display” business).

Together, these platforms comprise what I’ve come to call the “dependent web,” and they live in a symbiotic relationship with what I call the “independent web.”

But there’s a very big difference between the two when it comes to revenue and perceived value. Dependent web companies are, in short, killing it. Facebook is about to go public at a valuation of $100 billion. Twitter is valued at close to $10 billion. Pinterest is rumored to be worth $4 billion, and who knows what Tumblr’s worth now – it was nearly $1 billion, on close to no revenues, last fall. And of course Google has a market cap of around $200 billion.

Independent web publishers? With a few exceptions, they’re not killing it. They aren’t massively scaled platforms, after all; they’re often one- or two-person shops. If “display is dead,” then, well – they’re getting killed.

That’s because, again with few exceptions, independent web sites rely on the “standard display” model to scratch out at least part of a living. And that standard display model was not built to leverage the value of great content sites: engagement with an audience. Boxes and rectangles on the side or top of a website simply do not deliver against brand advertising goals. Like it or not, boxes and rectangles have for the most part become the province of direct response advertising, or brand advertising that pays, on average, as if it’s driven by direct response metrics. And unless you’re running a high-traffic site about asbestos lawsuits, that just doesn’t pay the bills for content sites.

Hence, the rolling declaration of display’s death – often by independent industry news sites plastered with banners, boxes and rectangles.

But I don’t think online display is dead. It just needs to be rethought, re-engineered, and reborn. Easy, right?

Well, no, because brand marketers want scale and proof of ROI – and given that any new idea in display has to break out of the box-and-rectangle model first, we’ve got a chicken-and-egg problem with both scale and proof of value.

But I’ve noticed some promising sprigs of green pushing through the earth of late. First of all, let’s not forget the growth and success of programmatic buying across those “boxes and rectangles.” Using data and real time bidding, demand- and supply-side platforms are growing very quickly, and while the average CPM is low, there is a lot of promise in these new services – so much so that FMP recently joined forces with one of the best, Lijit Networks. (I sketch a toy version of those bidding mechanics below.) Another promising development is the video interstitial. Once anathema to nearly every publisher on the planet, this full-page unit is now standard on the New York Times, Wired, Forbes, and countless other publishing sites. And while audiences may balk at seeing a full-page video ad after clicking from a search engine or other referring agent, the fact is, skipping the ad is about as hard as turning the page in a magazine. And in magazines, full-page ads work for marketers.

Another is what many are now calling “native advertising” (sure to be confused with Twitter, Tumblr, and others’ native advertising solutions…). Over at Digiday, which has been doing a bang-up job covering the display story, you’ll see debate about the growth of publisher-based “native advertising units,” which are units that run in the editorial well, and are often populated with advertiser-sponsored content. FMP has been doing this kind of advertising for nearly three years, and of course it pioneered the concept of content marketing back in 2006. The key to success here, we’ve found, is getting context right, at scale, and of course providing transparency (i.e., don’t try to fool an audience; they’re far smarter than that).

And lastly, there are the new “Rising Star” units from the IAB (where I am a board member). These are, put quite simply, reimagined, larger and more interactive boxes and rectangles. A good step, but not a panacea.
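For those who haven’t peeked under the hood of programmatic buying: the real-time bidding mentioned above is, at bottom, an auction run in the milliseconds between a page request and the ad render. Here’s a toy sketch in Python – the bidder names, CPMs, and floor price are all invented, and the simple second-price logic is only a rough stand-in for what real exchanges do:

```python
# A toy model of a real-time bidding (RTB) auction. Everything here is
# hypothetical; real exchanges add targeting data, latency budgets,
# deal types, and much more.

def run_auction(bids, floor_cpm=0.50):
    """Second-price auction: the highest bidder wins the impression,
    but pays the greater of the runner-up's bid or the floor."""
    eligible = {dsp: cpm for dsp, cpm in bids.items() if cpm >= floor_cpm}
    if not eligible:
        return None  # impression goes unsold (or to a house ad)
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner, _top_bid = ranked[0]
    clearing_cpm = ranked[1][1] if len(ranked) > 1 else floor_cpm
    return winner, clearing_cpm

# One impression, three hypothetical demand-side platforms bidding:
print(run_auction({"dsp_a": 0.80, "dsp_b": 1.10, "dsp_c": 0.45}))
# -> ('dsp_b', 0.8): dsp_b wins, but pays the runner-up's price
```

The reason publishers should care: in a second-price world, more bidders competing on better data is the main lever that lifts that painfully low average CPM.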

So as much as I am rooting for these new approaches to display, and expect that they will start to be combined in ways that really pay off for publishers, they have a limitation: they’re focused on what I’ll call a “site-specific” model: for a publisher to get rewarded for creating great content, that publisher must endeavor to bring visitors to their site so those visitors can see the ads. If we look toward the future, that’s not going to be enough. In an ideal Internet world, great content is rewarded for being shared, reposted, viewed elsewhere, and yes, even “liked.”

Up to now, that reward has had one single currency: Traffic back to the site.

Think of the largest referrers of traffic to the “rest of the web” – who are they? Yep – the very same companies with huge valuations – Google, Facebook, Twitter, and now Pinterest. What do they have in common? They’ve figured out a way to leverage the content created by the “rest of the web” and resell it to marketers at scale and for value (or, at least VCs believe they will soon). It’s always been an implicit deal, starting with search and moving into social: We cull, curate, and leverage your content, and in return, we’ll send traffic back to your site.

But given that we’re in for an extended transition from boxes and rectangles to ideas that, we hope, are better over time, well, that traffic deal just isn’t enough. It’s time to imagine bigger things.

Before we do, let’s step back for a moment and consider the independent web site. The…content creator. The web publisher. The talent, if you will. The person with a voice, an audience, a community. The hundreds of thousands (millions, really) of folks who, for good or bad, have plastered banners all over their site in the hope that perhaps the checks might get a bit bigger next month. (Of course this includes traditional media sites – publishers who made their nut in print, for example.) To me, these people comprise the equivalent of forests in the Internet’s ecosystem. They create the oxygen that feeds much of our world: Great content, great engagement, and great audiences.

Perhaps I’m becoming a cranky old man, a Lorax, if you must, but I’m going to jump up on a stump right now and say it: curation-based platform models that harvest the work of great content creators, creating “Thneeds” along the way, are failing to see the forest for the trees. Their quid pro quo deal to “send more traffic” ain’t enough.**

It’s time that content creators derived real value from the platforms they feed. A new model is needed, and if one doesn’t emerge (or is obstructed by the terms of service of large platforms), I worry about the future of the open web itself. If we, as an industry, don’t get just a wee bit better at taking care of content creators, we’re going to destroy our own ecosystem – and we’ll watch the Pinterests, Twitters, and yes, even the Googles and Facebooks of the world deteriorate for lack of new content to curate.

Put another way: Unless someone cares, a whole awful lot…it isn’t going to get better. It’s not.

Cough.

So I’m here to say that not only do I care, but so do the hundreds of people working at Federated Media Publishing and Lijit, and at a burgeoning ecosystem of companies, publishers, and marketers who are coming to realize it’s time to wake up from our “standard display” dream and create some new models. It’s not the big platforms’ job to create that model – but it will be their job to not stand in the way of it.

So what might a new approach look like? Well first and foremost, it doesn’t mean abandoning the site-specific approach. Instead, I suggest we augment that revenue stream with another, one that ties individual “atomic units” of content to similar “atomic units” of marketing messaging, so that together they can travel the Seussian highways of the social web with a business model intact.

Because if the traffic referral game has proven anything to us as publishers, it’s that great content doesn’t want to be bound to one site. The rise of Pinterest, among others, proves this fact. Ideally, content should be shared, mixed, mashed, and reposted – it wants to flow through the Internet like water. This was the point of RSS, after all – a technology that has actually been declared dead more often than the lowly display banner. (For those of you too young to recall RSS, it’s a technology that allows publishers to share their content as “feeds” to any third party.)

RSS has, in the main, “failed” as a commercial entity because publishers realized they couldn’t make money by allowing people to consume their content “offsite.” The tyranny of the site-specific model forced most commercial publishers to use RSS only for display of headlines and snippets of text – bait, if you will, to bring audiences back to the site.
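To make that concrete: an RSS feed is just structured XML, and the “headlines and snippets” decision is literally a one-flag choice in the publishing code. Here’s a minimal sketch using Python’s standard library – the titles and URLs are placeholders, and real feeds carry dates, GUIDs, and plenty else:

```python
# Building a minimal RSS 2.0 feed with ElementTree. The post data and
# URLs below are placeholders, not real feeds.
import xml.etree.ElementTree as ET

def make_feed(title, link, posts, full_text=True):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for post in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post["title"]
        ET.SubElement(item, "link").text = post["link"]
        # The site-specific model in action: truncate the body to a
        # teaser so readers must click through to where the ads live.
        body = post["body"]
        ET.SubElement(item, "description").text = (
            body if full_text else body[:100] + "..."
        )
    return ET.tostring(rss, encoding="unicode")

posts = [{"title": "On Thneeds", "link": "https://example.com/thneeds",
          "body": "Display advertising is dead. Or is it? " * 20}]
print(make_feed("Searchblog", "https://example.com", posts, full_text=False))
```

Flip full_text to True and the whole post travels with the feed – along with, today, none of the revenue.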

I’ve written about the implications of RSS and its death over and over again, because I love its goal of weaving content throughout the Internet. But each time I’ve considered RSS, I’ve found myself wanting for a solution to its ills. I love the idea of content flowing any and everywhere around the Internet, but I also understand and sympathize with the content creator’s dilemma: If my content is scattered to the Internet’s winds, consumed on far continents with no remuneration to me, I can’t make a living as a content creator. So it’s no wonder that the creator swallows hard, and limits her RSS feed in the hopes that traffic will rise on her site (a few intrepid souls, like me, keep their RSS feeds “full text.” But I don’t rely on this site, directly, to make a living.)

So let’s review. We now have three broken or limping models in independent Internet publishing: the traffic-hungry site-specific content model, the “standard display” model upon which it depends, and the RSS model, which failed due to lack of “monetization.”

But inside this seeming mess, if you stare long and hard enough, there are answers staring back at you. In short, it’s time to leverage the big platforms for more than just traffic. It’s time to do what the biggest holders of IP (the film and TV folks) have already done – go where the money is. But this time, the approach needs to be different.

I’ve already hinted at it above: Wrap content with appropriate underwriting, and set it free to roam the Internet. Of course, such a system will have to navigate business process rules (the platforms’ Terms of Service), and break free of scale and ROI constraints. I believe this can be done.

But given that I’m already at 2500 words, I think I’ll be writing about that approach in a future post. Stay tuned, and remember – “Unless….”

———

*As a co-founder of Wired, I had a small part to play in the banner’s birth – the first banner ran on HotWired in 1994. It had a 78% clickthrough rate. 

**Using ad networks, the average small publisher earns about seventy-five cents per thousand on her display ads. Let’s do the math. Let’s say Molly the Scone Blogger gets an average of 50,000 page views a month, pretty good for a food blogger. We know the average ad network pays about 65 to 85 cents per thousand page views at the moment (for reasons explained above, and despite the continuing efforts of the industrial ad technology complex to raise those prices with data and context). And let’s say Molly puts two ads per page on her site. That means she has one hundred “thousands” to sell, at around 75 cents a thousand. This means Molly gets a check for about $75 each month. Now, Molly loves her site, and loves her audience and community, and wants to make enough to do it more. Since her only leverage is increased traffic, she will labor at Pinterest, Twitter, Facebook, and Google+, promoting her content and doing her best to gain more audience.
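For the spreadsheet-averse, here’s Molly’s math, verbatim from the paragraph above, as a few lines of Python:

```python
# Molly's display-ad arithmetic, as described in the text above.
page_views = 50000      # monthly page views
ads_per_page = 2        # ad units per page
cpm = 0.75              # dollars per thousand impressions

impressions = page_views * ads_per_page        # 100,000 impressions
monthly_check = impressions / 1000.0 * cpm     # 100 "thousands" at $0.75
print("$%.2f per month" % monthly_check)       # prints $75.00 per month
```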

Perhaps she can double her traffic, and her monthly income might go from $75 to $150. That helps with the groceries, but it’s a terrible return on invested time. So what might Molly do? Well, if she can’t join a higher-paying network like FMP, she may well decide to abandon content creation altogether. And when she stops investing in her own site, guess what happens? She’s not creating new content for Pinterest, Twitter, Facebook and Google to harvest, and she’s not using those platforms to distribute her content.

For the past seven years, it’s been FMP’s business to get people like Molly paid a lot more than 75 cents per thousand audience members. We’re proud of the hundred-plus million dollars we’ve injected into the Independent web, but I have to be honest with you. There are way more Mollys in the world than FMP can help – at least under current industry conditions. And while we’ve innovated like crazy to create value beyond standard banners, it’s going to take more to ensure content creators get paid appropriately. It’s time to think outside the box.

—-

Special thanks to folks who have been helping me think through this issue, including Deanna Brown and the FMP team, Randall Rothenberg of the IAB, Brian Monahan, Chas Edwards, Jeff Dachis, Brian Morrissey, and many, many more. 

 

A Coachella “Fail-ble”: Do We Hold Spectrum in Common?

By - April 18, 2012

Neon Indian at Coachella last weekend.

 

Last weekend I had the distinct pleasure of taking two days off the grid and heading to a music festival called Coachella. Now, when I say “off the grid,” I mean time away from my normal work life (yes, I tend to work a bit on the weekends), and my normal family life (I usually reserve the balance of weekends for family; this was the first couple of days “alone” I’ve had in more than a year).

What I most certainly did not want to be was off the information grid – the data lifeline that all of us so presumptively leverage through our digital devices. But for the entire time I was at the festival, unfortunately, that’s exactly what happened – to me, and to most of the 85,000 or so other people trying to use their smartphones while at the show.

I’m not writing this post to blame AT&T (my carrier), or Verizon, or the producers of Coachella, though each has some part to play in the failure that occurred last weekend (and most likely will occur again this weekend, when Coachella produces its second of two festival weekends). Rather, I’m deeply interested in how this story came about, why it matters, and what, if anything, can be done about it.

First, let’s set some assumptions. When tens of thousands of young people (the average age of a Coachella fan is in the mid to low 20s) gather in any one place in the United States, it’s a safe bet these things are true:

- Nearly everyone has a smartphone in their possession.

- Nearly everyone plans on using that smartphone to connect with friends at the show, as well as to record, share, and amplify the experience they are having while at the event.

- Nearly everyone knows that service at large events is awful, yet they hope their phone will work, at least some of the time. Perhaps a cash-rich sponsor will pay to bring in extra bandwidth, or maybe the promoter will spring for it out of the profit from ticket sales. Regardless, they expect some service delays, and plan on using low-bandwidth texting services more than they’d like to.

- Nearly everyone leaves a show like Coachella unhappy with their service provider, and unable to truly express themselves in ways they wished they could. Those ways might include, in no particular order: Communicating with friends so as to meet up (“See you at the Outdoor stage, right side middle, for Grace Potter!”), tweeting or Facebooking a message to followers (“Neon Indian is killing it right now!”), checking in on Foursquare or any other location service so as to gain value in a social game (or in my case, to create digital breadcrumbs to remind me who I was once I’m in my dotage), uploading photos to any number of social photo services like Instagram, or using new, music-specific apps like TastemakerX on a whim (“I’d like to buy 100 shares of Yuck, those guys just blew me away!”). Oh, and it’d be nice to make a phone call home if you need to.

But for the most part, I and all my friends were unable to do any of these things at Coachella last weekend, at least not in real time. I felt as if I was drinking from a very thin, very clogged cocktail straw. Data service was simply nonexistent onsite. Texts came in, but more often than not they were time-shifted: I’d get ten texts delivered some 20 minutes after they were sent. And phone service was about as good as it is on Sand Hill Road – spotty, prone to drops, and often just not available. I did manage to get some data service while at the show, but that was because I found a press tent and logged onto the local wifi network there, or I “tricked” my phone into thinking it was logging onto the network for the first time (by turning “airplane mode” off and on over and over again).

This all left me wondering – what if? What if there was an open pipe, both up and down, that could handle all that traffic? What if everyone who came to the show knew that pipe would be open, and work? What kind of value would have been created had that been the case? How much more data would have populated the world, how much richer would literally millions of people’s lives have been for seeing the joyful expressions of their friends as they engaged in a wonderful experience? How much more learning might countless startups have gathered, had they been able to truly capture the real time intentions of their customers at such an event?

In short, how much have we lost as a society because we’ve failed to solve our own bandwidth problems?

I know, it’s just a rock festival, and jeez Battelle, shut off your phone and just dance, right? OK, I get that – and trust me, I did dance, a lot. But I also like to take a minute here or there to connect to the people I love, or who follow me, and share with them my passions and my excitement. We are becoming a digital society; to pretend otherwise is to ignore reality. And with very few exceptions, it was just not possible to intermingle the digital and the physical at Coachella. (I did hear reports that folks with Verizon were having better luck, but that’s probably because there were fewer Verizon iPhones than AT&T ones. And think about that language – “luck”?!)

Way back in 2008, when the iPhone was new and Instagram was a gleam in Kevin Systrom’s eye, I was involved in creating a service called CrowdFire. It was a way for fans at a festival (the first was Outside Lands) to share photos, tweets, and texts in a location and event specific way. I’ve always rued our decision to not spin CrowdFire out as a separate company, but regardless, my main memory of the service was how crippled it was due to bandwidth failure. It was actually better than Coachella, but not by much. So in four years, we’ve managed to go backwards when it comes to this problem.

Of course, the amount of data we’re using has exploded, so credit to the carriers for doing their best to keep up. But can they get to the promised land? I have my doubts, at least under the current system of economic incentives we’ve adopted in the United States. Sure, there will always be traffic jams, but have we really thought through the best approach to how we execute “the Internet in the sky?”

Put another way, do we not hold the ability to share who we are, our very digital reflections, as a commons to which all of us should have equal access?

As I was driving to the festival last Saturday, I engaged in a conversation with one of my fellow passengers about this subject. What do we, as a society, hold in commons, and where do digital services fit in, if at all?

Well, we were driving to Coachella on city roads, held in commons through municipalities, for one. And we then got on Interstate 10 for a few miles, which is held in commons by federal agencies in conjunction with local governments. So it’s pretty clear we have, as a society, made the decision that the infrastructure for the transport of atoms – whether they be cars and the humans in them, or trucks and the commercial goods within them – is held in a public commons. Sure, we hit some traffic, but it wasn’t that bad, and there were ways to route around it.

What else do we hold in a commons? We ticked off the list of stuff upon which we depend – the transportation of water and power to our homes and our businesses, for example. Those certainly are (mostly) held in the public commons as well.

So it’s pretty clear that over the course of time, we’ve decided that when it comes to moving ourselves around, and making sure we have power and water, we’re OK with the government managing the infrastructure. But what of bits? What of “ourselves” as expressed digitally?

For the “hardwired” Internet – the place that gave us the Web, Google, Facebook, et al – we built upon what was arguably a publicly common infrastructure. Thanks to government and social normative regulation, the hard-wired Internet was architected to be open to all, with a commercial imperative that ensured bandwidth issues were addressed in a reasonable fashion (Cisco, Comcast, etc.).

But with wireless, we’ve taken what is a public asset – radio spectrum – and we’ve licensed it to private companies under a thicket of regulatory oversight. And without laying blame – there’s probably plenty of it to go around – we’ve proceeded to make a mess of it. What we have here, it seems to me, is a failure. Is it a market failure – which usually precedes government action? I’m not sure that’s the case. But it’s a fail, nevertheless. I’d like to get smarter on this issue, even though the prospect of it makes my head hurt.

As I wrote yesterday, I recently spent some time in Washington DC, and sat down with the Obama administration’s point person on that question, FCC Chair Julius Genachowski. As I expected, the issue of spectrum allocation is extraordinarily complicated, and it’s unlikely we’ll find a way out of the “Coachella Fail-ble” anytime soon. But there is hope. Technological disruption is one way – watch the “white spaces,” for instance. And in a world where marketing claims to be “the fastest” spur customer switching, our carriers are madly scrambling to upgrade their networks. Yet in the US, wireless speeds are far below those of countries in Europe and Asia.

I plan on finding out more as I report, but I may as well ask you, my smarter readers: Why is this the case? And does it have anything to do with what those other countries consider to be held in “digital commons”?

I’ll readily admit I’m simply a journeyman asking questions here, not a firebrand looking to lay blame. I understand this is a complicated topic, but it’s one for which I’d love your input and guidance.

If-Then and Antiquities of the Future

By - April 03, 2012

Over the past few months I’ve been developing a framework for the book I’ve been working on, and while I’ve been pretty quiet about the work, it’s time to lay it out and get some responses from you, the folks I most trust with keeping me on track.

I’ll admit the idea of putting all this out here makes me nervous – I’ve only discussed this with a few dozen folks, and now I’m going public with what I’ll admit is an unbaked cake. Anyone can criticize it now (or, I suppose, steal it), but then again, I did the very same thing with the core idea in my last book (The Database of Intentions, back in 2003), and that worked out just fine.

So here we go. The original promise of my next book is pretty simple: I’m trying to paint a picture of the kind of digital world we’ll likely live in one generation from now, based on a survey of where we are presently as a digital society. In a way, it’s a continuation and expansion of The Search – the database of intentions has expanded from search to nearly every corner of our world – we now live our lives leveraged over digital platforms and data. So what might that look like thirty years hence?

As the announcement last year stated:

WHAT WE HATH WROUGHT will give us a forecast of the interconnected world in 2040, then work backwards to explain how the personal, economic, political, and technological strands of this human narrative have evolved from the pivotal moment in which we find ourselves now.

That’s a pretty tall order. At first, I spent a lot of time trying to boil any number of oceans – figuring out who to talk to in politics, energy, healthcare, technology, and, well, just about every major field. It quickly became quite evident that I’d end up with a book a thousand miles wide and one inch deep – unless I got very lucky and stumbled upon a perfect narrative actor that tied it all up into one neat story. Last time Google provided me that actor, but given I’m writing a book about how the world might look in 30 years, I’m not holding my breath waiting for another perfect protagonist to step out of a time machine somewhere.

But what if those protagonists are already here? Allow me to explain…

For the past few months I’ve been stewing on how the hell to actually write this book I’ve promised everyone I would deliver. The manuscript is not actually due till early next year, but still, books take a lot of time. And every day that goes by without a clear framework is a day partially lost.

A couple of months ago, worried that I’d never figure this thing out (but knowing there had to be a way), I invited one of my favorite authors (and new Marin resident) Steven Johnson over to my house for a brainstorming session. I outlined where I was in my thinking, and posed to him my essential problem: I was trying to do too much, and needed to focus my work on a narrative that paid off the promise, but didn’t read like a textbook, or worse yet, like a piece of futurism. As I said to Steven, “If I write a book that has a scene where an alarm clock wakes you up on a ‘typical morning in 2045,’ please shoot me.”

It’s not that I don’t appreciate futurism – it’s just that I truly believe, as William Gibson famously put it, that the future is already here, it’s just unevenly distributed. If I could just figure out a way to report on that future, to apply the tools of journalism to the story of the future we’re creating, I’d come up with a book worth reading. Of course, it was this approach we took in the early years of Wired magazine. Our job, as my colleague Kevin Kelly put it, was to send writers off in search of where the future was erupting, with instructions to report back.

To find that future, we asked our writers (and editors) to look hard at the present, and find people, places or things that augured what might come next. Hence, issue one of Wired had articles about the future of war, education, entertainment, and sex, based on reporting done in the here and now. While we didn’t call it such, over the years we developed an “If-Then” approach to many of the stories we’d assign. We’d think out loud: “If every school had access to the Internet, then what might change about education?” Or, “If the government had the ability to track everything we do both offline and on, then what might our society look like?” The conditional “If” question followed by a realistic “Then” answer provided a good way to wrap our heads around a sometimes challenging subject (and for you programmers out there, we’d also consider the “ands” as well as the “elses.”)
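Since I’ve already invoked you programmers: the framework translates almost literally into code. A whimsical, purely illustrative sketch – the scenario and thresholds are invented:

```python
# The "If-Then" story framework, taken literally. The scenario and
# numbers are invented for illustration only.
connected_schools = 0.95  # fraction of schools online in our scenario

if connected_schools > 0.9:
    then = "report on how teaching, homework, and textbooks change"
elif connected_schools > 0.5:
    then = "report on the gap between wired and unwired districts"
else:
    then = "report on why adoption stalled"

print("If %.0f%% of schools are online, then: %s"
      % (connected_schools * 100, then))
```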

Next, we’d ask a reporter to go find out all he or she could about that scenario – to go in search of artifacts from the future which told a story of where things might be going. (Wired, in fact, later created the popular “Found: Artifacts from the Future” series in the pages of the magazine.)

As an early reader and contributor to Wired, Steven knew all this, and reminded me of it as we spoke that day at my house. What if, he asked me, the book was framed as a series of stories about “future antiquities” or “future relics” (I think he first dubbed them “Magic Coins”)? Could we find examples of things currently extant, which, if widely adopted over the next generation, would presage significant changes in the world we’ll be inhabiting? Why, indeed, yes we could. Immediately I thought of five or six, and since that day, many more have come to mind.

Now, I think it bears unpacking what I mean by “widely adopted.” To me, it means clearing a pretty high hurdle – by 2045 or so, it’d mean that more than a billion people would be regularly interacting with whatever the future antiquity might be. When you get a very large chunk of the population engaged in a particular behavior, that behavior has the ability to effect real change in our political, social, and cultural norms. Those are the kind of artifacts I’m looking to find.

As a thought experiment, imagine I had given myself this assignment back in the early 1980s, when I was just starting my love affair with this story as a technology reporter (yes, there’s a symmetry here – that’s 30 years ago – one generation past). Had I gone off in search of digital artifacts that presaged the future, ones that I believed might be adopted by a billion or more people, I certainly would have started with the personal computer, which at that point was counted in the high hundreds of thousands in the US. And I also would have picked the Internet, which was being used, at that point, by only thousands of people. I’d have described the power of these two artifacts in the present day, imagined how technological and social change might develop as more and more people used them, and spoken to the early adopters, entrepreneurs, and thinkers of the day about what would happen if a billion or more people were using them on a regular basis.

An antiquity from the 1980s, with its future descendant (image from machinelake.com)

Pushing the hypothetical a bit further, I imagine I’d find the Dan Bricklins, Vint Cerfs, Ray Ozzies, and Bill Gates of the day, and notice that they hung out in universities, for the most part. I’d have noticed that they used their computers and online networks to communicate with each other, to share information, to search and discover things, and to create communities of interest. It was in those universities where the future was erupting 30 years ago, and had I been paying close attention, it’s plausible I might have declared email, search, and social networks – or at least “communities on the Internet” – as artifacts of our digital future. And of course, I’d have noticed the new gadget just released called the mobile phone, and probably declared that important as well. If more than a billion people had a mobile phone by 2012, I’d have wondered, then what might our world look like?

I’m pretty sure I’d have gotten a lot wrong, but the essential framework – a way to think about finding and declaring the erupting future – seems a worthy endeavor. So I’ve decided to focus my work on doing just that. It helps that it combines two of my favorite approaches to thinking – anthropology and journalism. In essence, I’m going on a dig for future antiquities.

So what might some of today’s artifacts from the future be? I don’t pretend to have an exhaustive list, but I do have a good start. And while the “If-Then” framework could work for all sorts of artifacts, I’m looking for those that “ladder up” to significant societal change. To that end, I’ve begun exploring innovations in energy, finance, health, transportation, communications, commerce – not surprisingly, all subjects to which we have devoted impressive stone buildings in our capital city. (Hence my trip to DC last week.)

Here’s one example that might bring the concept home: The Fitbit. At present, there are about half a million of them in the world, as far as I can tell (I’m meeting with the company soon). But Fitbit-like devices are on the rise – Nike launched its FuelBand earlier this year, for example. And while the first generation of these devices may only appeal to early adopters, with trends in miniaturization, processing power, and data platforms, it’s not hard to imagine a time when billions of us are quantifying our movement, caloric intake and output, sleep patterns, and more, then sharing that data across wide cohorts so as to draw upon the benefits of pattern-recognizing algorithms to help us make better choices about our behavior.
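To make “pattern-recognizing algorithms” slightly less abstract, here’s a toy sketch of the kind of thing such a service might do – the step counts are invented, and a crude rolling-baseline check stands in for the real statistical machinery:

```python
# Toy pattern recognition over a week of invented daily step counts:
# flag days that fall well below the wearer's own recent baseline.
daily_steps = [9800, 10200, 9500, 10100, 3200, 9900, 2900]

def flag_low_days(steps, window=3, threshold=0.5):
    flags = []
    for i, count in enumerate(steps):
        if i == 0:
            continue  # no baseline yet
        recent = steps[max(0, i - window):i]
        baseline = sum(recent) / len(recent)
        if count < baseline * threshold:
            flags.append((i, count, round(baseline)))
    return flags

# Each flagged (day, steps, baseline) might prompt a nudge to move more.
print(flag_low_days(daily_steps))  # -> [(4, 3200, 9933), (6, 2900, 7733)]
```

Multiply that trivial check by a billion users and far better algorithms, and the questions below stop being hypothetical.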

If that were to happen, what then might be the impact on our healthcare systems? Our agricultural practices and policies? Our insurance industries? Our life expectancies? I’m not entirely sure, but it’d sure be fun to try to answer such questions.

I won’t tip my hand as to my entire current list of Future Antiquities, but I certainly would welcome your ideas and input as to what they might be. I’d also like your input on the actual title of the book. “What We Hath Wrought” is a cool title, but perhaps it’s a bit….too heady. Some might even call it overwrought. What if I called the book “If-Then”? I’m thinking about doing just that. Let me know in comments, and as always, thanks for reading.

China To Bloggers: Stop Talking Now. K Thanks Bye.

By - March 31, 2012

Yesterday I finished reading Larry Lessig’s updated 1999 classic, Code v2. I’m five years late to the game, as the book was updated in 2006 by Lessig and a group of fans and readers (I tried to read the original in 1999, but I found myself unable to finish it. Something to do with my hair being on fire for four years running…). In any event, no sooner had I read the final page yesterday than this story broke:

Sina, Tencent Shut Down Commenting on Microblogs (WSJ)

In an odd coincidence, late last night I happened to share a glass of wine with a correspondent for the Economist who is soon to be reporting from Shanghai. Of course this story came up, and an interesting discussion ensued about the balance one must strike to cover business in a country like China. Essentially, it’s the same balance any Internet company must strike as it attempts to do business there: Try to enable conversation, while at the same time regulating that conversation to comply with the wishes of a mercurial regime.

Those of us who “grew up” in Internet version 1.0 have a core belief in the free and open exchange of ideas, one unencumbered by regulation. We also tend to think that the Internet will find a way to “route around” bad law – and that what happens in places like China or Iran will never happen here.

But as Lessig points out quite forcefully in Code v2, the Internet is, in fact, one of the most “regulable” technologies ever invented, and it’s folly to believe that only regimes like China will be drawn toward leveraging the control it allows. In addition, it need not be governments that create these regulations, it could well be the platforms and services we’ve come to depend on instead. And while those services and platforms might never be as aggressive as China or Iran, they are already laying down the foundation for a slow erosion of values many of us take for granted. If we don’t pay attention, we may find ourselves waking up one morning and asking…Well, How Did I Get Here?

More on all of this soon, as I’m in the midst of an interview (via email) with Lessig on these subjects. I’ll post the dialog here once we’re done.

 

CM Summit White Paper from 2007

By - March 15, 2012

I am in the midst of writing a post on the history of FM (update – here it is), and I thought it’d be fun to post the PDF linked to below. It’s a summary of musings from Searchblog circa 2006-7 on the topic of conversational media, which is much in the news again, thanks to Facebook. We created the document as an addendum to our first ever CM Summit conference, as a way of describing why we were launching the conference. (BTW, the Summit returns to San Francisco next week as Signal SF, check it out.)

It’s interesting to see the topics in the white paper come to life, including chestnuts like “Conversation Over Dictation,” “Platform Over Distribution,” “Engagement Over Consumption,” and “Iteration and Speed Over Perfection and Deliberation.”

Enjoy.

CMManifesto2007.01

Who Controls Our Data? A Puzzle.

By - March 11, 2012

Facebook claims the data we create inside Facebook is ours – that we own it. In fact, I confirmed this last week in an interview with Facebook VP David Fischer on stage at FM’s Signal P&G conference in Cincinnati. In the conversation, I asked Fischer if we owned our own data. He said yes.

Perhaps unfairly (I’m pretty sure Fischer is not in charge of data policy), I followed up my question with another: If we own our own data, can we therefore take it out of Facebook and give it to, say, Google, so Google can use it to personalize our search results?

Fischer pondered that question, realized its implications, and backtracked. He wasn’t sure about that – and it turns out that question is complicated to answer, as recent stories about European data requests have revealed.*

I wasn’t planning on asking Fischer that question, but I think it came up because I’ve been pondering the implications of “you as the platform” quite a bit lately. If it’s *our* data in Facebook, why can’t we take it and use it on our terms to inform other services?

Because, it turns out, regardless of any company’s claim around who owns the data, the truth is, even if we could take our data and give it to another company, it’s not clear the receiving company could do anything with it. Things just aren’t set up that way. But what if they were?

The way things stand right now, our data is an asset held by companies, who then cut deals with each other to leverage that data (and, in some cases, to bundle it up as a service to us as consumers). Microsoft has a deal to use our Facebook data on Bing, for example. And of course, the inability of Facebook and Google to cut a data sharing deal back in 2009 is one of the major reasons Google built Google+. The two sides simply could not come to terms, and that failure has driven an escalating battle between major Internet companies to lock all of us into their data silos. With the cloud, it’s only getting worse (more on that in another post).

And it’s not fair to just pick on Facebook. The question should be asked of all services, I think. At least, of all services which claim that the data we give that service is, in fact, ours (many services share ownership, which is fine with me, as long as I don’t lose my rights.)

I have a ton of pictures up on Instagram now, for example (you own your own content there, according to the service’s terms). Why can’t I “share” that data with Google or Bing, so those pictures show up in my searches? Or with Picasa, where I store most of my personal photographs?

I have a ton of data inside an app called “AllSport GPS,” which tracks my runs, rides, and hikes. Why can’t I share that with Google, or Facebook, or some yet-to-be-developed app that monitors my health and well being?

Put another way, why do I have to wait for all these companies to cut data sharing deals through their corporate development offices? Sure, I could cut and paste all my data from one to the other, but really, who wants to do that?!

In the future, I hope we’ll be our own corp dev offices. An office of one, negotiating data deals on the fly, and on our own terms. It’ll take a new architecture and a new approach to sharing, but I think it’d open up all sorts of new vectors of value creation on the web.
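What might that look like? Here’s a purely hypothetical sketch, in the spirit of the Locker Project – none of these services, scopes, or APIs exist; it’s just the shape of the idea, a personal locker that grants scoped access on the owner’s terms:

```python
# A purely hypothetical personal data locker: the user, not the
# platform, decides which services may read which kinds of data.
class DataLocker:
    def __init__(self):
        self.data = {}    # kind -> list of items, e.g. "workouts"
        self.grants = {}  # service -> set of permitted data kinds

    def deposit(self, kind, items):
        self.data.setdefault(kind, []).extend(items)

    def grant(self, service, kinds):
        # The "corp dev office of one": a deal struck by the user.
        self.grants.setdefault(service, set()).update(kinds)

    def fetch(self, service, kind):
        if kind not in self.grants.get(service, set()):
            raise PermissionError("%s has no grant for %s" % (service, kind))
        return list(self.data.get(kind, []))

locker = DataLocker()
locker.deposit("workouts", [{"activity": "run", "miles": 6.2}])
locker.grant("health-app.example", {"workouts"})
print(locker.fetch("health-app.example", "workouts"))  # allowed
# locker.fetch("search.example", "workouts")  # would raise PermissionError
```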

This is why I’m bullish on Singly and the Locker Project. They’re trying to solve a very big problem, and worse, one that most folks don’t even realize they have. Not an easy task, but an important one.

—–

*Thanks to European law, Facebook is making copies of users’ data available to them – but it makes exemptions that protect its intellectual property and trade secrets, and it won’t give data that “cannot be extracted from our platform in the absence of disproportionate effort.” What defines Facebook’s “trade secrets” and “intellectual property”? Well, there’s the catch. Just as with Google’s search algorithms, disclosure of the data Facebook is holding back would, in essence, destroy Facebook’s competitive edge, or so the company argues. Catch-22. I predict we’re going to see all this tested by services like Singly in the near future.

 

A Sad State of Internet Affairs: The Journal on Google, Apple, and “Privacy”

By - February 16, 2012

The news alert from the Wall St. Journal hit my phone about an hour ago, pulling me away from tasting “Texas Bourbon” in San Antonio to sit down and grok this headline: Google’s iPhone Tracking.

Now, the headline certainly is attention-grabbing, but the news alert email had a more sinister headline: “Google Circumvented Web-Privacy Safeguards.”

Wow! What’s going on here?

Turns out, no one looks good in this story, but certainly the Journal feels like they’ve got Google in a “gotcha” moment. As usual, I think there’s a lot more to the story, and while I’m Thinking Out Loud right now, and pretty sure there’s a lot more than I can currently grok, there’s something I just gotta say.

First, the details. Here’s the lead in the Journal’s story, which requires a login/registration:

“Google Inc. and other advertising companies have been bypassing the privacy settings of millions of people using Apple Inc.’s Web browser on their iPhones and computers—tracking the Web-browsing habits of people who intended for that kind of monitoring to be blocked.”

Now, from what I can tell, the first part of that story is true – Google and many others have figured out ways to get around Apple’s default settings on Safari in iOS – the only browser that comes with iOS, a browser that, in my experience, has never asked me what kind of privacy settings I wanted, nor did it ask if I wanted to share my data with anyone else (I do, it turns out, for any number of perfectly good reasons). Apple assumes that I agree with Apple’s point of view on “privacy,” which, I must say, is ridiculous on its face, because the idea of a large corporation (Apple is the largest, in fact) determining in advance what I might want to do with my data is pretty much the opposite of “privacy.”

Then again, Apple decided I hated Flash, too, so I shouldn’t be that surprised, right?

But to the point, Google circumvented Safari’s default settings by using some trickery described in this WSJ blog post, which reports the main reason Google did what it did was so that it could know if a user was a Google+ member, and if so (or even if not so), it could show that user Google+ enhanced ads via AdSense.

In short, Apple’s mobile version of Safari broke with common web practice, and as a result, it broke Google’s normal approach to engaging with consumers. Was Google’s “normal approach” wrong? Well, I suppose that’s a debate worth having – it’s currently standard practice and the backbone of the entire web advertising ecosystem – but the Journal doesn’t bother to go into those details. One can debate whether setting cookies should happen by default – but the fact is, that’s how it’s done on the open web.

The Journal article does later acknowledge, though not in a way that a reasonable reader would interpret as meaningful, that the mobile version of Safari has “default” (i.e., not user-activated) settings that prevent Google and others (like ad giant WPP) from tracking user behavior the way they do on the “normal” Web. That’s a far cry from the Journal’s lead paragraph, which again, states Google bypassed “the privacy settings of millions of people.” So when is a privacy setting really a privacy setting, I wonder? When Apple makes it so?

Since this story has broken, Google has discontinued its practice, making it look even worse, of course.

But let’s step back a second here and ask: why do you think Apple has made it impossible for advertising-driven companies like Google to execute what are industry standard practices on the open web (dropping cookies and tracking behavior so as to provide relevant services and advertising)? Do you think it’s because Apple cares deeply about your privacy?

Really?

Or perhaps it’s because Apple considers anyone using iOS, even if they’re browsing the web, as “Apple’s customer,” and wants to throttle potential competitors, ensuring that it’s impossible to access “Apple’s” audiences using iOS in any sophisticated fashion? Might it be possible that Apple is using data as its weapon, dressed up in the PR-friendly clothing of “privacy protection” for users?

That’s at least a credible idea, I’d argue.

I don’t know, but when I bought an iPhone, I didn’t think I was signing up as an active recruit in Apple’s war on the open web. I just thought I was getting “the Internet in my pocket” – which was Apple’s initial marketing pitch for the device. What I didn’t realize was that it was “the Internet, as Apple wishes to understand it, in my pocket.”

It’d be nice if the Journal wasn’t so caught up in its own “privacy scoop” that it paused to wonder if perhaps Apple has an agenda here as well. I’m not arguing Google doesn’t have an agenda – it clearly does. I’m as saddened as the next guy about how Google has broken search in its relentless pursuit of beating Facebook, among others.

In this case, what Google and others have done sure sounds wrong – if you’re going to resort to tricking a browser into offering up information designated by default as private, you need to somehow message the user and explain what’s going on. Then again, on the open web, you don’t have to – most browsers let you set cookies by default. In iOS within Safari, perhaps such messaging is technically impossible, I don’t know. But these shenanigans are predictable, given the dynamic of the current food fight between Google, Apple, Facebook, and others. It’s one more example of the sad state of the Internet given the war between the Internet Big Five. And it’s only going to get worse, before, I hope, it gets better again.

Now, here’s my caveat: I haven’t been able to do any reporting on this, given it’s 11 pm in Texas and I’ve got meetings in the morning. But I’m sure curious as to the real story here. I don’t think the sensational headlines from the Journal get to the core of it. I’ll depend on you, fair readers, to enlighten us all on what you think is really going on.

China Hacking: Here We Go

By - February 13, 2012

Waaaay back in January of this year, in my annual predictions, I offered a conjecture that seemed pretty orthogonal to my usual focus:

“China will be caught spying on US corporations, especially tech and commodity companies. Somewhat oddly, no one will (seem to) care.”

Well, I just got this WSJ news alert, which reports:

Using seven passwords stolen from top Nortel executives, including the chief executive, the hackers—who appeared to be working in China—penetrated Nortel’s computers at least as far back as 2000 and over the years downloaded technical papers, research-and-development reports, business plans, employee emails and other documents.

The hackers also hid spying software so deeply within some employees’ computers that it took investigators years to realize the pervasiveness of the problem.

Now, before I trumpet my prognosticative abilities too loudly, let’s see if … anybody cares. At all. And if you’re wondering why I even bothered to make such a prediction, well, it’s because I think it’s going to prove important….eventually.

In Which I Officially Declare RSS Is Truly Alive And Well.

By - February 02, 2012

I promise, for at least 18 months, to not bring this topic up again. But I do feel the need to report to all you RSS lovin’ freaks out there that the combined interactions on my two posts – 680 and still counting – have exceeded the reach of my RSS feed (which clocked in at a miserable 664 the day I posted the first missive).

And as I said in my original post:

If I get more comments and tweets on this post than I have “reach” by Google Feedburner status, well, that’s enough for me to pronounce RSS Alive and Well (by my own metric of nodding along, of course). If it’s less than 664, I’m sorry, RSS is Well And Truly Dead. And it’s all your fault.

For those of you who don’t know what on earth I’m talking about, but care enough to click, here are the two posts:

Once Again, RSS Is Dead. But ONLY YOU Can Save It!

RSS Update: Not Dead, But On The Watch List

OK, now move along. Nothing to see here. No web standards have died. Happy Happy! Joy Joy!