
Do Not Track Is An Opportunity, Not a Threat

By - June 10, 2012

This past week’s industry tempest centered around Microsoft’s decision to implement “Do Not Track” (known as “DNT”) as a default on Internet Explorer 10, a browser update timed to roll out with the company’s long-anticipated Windows 8 release.

Microsoft’s decision caught much of the marketing and media industry by surprise – after all, Microsoft itself is a major player in the advertising business, and in that role has been a strong proponent of the current self-regulatory regime, which includes, at least until Microsoft tossed its grenade into the marketplace, a commitment to implementation of DNT as an opt-in technology, rather than as a default.*

For most readers I don’t need to explain why this matters, but in case you’re new to the debate, when enabled, DNT sets a “flag” telling websites that you don’t want data about your visit to be used for purposes of creating a profile of your browsing history (or for any other reason). Whether we like it or not, such profiles have driven a very large business in display advertising over the past 15 years. Were a majority of consumers to implement DNT, the infrastructure that currently drives wide swathes of the web’s monetization ecosystem would crumble, taking a lot of quality content along with it.
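
For the technically curious: when enabled, DNT is nothing more than an HTTP request header. Below is a minimal sketch (my own illustration, not any particular site’s implementation) of how a Node-style server might check for the flag and decide what to do with it.

```typescript
import * as http from "http";

// Browsers with Do Not Track enabled send the request header "DNT: 1".
// Whether a site honors the flag is, today, entirely up to the site.
http.createServer((req, res) => {
  const doNotTrack = req.headers["dnt"] === "1";
  if (doNotTrack) {
    // Honor the flag: serve the page without setting tracking cookies
    // or loading profile-building ad scripts.
    res.end("Page served without behavioral tracking.");
  } else {
    res.end("Page served with the usual ad and analytics tags.");
  }
}).listen(8080);
```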

It’s estimated that, once released, IE 10 could quickly grab as much as 25-30% of browser market share. The idea that the online advertising industry could lose almost a third of its value due to the actions of one rogue player is certainly cause for alarm. Last week’s press was full of conspiracy theories about why Microsoft was making such a move. The company claims it just wants to protect users’ privacy, which strikes me as disingenuous – it’s far more likely that Microsoft is willing to spike its relatively small advertising business in exchange for striking a lethal blow to Google’s core business model, both in advertising and in browser share.

I’m quite certain the Windows 8 team is preparing to market IE 10 – and by extension, Windows 8 – as the safe, privacy-enhancing choice, capitalizing on Google’s many government woes and consumers’ overall unease with the search giant’s power. I’m also quite certain that Microsoft, like many others, suffers from a case of extreme Apple envy, and wishes it had a pristine, closed-loop environment like iOS that it could completely control. In order to create such an environment, Microsoft needs to gain consumers’ trust. Seen from that point of view, implementing DNT as a default just makes sense.

But the more I think it through, the less perturbed I am by the whole affair. In fact, I’m rather excited by it.

First off, it’s not clear that IE 10’s approach to DNT will matter. When it comes to whether or not a site has to comply with browser flags such as DNT, websites and third parties look to the standards-setting body known as the W3C. That organization’s proposed draft specification on DNT is quite clear: It says no company may enforce a default DNT setting for a user, one way or the other. In other words, this whole thing could be a tempest in a teapot. Wired recently argued that Microsoft will be forced to back down and change its policy.

But I’m kind of hoping Microsoft will keep DNT in place. I know, that’s a pretty crazy thing for a guy who started an advertising-run business to say, but in this supposed threat I see a major opportunity.

Imagine a scenario, beginning sometime next year, when website owners start noticing significant numbers of visitors with IE 10 browsers swinging by their sites. Imagine further that Microsoft has stuck to its guns, and all those IE 10 browsers have their flags set to “DNT.”

To me, this presents a huge opportunity for the owner of a site to engage with its readers, and explain quite clearly the fact that good content on the Internet is paid for by good marketing on the Internet. And good marketing often needs to use “tracking” data so as to present quality advertising in context. (The same really can and should be said of content on the web – but I’ll just stick to advertising for now).

Advertising and content have always been bound together – in print, on television, and on the web. Sure, you can skip the ad – just flip the page, or press “ffwd” on your DVR. But great advertising, as I’ve long argued, adds value to the content ecosystem, and has as much a right to be in the conversation as does the publisher and the consumer.

Do Not Track provides our industry with a rare opportunity to speak out and explain this fact, and while the dialog box I’ve ginned up at the top of this post is fake, I’d love to see a day when boxes like it are popping up all over the web, reminding consumers not only that quality content needs to be supported, but that the marketers supporting it deserve our attention as well.

At present, the conversation between content creator, content consumer, and marketer is poorly instrumented and rife with mistrust. Our industry’s “ad choices” self-regulatory regime – those little triangle icons you see all over display ads these days – is a good start. But we’ve a long way to go. Perhaps unwittingly, Microsoft may be pushing us that much faster toward a better future.

*I am on the board of the IAB, one of the major industry trade groups which promotes self-regulation. The opinions here are my own, as usual. 


In 1844, Morse Gets The Scoop, Then Tweets His Dinner

By - June 07, 2012

I’m reading a fascinating biography of Samuel Morse – Lightning Man: The Accursed Life Of Samuel F.B. Morse by Kenneth Silverman. I’ll post a review in a week or so, but one scene bears a quick post.

Morse successfully demonstrated his telegraph between Baltimore and Washington DC in May of 1844. Three days later the Democratic party convention commenced in Baltimore. In what turned out to be a masterstroke of “being in the right place at the right time,” Morse’s telegraph line happened to be in place to relay news of the convention back to the political classes in DC.

Recall, this was at a time when news was carried by horseback or, in the best case, by rail. It took hours for messages to travel between cities like Baltimore and DC – and they were just 45 miles apart.

Adding to the sensationalism of the telegraph’s public debut, the Democratic convention of 1844 was fraught with controversy and political implication – candidates’ fortunes turned on nation-changing issues such as whether to reclaim Oregon from the British, and whether to annex Texas into the Union, which had serious implications for a growing movement for the abolition of slavery.

Remember, this was 17 years before the Civil War began, and just 30-odd years after the War of 1812, during which the British torched the House of Representatives.

Morse, who by his fifties had endured nearly a dozen years of false starts, failures, near-bankruptcy, and more, turned out to be a master publicist. He positioned his partner Alfred Vail at the convention and himself near Congress. Vail began sending regular reports on the convention to Morse, who was surrounded by hundreds of reporters, Senators, and other dignitaries in DC. News came in short bursts familiar to anyone who’s spent time on Twitter or Facebook. In the “conversation,” most likely the first of its kind to report news in real time, all of Washington learned that the “dark horse” candidate James Polk, who supported bringing Texas into the Union, would prevail.

It makes for fascinating reading, with a funny kicker at the end:

V[ail] Mr. Brewster of Pa is speaking in favour of Buchanan

M[orse] yes….

V Mr Brewster says his delegation go for VB but if VB’s friends desert them, the Delegation go for Buchanan…. The vote taken will be nearly unanimous for J K Polk & harmony & union are restored

M Is it a fact or a mere rumor

V Wait till the ballot comes…. Illinois goes for Polk … Mich goes for Polk. Penn asks leave to correct her error so as to give her whole vote for Polk….

M Intense anxiety prevails to … hear the result of last Balloting

V Polk is unanimously nom

M 3 cheers have been given here for Polk and 3 for the Telegraph.

V Have you had your dinner

M yes have you

V yes what had you

M mutton chop and strawberries

And so began a revolution in communications and industry. But even back then, folks shared both the extraordinary and the mundane across the wires….

 

 

On Small, Intimate Data

By - May 29, 2012

Part of the research I am doing for the book involves trying to get my head around the concept of “Big Data,” given the premise that we are in a fundamental shift to a digitally driven society. Big data, as you all know, is super hot – Facebook derives its value because of all that big data it has on you and me, Google is probably the original consumer-facing big data company (though Amazon might take issue with that), Microsoft is betting the farm on data in the cloud, Splunk just had a hot IPO because it’s a Big Data play, and so on.

But I’m starting to wonder if Big Data is the right metaphor for all of us as we continue this journey toward a digitally enhanced future. It feels so – impersonal – Big Data is something that is done to us or without regard for us as individuals. We need a metaphor that is more about the person, and less about the machine. At the very least, it should start with us, no?

Elsewhere I’ve written about the intersection of data and the platform for that data – expect a lot more from me on this subject in the future. But in short, I am unconvinced that the current architecture we’ve adopted is ideal – one where all “our” data, along with the data created by that data’s co-mingling with other data, lives in “cloud” platforms controlled by large corporations whose terms and values we may or may not agree with (or even pay attention to, though some interesting folks are starting to). And the grammar and vocabulary now seeping into our culture are equally mundane and bereft of the subject’s true potential – the creation, sharing and intermingling of data is perhaps the most important development of our generation, in terms of the potential good it can create in the world.

At Web 2 last year a significant theme arose around the idea of “You Are the Platform,” driven by people and companies like Chris Poole, Mozilla, Singly, and many others. I think this is an under-appreciated and important idea for our industry, and it centers around, to torture a phrase, the idea of “small” rather than Big Data. To me, small means limited, intimate, and actionable by individuals. It’s small in the same sense that the original web was “small pieces loosely joined” (and the web itself was “big.”)  It’s intimate in that it’s data that matters a lot to each of us, and that we share with much the same kind of social parameters that might constrain a story at an intimate dinner gathering, or a presentation at a business meeting. And should we choose to share a small amount of intimate data with “the cloud,” it’s important that the cloud understand the nature of that data as distinct from its masses of “Big Data.”

An undeveloped idea, to be sure, but I wanted to sketch this out today before I leave for a week of travel.

The Audacity of Diaspora

By - May 13, 2012

Last Friday Businessweek ran a story on Diaspora, a social platform built from what might be called Facebook anti-matter. It’s a great read that chronicles the project’s extraordinary highs and lows, from Pebble-like Kickstarter success to the loss of a founder to suicide. Given the overwhelming hype around Facebook’s IPO this week, it’s worth remembering such a thing exists – and even though it’s in private beta, Diaspora is one of the largest open source projects going right now, and boasts around 600,000 beta testers.

I’ve watched Diaspora from the sidelines, but anyone who reads this site regularly will know that I’m rooting for it. I was surprised – and pleased – to find out that Diaspora is executing something of a “pivot” – retaining its core philosophy of being a federated platform where “you own your own data” while at the same time adding new Tumblr and Pinterest-like content management features, as well as integration with – gasp! – Facebook.  And this summer, the core team behind the service is joining Y Combinator in the Valley – a move that is sure to accelerate its service from private beta to public platform.

I like Diaspora because it’s audacious, it’s driven by passion, and it’s very, very hard to do. After all, who in their right mind would set as a goal taking on Facebook? That’s sort of like deciding to build a better search engine – very expensive, with a high likelihood of failure. But what’s really audacious is the vision that drives Diaspora – that everyone owns their own data, and everyone has the right to do with it what they want. The vision is supported by a federated technology platform – and once you federate, you lose central control as a business. Then, business models get very, very hard. So you’re not only competing against Facebook, you’re also competing against the reality of the marketplace – centralized domains are winning right now (as I pointed out here).

It seems what Diaspora is attempting to do is take the functionality and delight of the dependent web, and mix it with the freedom and choice offered by the independent web. Of course, that sounds pretty darn good to me.

Given the timing of Facebook’s public debut, the move to Y Combinator, and perhaps just my own gut feel, I think Diaspora is one to watch in coming months. As of two days ago, the site is taking registrations for its public debut. Sign up here.

Larry Lessig on Facebook, Apple, and the Future of “Code”

By - May 09, 2012

Larry Lessig is an accomplished author, lawyer, and professor, and until recently, was one of the leading active public intellectuals in the Internet space. But as I wrote in my review of his last book (Is Our Republic Lost?), in the past few years Lessig has changed his focus from Internet law to reforming our federal government.

But that doesn’t mean Lessig has stopped thinking about our industry, as the dialog below will attest. Our conversation came about last month after I finished reading Code and Other Laws of Cyberspace, Version 2. The original book, written in 1999, is still considered an authoritative text on how the code of computing platforms interacts with our legal and social codes. In 2006, Lessig “crowdsourced” an update to his book, and released it as “Version 2.0.” I’d never read the updated work (and honestly didn’t remember the details of the first book), so finally, six years later, I dove in again.

It’s a worthy dive, but not an easy one. Lessig is a lawyer by nature, and his argument is laid out like proofs in a case. Narrative is sparse, and structure sometimes trumps writing style. But his essential point – that the Internet is not some open “wild west” destined to always be free of regulation – is soundly made. In fact, Lessig argues, the Internet is quite possibly the most regulable technology ever invented, and if we don’t realize that fact, and protect ourselves from it, we’re in for some serious pain down the road. And for Lessig, the government isn’t the only potential regulator. Instead, Lessig argues, commercial interests may become the most pervasive regulators on the Internet.

Indeed, during the seven years between Code’s first version and its second, much had occurred to prove Lessig’s point. But even as Lessig was putting the finishing touches on the second version of his manuscript, a new force was erupting from the open web: Facebook. And a year after that, the iPhone redefined the Internet once again.

In Code, Lessig enumerates several examples of how online services create explicit codes of control – including the early AOL, Second Life, and many others. He takes the reader through important lessons in understanding regulation as more than just governmental – explaining normative (social), market (commercial), and code-based (technological) regulation. He warns that once we commit our lives to commercial services that hold our identity, a major breach of security will most likely force the government into enacting overzealous and anti-constitutional measures (think 9/11 and the Patriot Act). He makes a case for the proactive creation of an intelligent identity layer for the Internet, one that might offer just the right amount of information for the task at hand. In 2006, such an identity layer was a controversial idea – no one wanted the government, for example, to control identity on the web.

But for reasons we’re still parsing as a culture, in the six years since the publication of Code v2, nearly 1 billion of us have become comfortable with Facebook as our de facto identity, and hundreds of millions of us have become inhabitants of Apple’s iOS.

Instead of going into more detail on the book (as I have in many other reviews), I thought I’d reach out to Lessig and ask him about this turn of events. Below is a lightly edited transcript of our dialog. I think you’ll find it provocative.

As to the book: If you consider yourself active in the core issues of the Internet industry, do yourself a favor and read it. It’s worth your time.

Q: After reading your updated Code v2, which among many other things discusses how easily the Internet might become far more regulated than it once was, I found myself scribbling one word in the margins over and over again. That word was “Facebook.”

You and your community updated your 1999 classic in 2006, a year or two before Facebook broke out, and several years before it became the force it is now. In Code you cover the regulatory architectures of places where people gather online, including MUDS, AOL, and the then-hot darling known as Second Life. But the word Facebook isn’t in the text.

What do you make of Facebook, given the framework of Code v2?

Lessig: If I were writing Code v3, there’d be a chapter — right after I explained the way (1) code regulates, and (2) commerce will use code to regulate — titled: “See, e.g., Facebook.” For it strikes me that no phenomenon since 2006 better demonstrates precisely the dynamic I was trying to describe. The platform is dominant, and built into the platform are a million ways in which behavior is regulated. And among those million ways are 10 million instances of code being used to give to Facebook a kind of value that without code couldn’t be realized. Hundreds of millions from across the world live “in” Facebook. It, more directly (regulating behavior) than any government, structures and regulates their lives while there. There are of course limits to what Facebook can do. But the limits depend upon what users see. And Facebook has not yet committed itself to the kind of transparency that should give people confidence. Nor has it tied itself to the earlier and enabling values of the internet, whether open source or free culture.

Q: Jonathan Zittrain wrote his book two years after Code v2, and warned of non-generative systems that might destroy the original values of the Internet. Since then, Apple iOS (the “iWorld”) and Facebook have blossomed, and show no signs of slowing down. Do you believe we’re in a pendulum swing, or are you more pessimistic – that consumers are voting with their dollars, devices, and data for a more closed ecosystem?

Lessig: The trend JZ identified is profound and accelerating, and most of us who celebrate the “free and open” net are simply in denial. Facebook now lives oblivious to the values of open source software, or free culture. Apple has fully normalized the iNannyState. And unless Google’s Android demonstrates how open can coexist with secure, I fear the push away from our past will only continue. And then when our i9/11 event happens — meaning simply a significant and destructive cyber event, not necessarily tied to any particular terrorist group — the political will to return to control will be almost irresistible.

The tragedy in all this is that it doesn’t have to be this way. If we could push to a better identity layer in the net, we could get both better privacy and better security. But neither side in this extremist’s battle is willing to take the first step towards this obvious solution. And so in the end I fear the extremists I like least will win.

Q: You seem profoundly disappointed in our industry. What can folks who want to make a change do?

Lessig: Not at all. The industry is doing what industry does best — doing well, given the rules as they are. What industry is never good at (and is sometimes quite evil at) is identifying the best mix of rules. Government is supposed to do something with that. Our problem is that we have today such a fundamentally dysfunctional government that we don’t even recognize the idea that it might have a useful role here. So we get stuck in these policy-dead-ends, with enormous gains to both sides left on the table.

And that’s only to speak about the hard problems — which security in the Net is. Much worse (and more frustrating) are the easy problems which the government also can’t solve, not because the answer isn’t clear (again, these are the easy problems) but because the incumbents are so effective at blocking the answer that makes more sense so as to preserve the answer that makes them more dollars. Think about the “copyright wars” — practically every sane soul is now focused on a resolution of that war that is almost precisely what the disinterested souls were arguing a dozen years ago (editor’s note: abolishing DRM). Yet the short-termism of the industry wouldn’t allow those answers a dozen years ago, so we have had a completely useless war which has benefited no one (save the lawyers-as-soldiers in that war). We’ve lost a decade of competitive innovation in ways to spur and spread content in ways that would ultimately benefit creators, because the dinosaurs owned the lobbyists.

—-

I could have gone on for some time with Lessig, but I wanted to stop there, and invite your questions in the comments section. Lessig is pretty busy with his current work, which focuses on those lobbyists and the culture of money in Congress, but if he can find the time, he’ll respond to your questions in the comments below, or to me in email, and I’ll update the post.

###

Other works I’ve reviewed: 

You Are Not A Gadget by Jaron Lanier (review)

Wikileaks And the Age of Transparency  by Micah Sifry (review)

Republic Lost by Larry Lessig (review)

Where Good Ideas Come From: A Natural History of Innovation by Steven Johnson (my review)

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (my review)

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

Jaron Lanier: Something Doesn’t Smell Right

By - May 08, 2012

Jaron Lanier’s You Are Not A Gadget has been on my reading list for nearly two years, and if nothing else comes of this damn book I’m trying to write, it’ll be satisfying to say that I’ve made my way through any number of important works that for one reason or another, I failed to read up till now.

I met Jaron in the Wired days (that’d be 20 years ago) but I don’t know him well – as with Sherry Turkle and many others, I encountered him through my role as an editor, then followed his career with interest as he veered from fame as a virtual reality pioneer into his current role as chief critic of all things “Web 2.0.” Given my role in that “movement” – I co-founded the Web 2 conferences with Tim O’Reilly in 2004 – it’d be safe to assume that I disagree with most of what Lanier has to say.

I don’t. Not entirely, anyway. In fact, I came away, as I did with Turkle’s work, feeling a strange kinship with Lanier. But more on that in a moment.

In essence, You Are Not A Gadget is a series of arguments, some concise, others a bit shapeless, centering on one theme: Individual human beings are special, and always will be, and digital technology is not a replacement for our humanity. In particular, Lanier is deeply skeptical of any kind of machine-based mechanism that might be seen as replacing or diminishing our specialness, which over the past decade, Lanier sees happening everywhere.

Lanier is most eloquent when he describes, late in the book, what he believes humans to be: the result of a very long, very complicated interaction with reality (sure, irony alert given Lanier’s VR fame, but it makes sense when you read the book):

I believe humans are the result of billions of years of implicit, evolutionary study in the school of hard knocks. The cybernetic structure of a person has been refined by a very large, very long, and very deep encounter with physical reality.

Lanier worries we’re losing that sense of reality. From crowdsourcing and Wikipedia to the Singularity movement, he argues that we’re starting to embrace a technological philosophy that can only lead to loss. Early in the book, he writes:

“…certain specific, popular internet designs of the moment…tend to pull us into life patterns that gradually degrade the ways in which each of us exists as an individual. These unfortunate designs are more oriented toward treating people as relays in a global brain….(this) leads to all sorts of maladies….”

Lanier goes on to give specific examples: the online tracking associated with advertising; the concentration of power in the hands of the “lords of the clouds” such as Microsoft, Facebook, Google, and even Goldman Sachs; the loss of analog musical notation; the rise of locked-in, fragile, and impossibly complicated software programs; and ultimately, the demise of the middle class. It’s a potentially powerful argument, and one I wish Lanier had made more completely. Instead, after reading his book, I feel forewarned, but not quite forearmed.

Lanier singles out many of our shared colleagues – the leaders of the Web 2.0 movement – as hopelessly misguided, labeling them “cybernetic totalists” who believe technology will solve all problems, including that of understanding humanity and consciousness. He worries about the fragmentation of our online identity, and warns that Web 2 services – from blogs to Facebook – lead us to leave little pieces of ourselves everywhere, feeding a larger collective, but resulting in no true value to the individual.

If you read my recent piece On Thneeds and the “Death of Display”, this might sound familiar, but I’m not sure I’d be willing to go as far as Lanier does in claiming all this behavior of ours will end up impoverishing our culture forever. I tend to be an optimist, Lanier, less so. He rues the fact that the web never implemented Ted Nelson’s vision of true hypertext – where the creator is remunerated via linked micro-transactions, for example. I think there were good reasons this system didn’t initially win, but there’s no reason to think it never will.

Lanier, an accomplished musician – though admittedly not a very popular one – is convinced that popular culture has been destroyed by the Internet. He writes:

Pop culture has entered into a nostalgic malaise. Online culture is dominated by trivial mashups of the culture that existed before the onset of mashups, and by fandom responding to the dwindling outposts of centralized mass media. It is a culture of reaction without action.

As an avid music fan, I’m not convinced. But Lanier goes further:

Spirituality is committing suicide. Consciousness is attempting to will itself out of existence…the deep meaning of personhood is being reduced by illusions of bits.

Wow! That’s some powerful stuff. But after reading the book, I wasn’t convinced about that, either, though Lanier raises many interesting questions along the way. One of them boils down to the concept of smell – the one sense that we can’t represent digitally. In a section titled “What Makes Something Real Is That It Is Impossible to Represent It To Completion,” Lanier writes:

It’s easy to forget that the very idea of a digital expression involves a trade-off with metaphysical overtones. A physical oil painting cannot convey an image created in another medium; it is impossible to make an oil painting look just like an ink drawing, for instance, or vice versa. But a digital image of sufficient resolution can capture any kind of perceivable image—or at least that’s how you’ll think of it if you believe in bits too much. Of course, it isn’t really so. A digital image of an oil painting is forever a representation, not a real thing. A real painting is a bottomless mystery, like any other real thing. An oil painting changes with time; cracks appear on its face. It has texture, odor, and a sense of presence and history.

This really resonates with me. In particular, the part about the odor. Turns out, odor is a pretty interesting subject. Our sense of smell is inherently physical – actual physical molecules of matter are required to enter our bodies and “mate” with receptors in our nervous system in order for us to experience an odor:

Olfaction, like language, is built up from entries in a catalog, not from infinitely morphable patterns. …the world’s smells can’t be broken down into just a few numbers on a gradient; there is no “smell pixel.”

Lanier suspects – and I find the theory compelling – that olfaction is deeply embedded in what it means to be human. Certainly such a link presents a compelling thought experiment as we transition to a profoundly digital world. I am very interested in what it means for our culture that we are truly “becoming digital,” that we are casting shadows of data in nearly everything we do, and that we are struggling to understand, instrument, and respond socially to this shift. I’m also fascinated by the organizations attempting to leverage that data, from the Internet Big Five to the startups and behind the scenes players (Palantir, IBM, governments, financial institutions, etc) who are profiting from and exploiting this fact.

But I don’t believe we’re in early lockdown mode, destined to digital serfdom. I still very much believe in the human spirit, and am convinced that if any company, government, or leader pushes too hard, we will “sniff them out,” and they will be routed around. Lanier is less complacent: he is warning that if we fail to wake up, we’re in for a very tough few decades, if not worse.

Lanier and I share any number of convictions, regardless. His prescriptions for how to ensure we don’t become “gadgets” might well have been the inspiration for my post Put Your Taproot Into the Independent Web, for example (he implores us to create, deeply, and not be lured into expressing ourselves solely in the templates of social networking sites). And he reminds readers that he loves the Internet, and pines, a bit, for the way it used to be, before Web 2 and Facebook (and, one must assume, Apple) rebuilt it into forms he now decries.

I pine a bit myself, but remain (perhaps foolishly) optimistic that the best of what we’ve created together will endure, even as we journey onward to discover new ways of valuing what it means to be a person. And I feel lucky to know that I can reach out to Jaron – and I have – to continue this conversation, and report the results of our dialog on this site, and in my own book.

Next up: A review (and dialog with the author) of Larry Lessig’s Code And Other Laws of Cyberspace, Version 2.

Other works I’ve reviewed:

Wikileaks And the Age of Transparency  by Micah Sifry (review)

Republic Lost by Larry Lessig (review)

Where Good Ideas Come From: A Natural History of Innovation by Steven Johnson (my review)

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (my review)

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

On Thneeds and the “Death of Display”

By - May 07, 2012

It’s all over the news these days: Display advertising is dead. Or put more accurately, the world of “boxes and rectangles” is dead. No one pays attention to banner ads, the reasoning goes, and the model never really worked in the first place (except for direct response). Brand marketers are demanding more for their money, and “standard display” is simply not delivering. After nearly 20 years*, it’s time to bury the banner, and move on to….

…well, to something else. Mostly, if you believe the valuations these days, to big platforms that have their own proprietary ad systems.

All over the industry, you’ll find celebration of new advertising-driven platforms that have eschewed the “boxes and rectangles” model. Twitter makes money off its native “promoted” suite of marketing tools. Tumblr just this week rolled out a similar offering. Pinterest recently hired Facebook’s original monetization wizard to create its own advertising model, separate from standard display. And of course there’s Facebook, which has gone so far as to call its new products “Featured Stories” (as opposed to “Ads” – which is what they are). Lastly, we mustn’t forget the granddaddy of native advertising platforms, Google, whose search ads redefined the playing field more than a decade ago (although AdSense, it must be said, is very much in the “standard display” business).

Together, these platforms comprise what I’ve come to call the “dependent web,” and they live in a symbiotic relationship with what I call the “independent web.”

But there’s a very big difference between the two when it comes to revenue and perceived value. Dependent web companies are, in short, killing it. Facebook is about to go public at a valuation of $100 billion. Twitter is valued at close to $10 billion. Pinterest is rumored to be worth $4 billion, and who knows what Tumblr’s worth now – it was nearly $1 billion, on close to no revenues, last Fall. And of course Google has a market cap of around $200 billion.

Independent web publishers? With a few exceptions, they’re not killing it. They aren’t massively scaled platforms, after all, they’re often one or two person shops. If “display is dead,” then, well – they’re getting killed.

That’s because, again with few exceptions, independent web sites rely on the “standard display” model to scratch out at least part of a living. And that standard display model was not built to leverage the value of great content sites: engagement with audience. Boxes and rectangles on the side or top of a website simply do not deliver against brand advertising goals. Like it or not, boxes and rectangles have for the most part become the province of direct response advertising, or brand advertising that pays, on average, as if it’s driven by direct response metrics. And unless you’re running a high-traffic site about asbestos lawsuits, that just doesn’t pay the bills for content sites.

Hence, the rolling declaration of display’s death – often by independent industry news sites plastered with banners, boxes and rectangles.

But I don’t think online display is dead. It just needs to be rethought, re-engineered, and reborn. Easy, right?

Well, no, because brand marketers want scale and proof of ROI – and given that any new idea in display has to break out of the box-and-rectangle model first, we’ve got a chicken and egg problem with both scale and proof of value.

But I’ve noticed some promising sprigs of green pushing through the earth of late. First of all, let’s not forget the growth and success of programmatic buying across those “boxes and rectangles.” Using data and real-time bidding, demand- and supply-side platforms are growing very quickly, and while the average CPM is low, there is a lot of promise in these new services – so much so that FMP recently joined forces with one of the best, Lijit Networks. Another promising development is the video interstitial. Once anathema to nearly every publisher on the planet, this full-page unit is now standard on the New York Times, Wired, Forbes, and countless other publishing sites. And while audiences may balk at seeing a full-page video ad after clicking from a search engine or other referring agent, the fact is, skipping the ad is about as hard as turning the page in a magazine. And in magazines, full-page ads work for marketers.
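
To make “programmatic” slightly more concrete, here is a deliberately simplified sketch of what a real-time bidding auction boils down to. The shapes and field names below are hypothetical (real exchange specs carry far more detail), but the core loop is just this: describe the impression, collect priced bids from demand-side platforms, and let the highest CPM win.

```typescript
// Hypothetical, simplified shapes; real bid requests carry much more detail.
interface BidRequest {
  pageUrl: string;              // where the ad will appear
  adSlot: string;               // e.g. "300x250-sidebar"
  audienceSegments: string[];   // data attached to the impression, if any
}

interface Bid {
  buyer: string;
  cpmDollars: number;           // price offered per thousand impressions
  creativeUrl: string;
}

// The exchange collects bids for a given impression and picks the winner.
function runAuction(request: BidRequest, bids: Bid[]): Bid | undefined {
  return bids
    .filter((b) => b.cpmDollars > 0)
    .sort((a, b) => b.cpmDollars - a.cpmDollars)[0];
}

const winner = runAuction(
  { pageUrl: "https://example-blog.com/post", adSlot: "300x250-sidebar", audienceSegments: ["food", "baking"] },
  [
    { buyer: "dsp-one", cpmDollars: 0.6, creativeUrl: "https://ads.example/a" },
    { buyer: "dsp-two", cpmDollars: 0.85, creativeUrl: "https://ads.example/b" },
  ],
);
console.log(winner?.buyer); // "dsp-two" -- the higher CPM takes the impression
```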

Another is what many are now calling “native advertising” (sure to be confused with Twitter, Tumblr, and others’ native advertising solutions…). Over at Digiday, which has been doing a bang-up job covering the display story, you’ll see debate about the growth of publisher-based “native advertising units,” which are units that run in the editorial well, and are often populated with advertiser-sponsored content. FMP has been doing this kind of advertising for nearly three years, and of course it pioneered the concept of content marketing back in 2006. The key to success here, we’ve found, is getting context right, at scale, and of course providing transparency (i.e., don’t try to fool an audience; they’re far smarter than that).

And lastly, there are the new “Rising Star” units from the IAB (where I am a board member). These are, put quite simply, reimagined, larger and more interactive boxes and rectangles. A good step, but not a panacea.

So as much as I am rooting for these new approaches to display, and expect that they will start to be combined in ways that really pay off for publishers, they have a limitation: they’re focused on what I’ll call a “site-specific” model: for a publisher to get rewarded for creating great content, that publisher must endeavor to bring visitors to their site so those visitors can see the ads.  If we look toward the future, that’s not going to be enough. In an ideal Internet world, great content is rewarded for being shared, reposted,  viewed elsewhere and yes, even “liked.”

Up to now, that reward has had one single currency: Traffic back to the site.

Think of the largest referrers of traffic to the “rest of the web” – who are they? Yep – the very same companies with huge valuations – Google, Facebook, Twitter, and now Pinterest. What do they have in common? They’ve figured out a way to leverage the content created by the “rest of the web” and resell it to marketers at scale and for value (or, at least VCs believe they will soon). It’s always been an implicit deal, starting with search and moving into social: We cull, curate, and leverage your content, and in return, we’ll send traffic back to your site.

But given that we’re in for an extended transition from boxes and rectangles to ideas that, we hope, are better over time, well, that traffic deal just isn’t enough. It’s time to imagine bigger things.

Before we do, let’s step back for a moment and consider the independent web site. The…content creator. The web publisher. The talent, if you will. The person with a voice, an audience, a community. The hundreds of thousands (millions, really) of folks who, for good or bad, have plastered banners all over their site in the hope that perhaps the checks might get a bit bigger next month. (Of course this includes traditional media sites, like publishers who made their nut in print, for example). To me, these people comprise the equivalent of forests in the Internet’s ecosystem. They create the oxygen that feeds much of our world: Great content, great engagement, and great audiences.

Perhaps I’m becoming a cranky old man, a Lorax, if you must, but I’m going to jump up on a stump right now and say it: curation-based platform models that harvest the work of great content creators, creating “Thneeds” along the way, are failing to see the forest for the trees. Their quid pro quo deal to “send more traffic” ain’t enough.**

It’s time that content creators derived real value from the platforms they feed. A new model is needed, and if one doesn’t emerge (or is obstructed by the terms of service of large platforms), I worry about the future of the open web itself. If we, as an industry, don’t get just a wee bit better at taking care of content creators, we’re going to destroy our own ecosystem – and we’ll watch the Pinterests, Twitters, and yes, even the Google and Facebooks of the world deteriorate for lack of new content to curate.

Put another way: Unless someone cares, a whole awful lot…it isn’t going to get better. It’s not.

Cough.

So I’m here to say not only do I care, so do the hundreds of people working at Federated Media Publishing and Lijit, and at a burgeoning ecosystem of companies, publishers, and marketers who are coming to realize it’s time to wake up from our “standard display” dream and create some new models. It’s not the big platforms’ job to create that model – but it will be their job to not stand in the way of it.

So what might a new approach look like? Well first and foremost, it doesn’t mean abandoning the site-specific approach. Instead, I suggest we augment that revenue stream with another, one that ties individual “atomic units” of content to similar “atomic units” of marketing messaging, so that together they can travel the Seussian highways of the social web with a business model intact.
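
Purely as an illustration (the names below are hypothetical, not a proposed standard), think of that atomic unit as a small bundle: the content itself plus the underwriting that pays for it, packaged so the two stay together wherever the content ends up.

```typescript
// Hypothetical shapes for an "atomic unit" of content paired with its sponsor.
interface ContentUnit {
  id: string;
  author: string;
  canonicalUrl: string;   // points back to the creator's own site
  body: string;           // the full content, not just a teaser snippet
}

interface SponsorshipUnit {
  sponsor: string;
  message: string;
  clickUrl: string;
}

// The pairing is what travels the social web: wherever the content is
// embedded or reshared, the underwriting (and the revenue) goes with it.
interface PortableUnit {
  content: ContentUnit;
  sponsorship: SponsorshipUnit;
}
```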

Because if the traffic referral game has proven anything to us as publishers, it’s that great content doesn’t want to be bound to one site. The rise of Pinterest, among others, proves this fact. Ideally, content should be shared, mixed, mashed, and reposted – it wants to flow through the Internet like water. This was the point of RSS, after all – a technology that has actually been declared dead more often than the lowly display banner. (For those of you too young to recall RSS, it’s a technology that allows publishers to share their content as “feeds” to any third party.)

RSS has, in the main, “failed” as a commercial entity because publishers realized they couldn’t make money by allowing people to consume their content “offsite.” The tyranny of the site-specific model forced most commercial publishers to use RSS only for display of headlines and snippets of text – bait, if you will, to bring audiences back to the site.
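
For readers who never lived with RSS, here is roughly what that trade-off looks like inside the feed itself. The feed below is a made-up example, with one “bait” item (snippet only, read the rest on the site) and one full-text item.

```typescript
// A made-up RSS 2.0 feed: the first item is bait, the second is the whole post.
const exampleFeed = `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example-blog.com/</link>
    <item>
      <title>Snippet-only post</title>
      <link>https://example-blog.com/snippet-only-post</link>
      <description>The first 200 characters, then a link back to the site...</description>
    </item>
    <item>
      <title>Full-text post</title>
      <link>https://example-blog.com/full-text-post</link>
      <description>The entire post, readable in any feed reader, no visit required.</description>
    </item>
  </channel>
</rss>`;

// A real consumer would use a proper feed parser; a simple pattern match is
// enough here to show that the titles (channel and items) are all in the feed.
const titles = [...exampleFeed.matchAll(/<title>([^<]*)<\/title>/g)].map((m) => m[1]);
console.log(titles); // [ "Example Blog", "Snippet-only post", "Full-text post" ]
```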

I’ve written about the implications of RSS and its death over and over again, because I love its goal of weaving content throughout the Internet. But each time I’ve considered RSS, I’ve found myself wanting for a solution to its ills. I love the idea of content flowing any and everywhere around the Internet, but I also understand and sympathize with the content creator’s dilemma: If my content is scattered to the Internet’s winds, consumed on far continents with no remuneration to me, I can’t make a living as a content creator. So it’s no wonder that the creator swallows hard, and limits her RSS feed in the hopes that traffic will rise on her site (a few intrepid souls, like me, keep their RSS feeds “full text.” But I don’t rely on this site, directly, to make a living.)

So let’s review. We now have three broken or limping models in independent Internet publishing: the traffic-hungry site-specific content model, the “standard display” model upon which it depends, and the RSS model, which failed due to lack of “monetization.”

But inside this seeming mess, if you stare long and hard enough, there are answers staring back at you. In short, it’s time to leverage the big platforms for more than just traffic. It’s time to do what the biggest holders of IP (the film and TV folks) have already done – go where the money is. But this time, the approach needs to be different.

I’ve already hinted at it above: Wrap content with appropriate underwriting, and set it free to roam the Internet. Of course, such a system will have to navigate business process rules (the platforms’ Terms of Service), and break free of scale and ROI constraints. I believe this can be done.

But given that I’m already at 2500 words, I think I’ll be writing about that approach in a future post. Stay tuned, and remember – “Unless….”

———

*As a co-founder of Wired, I had a small part to play in the banner’s birth – the first banner ran on HotWired in 1994. It had a 78% clickthrough rate. 

**Using ad networks, the average small publisher earns about seventy-five cents per thousand on her display ads. Let’s do the math. Let’s say Molly the Scone Blogger gets an average of 50,000 page views a month, pretty good for a food blogger. We know the average ad network pays about 65 to 85 cents per thousand ad impressions at the moment (low for the reasons explained above, despite the continuing efforts of the industrial ad technology complex to raise those prices with data and context). And let’s say Molly puts two ads per page on her site. That gives her 100,000 impressions a month – one hundred “thousands” to sell, at around 75 cents a thousand. This means Molly gets a check for about $75 each month. Now, Molly loves her site, and loves her audience and community, and wants to make enough to do it more. Since her only leverage is increased traffic, she will labor at Pinterest, Twitter, Facebook, and Google+, promoting her content and doing her best to gain more audience.
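
Spelled out as arithmetic, Molly’s math looks like this:

```typescript
// Molly the Scone Blogger's monthly take, using the numbers from the footnote.
const pageViewsPerMonth = 50_000;
const adsPerPage = 2;
const cpmDollars = 0.75; // roughly what ad networks pay per thousand impressions

const impressions = pageViewsPerMonth * adsPerPage;        // 100,000 impressions
const monthlyRevenue = (impressions / 1_000) * cpmDollars; // 100 "thousands" x $0.75

console.log(monthlyRevenue); // 75 -- about $75 a month
```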

Perhaps she can double her traffic, and her monthly income might go from $75 to $150. That helps with the groceries, but it’s a terrible return on invested time. So what might Molly do? Well, if she can’t join a higher-paying network like FMP, she may well decide to abandon content creation all together. And when she stops investing in her own site, guess what happens? She’s not creating new content for Pinterest, Twitter, Facebook and Google to harvest, and she’s not using those platforms to distribute her content.

For the past seven years, it’s been FMP’s business to get people like Molly paid a lot more than 75 cents per thousand audience members. We’re proud of the hundred-plus million dollars we’ve injected into the independent web, but I have to be honest with you. There are way more Mollys in the world than FMP can help – at least under current industry conditions. And while we’ve innovated like crazy to create value beyond standard banners, it’s going to take more to ensure content creators get paid appropriately. It’s time to think outside the box.

—-

Special thanks to folks who have been helping me think through this issue, including Deanna Brown and the FMP team, Randall Rothenberg of the IAB, Brian Monahan, Chas Edwards, Jeff Dachis, Brian Morrissey, and many, many more. 

 

What Doesn’t the Valley Understand About Washington?

By - April 17, 2012

A few weeks ago I ventured to our nation’s capital to steep in its culture a bit, and get some first hand reporting done for the book. I met with about a dozen or so folks, including several scholars, the heads of the FCC and FTC, and senior folks in the Departments of Commerce and State. I also spoke to a lobbyist from the Internet industry, as well as people from various “think tanks” that populate the city. It was my first such trip, but it certainly won’t be my last.

Each of the conversations was specific to the person I was interviewing, but I did employ one device to tie them together – I asked each person the same set of questions toward the end of the conversation. And as I was on the plane home, I wrote myself a little reminder to post about the most interesting set of answers I got, which was to this simple question: What doesn’t the Valley understand about Washington?

It’s not a secret that the Valley, as a whole, has an ambivalent attitude toward DC. Until recently, the prevailing philosophy has trended libertarian – just stay out of the way, please, and let us do what we do best. Just about every startup CEO I’ve ever known – including myself – ignores Washington in the early years of a company’s lifecycle. Government is treated like plumbing – it’s dirty, it costs too much, it’s preferably someone else’s job, and it’s ignored until it stops working the way we want it to.

SOPA and PIPA are the classic example of the plumbing going out – and the Internet’s response to them was the topic of many of my conversations last month. Sure, “we” managed to stop some stupid legislation from passing, but the fact is, we almost missed it, and Lord knows what else we’re missing due to our refusal to truly engage with the instrument of our shared governance.

To be fair, in the past few years a number of major Internet companies have gotten very serious about joining the conversation in DC – Google is perhaps the most serious of them all (I’m not counting Microsoft, which got pretty serious back in 1997 when it lost an antitrust suit). Now, one can argue that like Microsoft before it, Google’s seriousness is due to how interested Washington has become in Google, but regardless, it was interesting to hear from source after source how they respected Google for at least fully staffing a presence in DC.

Other large Internet companies also have offices in Washington, but from what I hear, they are not that effective beyond very narrow areas of interest. Two of the largest e-commerce companies in the world have a sum total of eight people in DC, I was told by a well-placed source. Eight people can’t get much done when you’re dealing with regulatory frameworks around fraud, intellectual property, international trade, infrastructure and spectrum policy, and countless other areas of regulation that matter to the Internet.

In short, and perhaps predictably, nearly everyone I spoke to in Washington told me that the Valley’s number one issue was its lack of engagement with the government. But the answers were far more varied and interesting than that simple statement. Here they are, without attribution, as most of my conversations were on background pending clearance of actual quotes for the book:

– The Valley doesn’t understand the threat that comes from Washington. Put another way, our industry figures it out too late. The Valley doesn’t understand how much skin it already has in the game. “When things are bent in the right direction here, it can be a really good thing,” one highly placed government source told me. Washington is “dismissed, and when it’s dismissed you neither realize the upside nor mitigate the downside.”

– When the Valley does engage, it’s too lightly, and too predictably. Larger Valley companies get an office on K Street (where the lobbyists live) and hire an ex-Congressperson to lobby on that company’s core issues. But “that’s not where the magic is,” one source told me. The real magic is for companies to use their own platforms to engage with their customers in authentic conversations that get the attention of lawmakers. This happened – albeit very late – with SOPA/PIPA, and it got everyone’s attention in Washington. Imagine if this was an ongoing conversation, and not a one-off “Chicken Little” scenario?  Counter to what many believe about Washington, where money and lobbying connections are presumed to always win the day, “Fact-based arguments matter, a lot,” one senior policymaker told me. “Fact-based debates occur here, every day. If you take yourself out of that conversation, it’s like going into litigation without a lawyer.” Internet companies are uniquely positioned to change the approach to how lawmakers “hear” their constituents, but have done very little to actually leverage that fact.

– The Valley is too obsessed with the issue of privacy, one scholar told me. Instead, it should look to regulations around whether or not harm is being done to consumers. This was an interesting insight – and perhaps a way to think about protecting our data and our identities. There is already a thicket of regulations and laws around keeping consumers safe from the harmful effects of business practices. Perhaps we are paying attention to the wrong thing, this scholar suggested.

– The Valley assumes that bad legislation will be rooted out and defeated in the same way that SOPA and PIPA were. But that’s a faulty assumption. “The Valley is techno-deterministic, and presumes ‘we can engineer around it,'” one scholar told me. “They don’t realize they’ve already been blinkered – a subset of possible new technological possibilities has already been removed that they are not even aware of.” One example of this is the recent “white spaces” spectrum allocation, which while promising avenues of new market opportunity, was severely retarded by forces in Washington far more powerful than the Internet industry (more on this in another post).

– The framework of “us vs. them” is unproductive and produces poor results. The prevailing mentality in the valley, one well-connected scholar told me, is the “heroic techie versus the wicked regulator…Rather than just having libertarian abstractions about regulations versus freedom,” this source continued,  “it’s important to realize that in every single debate there are… regulations that strike better or worse balances between competing values. You just have to engage enough to defend the good ones.”

Put another way, as another senior government official told me, “The Valley doesn’t understand there are good and decent people here who really want to get things done.”

If I were to sum up the message from all my conversations in Washington, it’d be this: We’re here because as a society, we decided we needed people to help manage values we hold in common. Increasingly, the Internet is how we express those values. So stop ignoring us and hoping we’ll go away, and start engaging with us more. Decidedly better results will occur if you do.

I don’t pretend that one trip to DC makes me an expert on the subject (it surely does not), but I left DC energized and wanting to engage more than I have in the past. I hope you’ll feel the same.

(image: traveldk.com)

Larry Page Makes His Case

By - April 05, 2012

Given the headlines, questions, and legal actions Google has faced recently, many folks, including myself, have been wondering when Google’s CEO Larry Page would take a more public stance in outlining his vision for the company.

Well, today marks a shift of sorts, with the publication of a lengthy blog post from Larry titled, quite uninterestingly, 2012 Update from the CEO.

I’ve spent the past two days at Amazon and Microsoft, two Google competitors (and partners), and am just wrapping up a last meeting. I hope to read Page’s post closely and give you some analysis as soon as I can. Meanwhile, a few top line thoughts and points:

– Page pushes Google+ as a success, citing more than 100 million users, but still doesn’t address the question of whether the service is truly being used organically, rather than as a byproduct of interactions with other Google products. I’m not sure it matters, but it’s a question many have raised. He also doesn’t address, directly, the tempest over the integration of G+ into search.

– Page also does not directly address the issue of FTC privacy investigations into the company, which is not surprising, given that any company’s response to such investigations is usually “no comment.” However, Google might have explained with a bit more gusto the reasons for its recent changes.

– Page tosses out another big number, this one around Android: 850K activations a day. Take that, Apple!

– Page uses the words “love” and “beauty” – which I find both refreshing and odd.

– Page also talks about making big bets, focusing on fewer products, and how it’s OK to not be exactly sure how big bets are going to make money. This is a topic where Google has a ton of experience, to be sure.

More when I get out of my last meeting….

On The Future of The Web 2 Summit

By - April 04, 2012

By this time of year, most of you are used to hearing about this year’s Web 2 Summit theme, its initial lineup of speakers, and any other related goings on, like our annual VIP dinners or perhaps some crazy map I’ve dreamt up. It’s become a familiar ritual in early spring, and many of you have been asking what’s up with this year’s event, in particular given the success of both last year’s theme (The Data Frame) and its amazing lineup of speakers and attendees.

Truth is, we’re not going to do the Web 2 Summit this year, and I’m writing this post to explain why. For the most part, it has to do with my book, the subject of which was outlined in my previous post. As the person who focuses on the core product – the programming on the stage – I just could not pull off both writing a book and creating a pitch-perfect onstage program. It takes months and months of hard work to execute a conference like Web 2 (and not just by me). My partners at O’Reilly and UBM TechWeb are full to the brink with other conferences, and after months of discussions about how we might route around this problem, we all agreed there really wasn’t a way to do it. It’s not fun being the guy who stops the party, but in this case, I have to step up and take responsibility.

That’s not to say we won’t be back – we’re keeping our options open there. For now, the Web 2 Summit is on hiatus. Each of the partners will continue to produce conferences (I am doing five for FM this year alone, and have ideas about others in the works). We’re just letting the Web 2 Summit lie fallow for a year.

I want to note that the partnership the three of us have enjoyed these past eight years has been nothing short of extraordinary. It’s quite unusual for a three-way venture to work, much less thrive as Web 2 Summit has. I am deeply grateful to Tim O’Reilly, Tony Uphoff, and their teams. I also want to note that this decision has nothing to do with any debate or disagreement between us – it’s really due to my desire to focus my time on FM and my new book.

Taking this year off will give all of us a chance to reflect on what we’ve done, consider our options going forward, and then take action. Expect to hear from us again in the next few months, and thanks for being part of the Web 2 Summit community.