
Time To Begin, Again

By - October 19, 2012

Family, colleagues, and friends knew this day was coming, I knew it was coming, but here it is: I’ve rented a new place to write, a small, remote house directly on the beach, about 12 miles as the crow flies from my home in Marin county. It’s not a direct 12 miles – that crow would have to fly up about 2500 feet so as to clear the peak of Mt. Tamalpais. And that mountainous impediment is intentional – it takes close to the same time to ride a mountain bike from my home to this office as it does to drive one of several winding routes between here and there. I’m hoping that will spur me to take my commute by bicycle. I won’t be here every day, but I certainly hope to spend a fair bit of time here over the coming months.

I’ve added this new address to my long list of offices for one reason: To complete the book I’ve been talking about for nearly half a decade. That book began as an idea I called “The Conversation Economy,” but grew in both scope and ambition to encompass a much larger idea: an archaeology of the future, as seen through the digital artifacts of the present. Along the way, it’s changed a lot – 18 months ago, its title was “What We Hath Wrought.” Now, I’m thinking it’ll be called “If/Then.” I may yet call it “If/Then…Else” – or, as I wander through this journey, it might end up as something entirely different.

At this moment, I’m not certain. And that’s a bit scary.

I’ve made many false starts at this book, and I’ve failed on more than one occasion to truly commit to it. There are many reasons why, but I think the main one is that I believe this project requires that I place it first, ahead of anything else. And until recently, that’s simply been impossible. As readers know, up until this year, I ran the Web 2 Summit, which I put on hiatus this year so I could focus on the book. I’m also founder and Executive Chair of an Internet media startup, now in its seventh year. Federated Media Publishing has undergone many changes since 2005, and doubtless will see many more as it navigates what is an exciting and tumultuous media market. And because I’m a founder, I’ve always placed FMP ahead of anything else – even as I handed over CEO duties to a far more competent executive than myself 18 months ago.

In the past few months, I’ve been getting ready to put the book first, and it’s not an easy thing to do.  Not just because of the rapid evolution in the media business  (for more on that, see my “Death of Display” post), but because committing to a book project is an act of faith – faith that isn’t necessarily going to be rewarded.

Staring at a blank screen, knowing you have things to say, but not being certain how to say them, that’s just hard. I’ve been practicing for nearly a year. It’s time to get in the game.

I’ll still be a very active Chair at FMP, and I’ve got a few more long-planned trips to take, but for the most part, my calendar is cleared, and I’m ready to start. I’ve already spent the past year doing scores of interviews, reporting trips, and research on the book. I’ve got literally thousands of pages of notes and clips and sketches to go through. I’ve got many, many drafts of outlines and just as many questions to answer about where this book might take me. And of course, I’ll be writing out loud, right here, as I wander in the woods. I hope you’ll come along for the trip.


Super Sad True Love Story: A Review

By -

In my continuing quest to reflect on books I have found important to my own work, I give you a work of fiction, first published in 2010: Super Sad True Love Story: A Novel, by Gary Shteyngart, an acclaimed writer born in Russia and now living in the US. This is my first read of Shteyngart, known also for his previous works Absurdistan and The Russian Debutante’s Handbook, both of which established him as an important new literary voice (Ten Best Books – NYT, Book of the Year – Time, etc.). Of course, I was barely aware of Shteyngart until a friend insisted I read “Super Sad,” and I will forever be grateful for the recommendation.

Set in a future that feels about thirty years from now (the same timeframe as my pending book), Shteyngart’s story stars one Lenny Abramov, a schlumpy 39-year-old son of Jewish Russian immigrants who lives in New York City. Abramov works at a powerful corporation that sells promises of immortality to “High Net Worth” individuals. But he’s not your typical corporate climber: The book begins in Italy, where Abramov has taken a literary vacation of sorts – he’s left an America he no longer loves to be closer to a world that he does – a dying world of art, literature, and slower living. But Abramov’s duty to his parents and his need for money drive him back to America, where most of the action occurs.

It turns out the future hasn’t been very kind to America. Just about every possible concern one might have about our nation’s decline has played out – the economy is in a death spiral, the Chinese pretty much control our institutions, large corporations control what the Chinese don’t, books and intelligent discourse have disappeared, shallowness and rough sex are glorified, and the Constitution has pretty much been suspended. Oh, and while the book doesn’t exactly put it this way, Facebook and Apple have won – everyone is addicted to their devices, and to the social reflections they project.

It doesn’t take long for a reader to realize Super Sad True Love Story: A Novel is also a work of science fiction, but somehow, that construct doesn’t get in the way. In fact, it’s rather fascinating to watch an accomplished literary novelist tackle “the future,” and do a pretty damn good job of it. I’m no science fiction expert, but Shteyngart projects our present-day obsessions with devices, data, social networking, and the like into a dystopia that feels uncomfortably possible. Everyone is judged by their credit scores, their youthful appearance, and their ability to gather attention from denizens of an always-on, always-connected datasphere (those who are particularly good at getting attention are dubbed “very Media!”). Shteyngart is clearly working fields well sown by Dick, Gibson, Stephenson, Doctorow, and many others, but it works for me anyway.

The story is indeed a love story – an improbable and poignant one at that – between Lenny, a middle-aged man beset by insecurities, and a young Korean woman caught between familial duty and the pointless, consumer-driven world of shopping and social networking. The narrative is driven by America’s collapse into a security state, and I won’t give away any more of the plot than that. I’ll leave it here: By the end of this often hilarious novel, you will feel super sad, and you may also come to question the path we are on as it relates to data. I know that’s a pretty odd thing to say about a love story, but data, in fact, plays a central role in the novel’s meaning.  Here are a few of the passages I highlighted:

“Shards of data all around us, useless rankings, useless streams, useless communiqués from a world that was no longer to a world that would never be.”

“I’m learning to worship my new äppärät’s screen, the colorful pulsating mosaic of it, the fact that it knows every last stinking detail about the world, whereas my books only know the minds of their authors.”

“Streams of data were now fighting for time and space around us.”

“And all these emotions, all these yearnings, all these data, if that helps to clinch the enormity of what I’m talking about, would be gone.”

“I wanted to be in a place with less data, less youth, and where old people like myself were not despised simply for being old, where an older man, for example, could be considered beautiful.”

That last passage is from near the end of the book, when the fate of our protagonist has been resolved – I won’t tell you how, in case you haven’t read the book. And if that’s the case… I certainly recommend that you do.

—–

Other works I’ve reviewed:

The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century’s On-line Pioneers by Tom Standage (review)

Year Zero: A Novel by Rob Reid (review)

Lightning Man: The Accursed Life of Samuel F. B. Morse by Kenneth Silverman (review)

Code: And Other Laws of Cyberspace, Version 2.0 by Larry Lessig (review)

You Are Not a Gadget: A Manifesto (Vintage) by Jaron Lanier (review)

WikiLeaks and the Age of Transparency by Micah Sifry (review)

Republic, Lost: How Money Corrupts Congress–and a Plan to Stop It by Larry Lessig (review)

Where Good Ideas Come From: A Natural History of Innovation by Steven Johnson (review)

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (review)

The Corporation (film – review).

What Technology Wants by Kevin Kelly (review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (review)

The Information: A History, a Theory, a Flood by James Gleick (review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (review)

 

On Data

By - October 11, 2012

Here’s a glimpse of some of the thinking I’ve been doing about the impact of “data” on our culture. I am close (so damn close) to sealing myself off to think about nothing but this, for my book (OpenCoSF is my last big project till I do). But thanks to the Vibrant Data project for taking an interview I did at TED earlier this year and making it into something that almost makes me look like I have my shit together. I attest, I do not. I hope soon, I will.

Data Wildcatters on the Wild Swiss Range

By - October 02, 2012

You want to put your sensor *where*?!!!!!

(image shutterstock) I’ve been watching the news for tidbits which illuminate a thesis I’ve been working up for my book. Today the New York Times provided a doozy: Swiss Cows Send Texts to Announce They’re in Heat. As James Gleick, author of The Information, noted in a Twitter response to me: That’s one heckuva headline.

So what’s my thesis? It starts with one of the key takeaways from Gleick’s book, which is that we are, as individuals and a society, becoming information. That might seem a rather puzzling statement, because one could argue that we’ve always been information; it’s only recently that we’ve begun to realize that fact. So perhaps a better way of putting it is that we’re exploring the previously unmapped world of information. In the 1400s, the physical world was out there, much as it is today (perhaps it had a few more glaciers…). But we hadn’t discovered it, at least, not in any unified fashion. Now that we’ve discovered, named, and declared the outlines of most of the physical world, we are rapidly moving into a new era, one where we are coloring in the most interesting bits of information in our world with what we now call “data.”

As we survey, chart, and claim this new territory, a truth is emerging: when we discover some set of information might be valuable, we turn that information into data. Information is a slippery concept – one that gives Gleick “the willies.” But data? That’s information we can manipulate.

So here’s my thesis:  We create new data wherever we can find value. Put another way: If it’s valuable to know, new data will flow. Not to everyone, of course – as with oil, control of data is power. But the world is hell-bent on finding new data resources that unleash value. We’ve got wildcatters, we’ve got Exxon/Mobiles (think Facebook, Google, Amazon, the NSA, etc.), we’ve got pipes. And we’ve got incredible stories of the things folks will do to unlock the value of data.

Which takes us back to the cows of Switzerland. As the Times’ piece explains, a Swiss research team has created a system, comprised of implanted sensors and radio beacons, that measures a cow’s movement and internal body temperature. It converts these measurements into data, runs the data through an algorithm, and when the resulting computation indicates the cows are in heat, it sends a text message to the rancher. The net result: The rancher has a better chance of getting that cow pregnant (er, that didn’t quite come out right – but you know what I mean).
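The Times piece doesn’t publish the team’s algorithm, but the pipeline it describes – sensor readings in, a computation over them, a text message out – is simple enough to sketch. Here’s a minimal, hypothetical Python version; the thresholds, field names, and send_sms helper are all my own stand-ins, not anything from the actual system:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    body_temp_c: float   # internal body temperature from the implanted sensor
    activity: float      # movement index from the motion sensor (arbitrary units)

def likely_in_heat(readings, temp_threshold=39.0, activity_threshold=1.5):
    """Hypothetical heuristic: cows in heat tend to move more and run warmer.
    The real system surely combines more signals; these thresholds are made up."""
    if not readings:
        return False
    return (mean(r.body_temp_c for r in readings) > temp_threshold and
            mean(r.activity for r in readings) > activity_threshold)

def notify_rancher(cow_id, readings, send_sms):
    # send_sms stands in for whatever SMS gateway the real system uses
    if likely_in_heat(readings):
        send_sms(f"Cow {cow_id} appears to be in heat - insemination window open.")

# Example usage with a fake SMS sender:
if __name__ == "__main__":
    notify_rancher("CH-1042",
                   [Reading(39.4, 2.1), Reading(39.2, 1.8)],
                   send_sms=lambda msg: print("SMS:", msg))
```

The point isn’t the code – it’s that a few sensors plus a threshold turn something previously unknowable into data valuable enough to text someone about.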

Net net: a  pregnant cow is a more valuable cow. And to get a cow pregnant more reliably, one needs the data. Previously, that data was buried in a bovine’s unexplored nether regions (literally – the sensor is placed in the cow’s genitals). But given the value that data carries, these Swiss data wildcatters have tapped a new gusher. This data exploration is now happening over and over, in nearly every imaginable corner of our world. We’ve just tapped the tip of this data iceberg, of course; we’re just stepping onto the shores of the New World. We’d be wise to remember that as we move forward.

Tweets Belong To The User….And Words Are Complicated

By - September 06, 2012

(image GigaOm) Like many of you, I’ve been fascinated by the ongoing drama around Twitter over the past few months (and I’ve commented on part of it here, if you missed it). But to me, one of the most interesting aspects of Twitter’s evolution has gone mostly unnoticed: its ongoing legal battle with a Manhattan court over the legal status of tweets posted by an Occupy Wall St. protestor.

In this case, the State of New York is arguing that a tweet, once uttered, becomes essentially a public statement, stripped of any protections. The judge in the case concurs: In this Wired coverage, for example, he is quoted as writing “If you post a tweet, just like if you scream it out the window, there is no reasonable expectation of privacy.”

Twitter disagrees, based on its own Terms of Service, which state “what’s yours is yours – you own your Content.”

As the NYT puts it:

Twitter informed the (Occupy protestor) that the judge had ruled his words no longer belonged to him: (he) had turned them over to Twitter, in other words, to be spread across the world.

(Twitter’s) legal team appealed on Monday of last week. Tweets belong to the user, the company argued.

I find this line of argument compelling. Twitter is arguing that its users do not “turn over” their words to Twitter; instead, they license their utterances to the service but retain ownership – those rights remain with the person who tweets. It’s a classic digital argument – sure, my words are out there on Twitter, but those are a licensed copy of my words. The words – the ineffable words – are still *mine.* I still have rights to them! One of those rights may well be privacy (interesting given Twitter’s public nature, but arguable), and I can imagine this builds a case for other ownership rights as well, such as the right to repurpose those words in other contexts.

If that is indeed the case, I can imagine a time in the not-too-distant future when people may want to extract some or all of their tweets, and perhaps license them to others as well. Or they may want to use a meta-service (there’s that idea again) that allows them to mix and mash their tweets in various ways, and into any number of different containers. Imagine for a minute that one of those meta-services gets Very Big and challenges Twitter on its own turf. Should that occur, well, the arguments made in this Manhattan case may well come into very sharp focus. And it’s just those kinds of services that are nervous about where Twitter is going.
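To make the “extract your tweets” idea concrete, here’s a rough sketch of the first step such a meta-service might take – pulling a user’s own timeline down so it can be archived or licensed elsewhere. It assumes Twitter’s REST API as it stood around the time of this post (the v1.1 user_timeline endpoint), the OAuth credentials are placeholders, and none of this is a product – just an illustration:

```python
import requests
from requests_oauthlib import OAuth1

# Placeholder credentials - you'd substitute your own app and access tokens.
auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")

def fetch_my_tweets(screen_name, count=200):
    """Pull a user's recent tweets via the (era-appropriate) REST API 1.1
    user_timeline endpoint, so the words can live somewhere else too."""
    resp = requests.get(
        "https://api.twitter.com/1.1/statuses/user_timeline.json",
        params={"screen_name": screen_name, "count": count},
        auth=auth,
    )
    resp.raise_for_status()
    return [tweet["text"] for tweet in resp.json()]

# e.g. tweets = fetch_my_tweets("johnbattelle")
```

Whether you’re allowed to re-license what comes back is, of course, exactly the question the Manhattan case raises.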

Just noodling it out. I may be missing some key legal concept here, but this strikes me as a potentially important precedent. I plan to speak with folks at Twitter about all this soon, and hopefully, I’ll have some clarity. Stay tuned.

The Victorian Internet – The Technology That Started It All

By - September 01, 2012

I’m at least three books behind in my reviews, so I figured I’d bang out a fun one today: The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century’s On-line Pioneers by Tom Standage. This 1998 book is now a classic – written as the Web was exploding on the scene, it reminded us that this movie has run before, 150 years in the past, with the rise of the telegraph. He writes:

The rise and fall of the telegraph is a tale of scientific discovery, technological cunning, personal rivalry, and cutthroat competition. It is also a parable about how we react to new technologies: For some people, they tap a deep vein of optimism, while others find in them new ways to commit crime, initiate romance, or make a fast buck – age-old human tendencies that are all too often blamed on the technologies themselves.

Standage chronicles the history of the telegraph’s many inventors (Morse was just the most famous “father” of the device), and the passions it stirred across the world. Nowhere, however, did the invention stir more excitement (or bad poetry) than in the United States, where it can be convincingly argued that the telegraph’s ability to conquer distance and time almost perfectly matched the young country’s need to marshal its vast geography and resources. Were it not for the telegraph, the United States might never have become a world power.

Expansion was fastest in the United States, where the only working line at the beginning of 1846 was Morse’s experimental line, which ran 40 miles between Washington and Baltimore. Two years later there were approximately 2,000 miles of wire, and by 1850 there were over 12,000 miles operated by twenty different companies. The telegraph industry even merited twelve pages to itself in the 1852 U.S. Census. “The telegraph system [in the United States] is carried to a greater extent than in any other part of the world,” wrote the superintendent of the Census, “and numerous lines are now in full operation for a net-work over the length and breadth of the land.” Eleven separate lines radiated out from New York, where it was not uncommon for some bankers to send and receive six or ten messages each day. Some companies were spending as much as $1,000 a year on telegraphy. By this stage there were over 23,000 miles of line in the United States, with another 10,000 under construction; in the six years between 1846 and 1852 the network had grown 600-fold.

Standage writes with the amused eye of a British citizen – he currently works for the Economist as digital editor. One can sense a bit of English envy as he tells the telegraph’s tale – just as with television, the telegraph had early roots in his native country, but found its full expression in the United States. Thomas Edison started his career as a “telegraph man,” Alexander Graham Bell was inspired by the invention, the Associated Press grew out of the telegraph’s impact on newspapers, “e-commerce” was invented across the device’s wires, and huge corporations were born from its industries – Cable & Wireless, for example, began as a company that sourced insulation for telegraph lines.

The Victorian Internet is a must-read for anyone interested in the history of technology, and in the cycles of hype, boom, and bust that seem only to quicken with each new wave of innovation. Highly recommended.

Other works I’ve reviewed:

Year Zero: A Novel by Rob Reid (review)

Lightning Man: The Accursed Life of Samuel F. B. Morse by Kenneth Silverman (review)

Code: And Other Laws of Cyberspace, Version 2.0 by Larry Lessig (review)

You Are Not a Gadget: A Manifesto (Vintage) by Jaron Lanier (review)

WikiLeaks and the Age of Transparency by Micah Sifry (review)

Republic, Lost: How Money Corrupts Congress–and a Plan to Stop It by Larry Lessig (review)

Where Good Ideas Come From: A Natural History of Innovation by Steven Johnson (my review)

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (my review)

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

 

 

What Is Search Now? Disjoined.

By - August 31, 2012

(image shutterstock)

Today I answered a question in email for a reporter who works for Wired UK. He asked smart questions, as I would expect from a Wired writer. (Some day I’ll tell you all my personal story of Wired UK – I lived over there for the better part of a year back in 1997, trying to make that magazine work. I mostly failed – but it’s up and running strong now.)

In any case, one question in particular struck me. The writer is preparing a piece on the future of search. (I’ll link to it when it comes out). What big problems, he asked, still plague search?

That got me thinking. Here’s my answer:

The largest issue with search is that we learned about it when the web was young, when the universe was “complete” – the entire web was searchable! Now our digital lives are utterly fractured – in apps, in walled gardens like Facebook, across clunky interfaces like those in automobiles or Comcast cable boxes. Re-uniting our digital lives into one platform that is “searchable” is to me the largest problem we face today. 

It may be worth expanding on that sentiment. When it broke out in the mid-1990s, the web was society’s first at-scale digital artifact. It spread by orders of magnitude – first thousands, then millions, then hundreds of millions of pages – and on it went, to the billions it now encompasses. Everybody wanted to “be” on the web – a creator class started making pages and companies and services, a consumer class started “surfing” this vast new digital object, and our collective consciousness marveled at what we had created together: millions of small pieces loosely joined. And the key, and largely unappreciated, point is this: those pieces were indeed joined.

It was that joining – through links, of course – that made search possible, that created what is unquestionably the most powerful and lasting new company of the past 20 years – Google.* But as I wrote in Why Hath Google Forsaken Us? A Meditation, Google’s core model – built on the open, linked world of the web – is under threat from the advance of the iPhone and the app, the Facebook and the Path, the automobile console, the Xbox, the cable box, and countless other “unlinked” digital artifacts.

Google knows this. Why else invest so much in Android, in Google+, in Motorola (it’s not just phones, it’s also cable boxes), in self-driving cars, for goodness sake? Google wants a foothold wherever digital information is created and shared, and man, are we creating a sh*t ton of it. Problem is, we’re not making it easy – or even possible – to link all this stuff together, should we care to.

Which takes me back to that core question the Wired reporter asked me: What’s the biggest problem plaguing search? In short, it’s that our digital world is no longer small pieces loosely joined. It’s also big chunks separate and apart. And that makes search – in its broadest interpretation – damn near impossible.

Which leads to another question: What then, is search? Of course, the Wired reporter asked me that as well. My answer:

Search is now more than a web destination and a few words plugged into a box. Search is a mode, a method of interaction with the physical and virtual worlds. What is Siri but search? What are apps like Yelp or Foursquare, but structured search machines? Search has become embedded into everything, and has reached well beyond its web-based roots.

So we all search now, all the time, across all manner of artifacts, large and small. But our searches are not federated – we can’t search across these repositories, as we could across that wonderful, vast, loosely joined early world of the web. We’ve lost the connection.
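For the non-engineers: “federated” search simply means one query fanned out across many repositories, with the results merged. The toy Python sketch below shows the mechanics; the “silos” are invented stand-ins, and the hard part in the real world isn’t this loop – it’s that most silos offer no open way to query them at all:

```python
from concurrent.futures import ThreadPoolExecutor

# Each "silo" stands in for a closed repository (an app, a social network,
# a cable-box guide) that happens to expose some way to query it.
def search_silo(silo, query):
    name, documents = silo
    hits = [doc for doc in documents if query.lower() in doc.lower()]
    return [(name, hit) for hit in hits]

def federated_search(query, silos):
    """Fan the query out to every silo in parallel and merge the results."""
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(lambda s: search_silo(s, query), silos)
    return [hit for results in result_lists for hit in results]

if __name__ == "__main__":
    silos = [
        ("web",    ["Review of the Victorian Internet", "Telegraph history"]),
        ("social", ["Friend posted: reading Super Sad True Love Story"]),
        ("car",    ["Navigation favorite: Mt. Tamalpais trailhead"]),
    ]
    print(federated_search("telegraph", silos))
```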

Call me a fool, but I think the need for that connection will be so strong that, in time, we’ll sew all our digital artifacts back together again. At least, I certainly hope we will. Right now, it ain’t looking so likely – what with patent wars, wagon-circling by big platforms, and the like. But I’m an optimist – and I hope you are as well.

* Sorry, but Facebook isn’t there – yet. And Microsoft and Apple, well, their claims to that crown lie either 20 years in the past or 20 years hence; if you ask me for the most important company ever to launch as a native web business, the answer is indisputably Google.

 

 

 

The Future Is Cloudy

By - August 29, 2012

I’ve spent a lot of time thinking about data recently. It’s not just reading books like The Information or Mirror Worlds (or Super Sad True Love Story, a science fiction novel that is both compelling and scary), it’s my day-to-day work, both at FM (where we deal with literally 25 billion ad calls and associated data a month) and in reporting the book (I’ve been to MIT, Yale, Amazon, Microsoft, Facebook, Google, and many other places, and the one big theme everyone is talking about is data…).

We are, as a society and as individuals, in the process of becoming data, of describing and detailing and burnishing our dataselves. And yet, we haven’t really joined the conversation about what this all means, in the main because it’s so damn abstract. We talk about privacy and fear of Big Brother, or of big corporations. We talk about Facebook and whether we’re sharing too much. But we aren’t really talking – in any scaled, framed way – about what it means to be humans connected in a shared society, to be in relationships, to be citizens and consumers and lovers and haters….

There are so many wonderful micro conversations going on about this topic, spread out all over the place. I’m hoping that when my book appears, it might be a small step in joining some of these conversations into a larger framework. That’s the dream anyway.

Meanwhile, this report caught my eye (hat tip to the ever interesting newsletter guru Dave Pell and his NextDraft): Can’t Define “The Cloud”? Who Cares? It quotes a study that found:

….most people have no idea what the cloud is, have pretended to know what it means on first dates, and yet effectively all respondents are active cloud computing users.

It continues:

And that’s the way this stuff should work.

I get the point, but in a sense, I utterly disagree. If we as a society do not understand “the cloud” in all its aspects – what data it holds, how it works, what bargains we make as we engage with it – we’ll all be the poorer for it, I believe. (For one aspect of this, see my post on the Cloud Commit Conundrum.) More on this as the fall approaches, and I settle into a regular habit of writing out loud for the book.

 

Here We Go Again: The Gray Market in Twitter and Facebook

By - August 07, 2012

So, casually reading through this Fast Company story about sexy female Twitter bots, I come across this astounding, unsubstantiated claim:

My goal was to draw a straight line from a Twitter bot to the real, live person whose face the bot had stolen. In the daily bot wars–the one Twitter fights every day, causing constant fluctuations in follower counts even as brands’ followers remain up to 48% bot–these women are the most visible and yet least acknowledged victims…

There it was, tossed in casually, almost as if it was a simple cost of doing business – nearly half of the followers of major brands could well be “bots.”

The article focuses on finding a pretty woman whose image had been hijacked, sure, but what I found most interesting (but sadly unsurprising) was how it pointed to a site that promises a thousand followers to anyone who pays…wait for it…about $17. Yes, the site is real. And no, you shouldn’t be surprised, in the least, that such services exist.

It has always been so.

Back when I was reporting for The Search, I explored the gray market that had sprung up around Google (and still flourishes, despite Google’s disputed attempts to beat it back). Fact is, wherever there is money to be made, and ignorance or desperation exists in some measure, shysters will flourish. And a further fact is this: Marketers, faced with CMO-level directives to “increase my follower/friend counts,” will turn to the gray market. Just as they did back in the early 2000s, when the directive was “make me rank higher in search.”

Earlier this week I got an email from a fellow who has been using Facebook to market his products. He was utterly convinced that nearly all the clicks he’d received on his ads were fake – bots, he thought, programmed to make his campaigns look as if they were performing well. He was further convinced that Facebook was running a scam – running bot networks to drive performance metrics. I reminded him that Facebook was a public company run by people I believed were well-intentioned, intelligent people who knew that such behavior, if discovered, would ruin both their reputations and that of the company.

Instead, I suggested, he might look to third parties he might be working with – or, hell, he might just be the victim of a drive-by shooting – poorly coded bots that just click on ad campaigns, regardless of whose they might be.

In short, I very much doubt that either Facebook or Twitter is actively driving fraudulent behavior on its network. In fact, both have legions of folks devoted to foiling such efforts. Yet there is absolutely no doubt that an entire, vibrant ecosystem is very much engaged in gaming these services. And just as Google did at the dawn of search marketing, Twitter and Facebook have a very – er – complicated relationship with these fraudsters. On the one hand, the gray hats are undermining the true value of these social networks. But on the other, well, they seem to help important customers hit their Key Performance Indicators, driving very real money into company coffers, either directly or indirectly.

I distinctly recall a conversation with a top Google official in 2005, who – off the record – defended AdSense-splattered domain-squatters as “providing a service to folks who typed the wrong thing into the address bar.” Uh huh.

As long as marketers are obsessed with hollow metrics like follower counts, Likes, and unengaged “plays,” this ecosystem will thrive.

What truly matters, of course, is engagement that can be measured beyond the actions of bots. It is coming. But not before millions of dollars are siphoned off by the opportunists who have always lived on the Internet’s gray edge.

Who’s On First? (A Modest Proposal To Solve The Problem with First- and Third-Party Marketing)

By - July 26, 2012

Early last month I wrote a piece entitled Do Not Track Is An Opportunity, Not a Threat. In it I covered Microsoft’s controversial decision to incorporate a presumptive “opt out of tracking” flag in the next release of its browser, which many in the ad industry see as a major blow to the future of our business.

In the piece, I argued that Microsoft’s move may well force independent publishers (you know, like Searchblog, as well as larger sites like CNN or the New York Times) to engage in a years-overdue dialog with their readers about the value exchange between publisher, reader, and marketer. I laid out a scenario and proposed some language to kick that dialog off, but I gave short shrift to a problematic and critical framing concept. In this post, I hope to lay that concept out and offer, by way of example, a way forward. (Caveat: I am not an expert in policy or tech. I’ll probably get some things wrong, and hope readers will correct me if and when I do.)

The “concept” has to do with the idea of a first-party relationship – a difficult-to-define phrase that, for purposes of this post, means the direct relationship a publisher or a service has with its consumer. This matters, a lot, because in the FTC’s recently released privacy framework, “first-party marketing” has been excluded from proposed future regulation around digital privacy and the use of data. However, “third-party” marketing, the framework suggests, will be subject to regulation that could require “consumer choice.”

OK, so in that last sentence alone are three terms, which I’ve put in quotes, that need definition if we are going to understand some pretty important issues. The most important is “first-party marketing,” and it’s damn hard to find a definition of that in the FTC document. But if you go back to the FTC’s *preliminary* report, issued in December of 2010, you can find this:

First-party marketing: Online retailers recommend products and services based upon consumers’ prior purchases on the website.

Later in the report, the term is further defined:

Staff proposes that first-party marketing include only the collection of data from a consumer with whom the company interacts directly for purposes of marketing to that consumer.

And in a footnote:

Staff also believes that online contextual advertising should fall within the “commonly accepted practices” category (Ed. note: Treated as OK, like first party marketing). Contextual advertising involves the delivery of advertisements based upon a consumer’s current visit to a web page or a single search query, without the collection and retention of data about the consumer’s online activities over time. As staff concluded in its 2009 online behavioral advertising report, contextual advertising is more transparent to consumers and presents minimal privacy intrusion as compared to other forms of online advertising. See OBA Report, supra note 37, at 26-27 (where a consumer has a direct interface with a particular company, the consumer is likely to understand, and to be in a position to control, the company’s practice of collecting and using the consumer’s data).

The key issue here for publishers, as far as I can tell, is this: “the delivery of advertisements based upon a consumer’s current visit to a web page or a single search query, without the collection and retention of data about the consumer’s online activities over time…where a consumer has a direct interface with a particular company, the consumer is likely to understand, and to be in a position to control, the company’s practice of collecting and using the consumer’s data.”

Whew. OK. We’re getting somewhere. Now, when that 2010 report came out, many in our industry freaked out, because of the next sentence, one which refers to – wait for it – third party marketing:

If a company shares data with a third party other than a service provider acting on the company’s behalf – including a business affiliate unless the affiliate relationship is clear to consumers through common branding or similar means – the company’s practices would not be considered first-party marketing and thus they would fall outside of “commonly accepted practices” … Similarly, if a website publisher allows a third party, other than a service provider, to collect data about consumers visiting the site, the practice would not be “commonly accepted.”

Now, this was a preliminary report, and the final report, which as I said earlier came out this past Spring, incorporates a lot of input from companies engaged in what the FTC described as “third party” marketing – companies like Google that were very concerned that the FTC was about to wipe out entire swathes of their business. And the fact is, it’s still not clear what’s going to be OK, and what isn’t. For now, my best summary is this: it’s OK for websites that have a “first party” relationship to use data collected on the site to market to consumers. If, however, those sites want to let “third parties” market to consumers, then, at some point soon, they need to figure out a way to give “consumers a choice” to opt out. If they don’t, they may be subject to regulation down the road.

Which brings us back to “Do Not Track,” or DNT. Now, DNT has been held up as the easiest way to give consumers a choice about this issue – if a consumer has DNT enabled in their browser, then that consumer has very clearly made a choice: they don’t want third-party advertisements or data collection, thank you very much. See how easy that was?

Wrong, wrong, wrong!!! As implemented by Microsoft in IE 10, DNT is an extremely blunt instrument, one that, in fact, does *not* constitute a choice. It’s defaulted to “on,” which means that a consumer is not ever given a choice one way or the other. And once it’s on, it’s the same for every single site – which means you can’t say that you’re fine with third-party ads on a site you love (say, Searchblog, naturally), but not fine with a site you don’t like so much (say, I dunno,  You Got Rick Rolled).
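For context on just how blunt the instrument is: DNT is a single HTTP header the browser attaches to every request, which a publisher’s server can read. Here’s a minimal sketch (Flask is purely my illustrative choice, not anything the spec requires) – note that the server has no way to distinguish a deliberate choice from a factory default:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # The browser's preference arrives as a single header: "DNT: 1".
    dnt = request.headers.get("DNT")
    if dnt == "1":
        # The server can't tell whether the reader chose this or the browser
        # shipped with it switched on - which is the whole problem.
        return "Serving contextual ads only (DNT header present)."
    return "Serving interest-based ads (no DNT header)."

if __name__ == "__main__":
    app.run()
```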

That’s pretty lame. Shouldn’t we, as consumers, be able to choose which sites we trust, and which we don’t? That’s pretty much the point of my post on DNT last month.

Fact is, we don’t really have a way to demonstrate that trust. Many in the industry – including the IAB, where I am a board member – are working to clarify all this with the FTC. The working assumption is that it’s far too much to ask of most publishing sites to give consumers a choice, much less give them access to the data used to “target” them.

Well, I’m not so sure about that.

Check out this screen shot from independent site GigaOm (yes, FM works with GigaOm):

A few other sites are starting to display similar notices – and I applaud them (this is already becoming standard practice in the UK, due to strict regulations around cookies). GigaOm is saying, in essence, that by simply continuing to read the site, you agree to its privacy policy. Now, take a look at what GigaOm’s policy has to say about “third party advertising”:

GigaOM may allow third party advertising serving companies, including ad networks (“Advertisers”), to display advertisements or provide other advertising services on GigaOM. These third party Advertisers may use techniques other than HTTP cookies to recognize your computer or device and/or to collect and record demographic and other Information about you, including your activities on or off GigaOM. These techniques may be used directly on GigaOM….Advertisers may use the information collected to display advertisements that are tailored to your interests or background and/or associate such information with your subsequent visits, purchases or other activities on other websites. Advertisers may also share this information with their clients or other third parties.

GigaOM has no responsibility for the technologies, tools or practices of the Advertisers that provide advertising and related services on GigaOM. This Privacy Policy does not cover any collection, use or disclosure of Information by Advertisers who provide advertising and related services on GigaOM. These Advertisers’ information gathering practices, including their policies regarding the use of cookies and other tracking technologies, are governed entirely by their own privacy policies, not this one.

To summarize: By reading GigaOm, you’ve made a choice, and that choice is to let GigaOm use third-party advertising. It’s a nifty move, and one I applaud: GigaOm has just established you as a first party to its content and services just like….

….Facebook, which just announced revenue of more than a billion dollars last quarter. Facebook, of course, has a first-party relationship with 955 million or so of us – we’ve already “opted in” to its service, through the Terms of Service we’ve all agreed to (and probably not read.) We’ve made a choice as consumers, and we’ve chosen to be marketed to on Facebook’s terms.

The same is true of Apple, Amazon, eBay, Yahoo, and any number of other large services which require registration and acceptance of Terms of Service in order for us to gain any value from their platforms. Google and Microsoft have been frantically catching up, getting as many of us as they can to register our identity and agree to a unified TOS in some way.

But what about independent publishers? You know, the rest of the web? Well, save folks like GigaOm (and AllThingsD, which warns its audience about cookies), we’ve never really paid attention to this issue. In the past, publishers have avoided doing anything that might get in the way of an audience consuming their content – it’s a death sentence if you’re engaged in the high holy art of Increasing Page Views. And bigger publishers like Time or Conde Nast don’t want to rock the boat, they’ll wait till a consensus forms, and then follow it.

But I like what GigaOm has done. It’s a very clear notice, it goes away after the first visit, and it reappears only if you’ve cleared your cookies (which happens a lot if you run an anti-virus program).
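Mechanically, a notice like GigaOm’s is little more than a cookie check: if no “seen the notice” cookie is present, show the banner and set one; clear your cookies and it comes back. A rough sketch of that logic, with the cookie name, wording, and one-year lifetime all my own placeholders:

```python
from flask import Flask, request, make_response

app = Flask(__name__)
NOTICE_COOKIE = "privacy_notice_seen"  # hypothetical cookie name

@app.route("/")
def article():
    page = "<article>Your post content here...</article>"
    if request.cookies.get(NOTICE_COOKIE):
        return page  # returning visitor: no banner
    # First visit (or cookies cleared): show the notice and remember it.
    banner = ("<div class='notice'>By continuing to use this site you accept "
              "our privacy policy, including third-party advertising.</div>")
    resp = make_response(banner + page)
    resp.set_cookie(NOTICE_COOKIE, "1", max_age=60 * 60 * 24 * 365)
    return resp

if __name__ == "__main__":
    app.run()
```

The design choice worth noting is that the notice is unobtrusive by default and persistent by consent – which is exactly the kind of first-party signal independent publishers could standardize on.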

I think it’s time the “rest of the web” followed GigaOm’s lead. We rely on third-party advertising services (like FM) to power our sites. We live in uncertain times when it comes to regulation. And certainly we have direct relationships of trust with our audiences – or you wouldn’t be reading this far down the page. It’s time the independent web declared the value of our first-party relationships with audiences, and showed the government – and our readers – that we have nothing to hide.

I plan to look into ways we might make easily available the code and language necessary to enact these policies. I’ll be back with more as I have it….

*Now, the other two terms bear some definition as well. I think it’s fair to say “consumer choice” means “give the consumer the ability to decide if they want their data used, and for what purposes,” and “third party marketing” means the use of data and display of commercial messages on a first party site by third-party companies – companies that are not the owner of the site or service you are using.