Policy Archives - John Battelle's Search Blog

Facebook Is Now Making Its Own Weather

By - November 09, 2012

The past month or so has seen the rise and fall of an interesting Internet tempest – the kind of story that gets widely picked up, then quickly amplified into storms of anger, then eventually dies down as the folks who care enough to dig into the facts figure out that the truth is somewhere outside the lines of the original headline-grabbing story.

The topic this time around centers on Facebook’s native ad unit called “Sponsored Stories,” and allegations that the company is gaming its “Edgerank” algorithm such that folks once accustomed to free promotion of their work on Facebook must now pay for that distribution.

Edgerank determines the posts you see in your Facebook newsfeed, and many sites noticed that sometime early this Fall, their traffic from Facebook shrank dramatically. Others claimed traffic had been declining since the Spring, but it wasn’t until this Fall that the story gained significant traction.

I’ve been watching all this play out – first via an angry post on the New York Observer site in which the author posits that Facebook is “broken on purpose” so as to harvest Sponsored Story revenue. An even angrier post on the same theme came five weeks later on a site called Dangerous Minds. From it:

Spring of 2012 was when bloggers, non-profits, indie bands, George Takei, community theaters, photographers, caterers, artists, mega-churches, high schools, tee-shirt vendors, campus coffee shops, art galleries, museums, charities, food trucks, and a near infinite variety of organizations; individuals from all walks of life; and businesses, both large and small, began to detect—for it was almost imperceptible at first—that the volume was getting turned down on their Facebook reach. Each post was now being seen only by a fraction of their total “fans” who would previously have seen them.

The author goes on to argue that Facebook was breaking the implicit contract between himself – an independent blogger – and Facebook, the corporation.

…as a publisher of a medium readership blog, I used to get a great deal from using Facebook—but I understood it to be a two-way reciprocal arrangement because I was driving traffic back to Facebook as well, and reinforcing their brand awareness with prominent widgets on our blog.

Now, if you’ve read my Thneeds post, you know I’m sympathetic to this point of view. I believe large social platforms like Facebook and Twitter “harvest” content from the Independent Web, and leverage the traffic and engagement that this content creates on their platforms to their own benefit via scaled advertising offerings. Most of us are fine with the deal – we promote our work on social sites, social sites drive traffic back to us. We like that traffic, either just because we like more folks reading our work, or, in the case of commercial sites like this one, because we serve ads against it.

Now, as I’ve noted many times over the past six months, this bargain is breaking down, because it’s getting harder and harder to monetize traffic using standard display advertising units. That’s not Facebook’s problem, per se, it’s ours. (See here for my suggestions as to how to solve it).

Nevertheless, for many sites, the spectre of losing significant traffic from Facebook means a serious blow to revenues. And from the point of view of the Dangerous Minds blogger, Facebook first cut his traffic off, then began asking him to pay to get it back (in the form of promoting his posts via Sponsored Stories).

This makes for a very good narrative: corporate greed laid bare. It got picked up by a lot of sites, including Ars Technica and even the aforementioned George Takei, who is upset that he’s lost the ability to push his posts to all 2.9 million of his Facebook fans.

Turns out, the truth is a lot more complicated. I’ve done some reporting on this issue, but not nearly as much as TechCrunch did. In a follow-up to the Dangerous Minds story, TechCrunch claimed to have debunked the entire story. Titled Killing Rumors With Facts: No, Facebook Didn’t Decrease Page Feed Reach To Sell More Promoted Posts, the story argues that Facebook didn’t change its algorithms to drive up revenue, but rather to cull “spammy posts” from folks’ newsfeeds.

Facebook has always shown just a percentage of all possible posts in a given person’s newsfeed. Anyone paying attention already knew that. The company uses its Edgerank algorithm to determine what it thinks might be interesting to an individual, and I can confirm, through sources who wish to remain anonymous, that sometime in the past few months Facebook made a pretty significant change to Edgerank that penalized posts it felt were not high quality.

Of course, that raises the question: How does Facebook determine what “quality” is? The answer, in the main, is by measuring engagement – is the post shared, liked, clicked on, etc.? If so, then it is seen as quality. If not, it’s demoted in value.
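
Facebook has never published Edgerank’s internals, but the shape commonly reported – an affinity score times an interaction weight times a time decay, summed over a post’s interactions – is easy to sketch. Here’s a toy version; every weight, category, and constant below is invented purely for illustration:

```typescript
// Toy sketch of an EdgeRank-style newsfeed score. Facebook has never
// published the real algorithm; the affinity x weight x time-decay shape
// below is the commonly reported description, and every number here is
// invented for illustration.

type Edge = {
  kind: "like" | "comment" | "share" | "click";
  ageHours: number; // how long ago the interaction happened
};

type Post = {
  authorAffinity: number; // 0..1: how often the viewer interacts with this author
  edges: Edge[];          // engagement the post has accumulated so far
};

// Heavier interactions count more...
const EDGE_WEIGHT: Record<Edge["kind"], number> = {
  share: 4,
  comment: 3,
  click: 2,
  like: 1,
};

function edgeRankScore(post: Post): number {
  return post.edges.reduce((sum, edge) => {
    const decay = 1 / (1 + edge.ageHours / 24); // ...and older ones count less
    return sum + post.authorAffinity * EDGE_WEIGHT[edge.kind] * decay;
  }, 0);
}

// A newsfeed is then just the candidate posts, sorted by score, truncated.
function buildFeed(candidates: Post[], limit = 20): Post[] {
  return [...candidates]
    .sort((a, b) => edgeRankScore(b) - edgeRankScore(a))
    .slice(0, limit);
}
```

In a model like this, a post that stops earning comments, shares, and clicks decays toward zero and quietly drops out of newsfeeds – which is exactly what the afflicted publishers experienced.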

Is this sounding familiar to anyone yet? In short, Facebook just executed a Panda.

I held back from writing anything till this predictable cycle played out, because I had a theory, one that I believe is now confirmed: Facebook is now making its own weather, just like Google, and in the past couple months, we’ve witnessed the first widespread instance of a Facebook weather event.

For those of you who don’t know quite what I’m talking about, a bit of history. Ten or so years ago, the ecosystem around search began to notice shifts in how Google drove traffic around the web. Google would make a change to its algorithms, and all of a sudden some sites would see their traffic plummet (other sites sometimes saw the opposite occur). It seemed to those injured that the only way to get their Google traffic back was to buy Google AdWords – corporate greed laid bare. This story played out over and over, to the point where the weather events started to get names, just like hurricanes do. (The first was called Boston).

Early last year Google made a major change to its algorithms that penalized what it believed was lower quality content. Dubbed “Panda,” the changes targeted “content farms” that cranked out SEO-friendly pages as AdSense bait. This had dramatic effects on many sites that specialized in “gaming” Google. It also hit sites that weren’t necessarily playing that game – updates like Panda often create collateral damage. Over time, and as it always does, Google fine-tuned Panda until the ecosystem stabilized.

I believe that Facebook is now learning how to manage its own weather. I don’t know the Dangerous Minds website well enough to know if it deserved the drop in traffic that occurred when Facebook had its Panda moment. But one thing does strike me as interesting to note: A significant drop in traffic means a particular site is losing audience that has proactively decided to click on a link inside their newsfeed. That click means the person leaves Facebook and goes to the Dangerous Minds site. To me, that’s a pretty serious sign of engagement.

However, one might argue that such a signal is not as important to Facebook as internal ones such as “liking” or “sharing” across the Facebook network. To that end, I am sure we’ve not heard the last round of serious grumbling that Facebook is gaming its own Edgerank algorithm to benefit Facebook’s internal goals – to the detriment of the “rest of the web.” Be they publishers or folks like George Takei – who, after all, wants to push his Facebook fans to any number of external links where they might buy his books or sign up to meet him at the next Comic Con – the rest of the web depends on “social traffic” from Facebook. The question is: should they optimize for that traffic, or will their efforts be nullified in the next Edgerank update?

Facebook is learning how to tread the delicate line between its own best interests, and those of its users – and the Internet That Is Not Facebook. Google does this every day – but it has a long history as a distributor of traffic off its main site. Facebook, not so much. Over time, the company will have to decide what kind of a relationship it wants to have with the “rest of the web.” It will probably have to start engaging more openly with its own ecosystem, providing guidance on best practices and how to avoid being penalized. This is a practice that took Google years to hone, and many still think the company has a lot of work to do.

Regardless, Facebook is now making its own weather. Now comes the fun part: Trying to predict it.


Tweets Belong To The User….And Words Are Complicated

By - September 06, 2012

(Image credit: GigaOm) Like many of you, I’ve been fascinated by the ongoing drama around Twitter over the past few months (and I’ve commented on part of it here, if you missed it). But to me, one of the most interesting aspects of Twitter’s evolution has gone mostly unnoticed: its ongoing legal battle with a Manhattan court over the legal status of tweets posted by an Occupy Wall St. protestor.

In this case, the State of New York is arguing that a tweet, once uttered, becomes essentially a public statement, stripped of any protections. The judge in the case concurs: In this Wired coverage, for example, he is quoted as writing “If you post a tweet, just like if you scream it out the window, there is no reasonable expectation of privacy.”

Twitter disagrees, based on its own Terms of Service, which state “what’s yours is yours – you own your Content.”

As the NYT puts it:

Twitter informed the (Occupy protestor) that the judge had ruled his words no longer belonged to him: (he) had turned them over to Twitter, in other words, to be spread across the world.

(Twitter’s) legal team appealed on Monday of last week. Tweets belong to the user, the company argued.

I find this line of argument compelling. Twitter is arguing that its users do not “turn over” their words to Twitter; instead, they license their utterances to the service but retain ownership, and those rights remain with the person who tweets. It’s a classic digital argument – sure, my words are out there on Twitter, but what’s out there is a licensed copy of my words. The words – the ineffable words – are still *mine.* I still have rights to them! One of those rights may well be privacy (interesting given Twitter’s public nature, but arguable), and I can imagine this builds a case for other ownership rights as well, such as the right to repurpose those words in other contexts.

If that is indeed the case, I can imagine a time in the not too distant future when people may want to extract some or all of their tweets, and perhaps license them to others as well. Or, they may want to use a meta-service (there’s that idea again) which allows them to mix and mash their tweets in various ways, and into any number of different containers. Imagine for a minute that one of those meta-services gets Very Big, and challenges Twitter on its own turf. Should that occur, well, the arguments made in this Manhattan case may well come into very sharp focus. And it’s just those kinds of services that are nervous about where Twitter is going.

Just noodling it out. I may be missing some key legal concept here, but this strikes me as a potentially important precedent. I plan to speak with folks at Twitter about all this soon, and hopefully, I’ll have some clarity. Stay tuned.

Twitter Drops Other Shoe, Which You All Saw Coming, Right?

By - August 30, 2012

Way back in the spring of 2010, when Twitter was constantly under siege for “not having a business model,” I co-hosted “Chirp,” Twitter’s first (and I think only) developer conference. This was just two and a half years ago, but it seems like a decade. But it was at that conference, in an interview with me, that then-COO (now CEO) Dick Costolo first laid out the vision for “the Interest Graph.” I wrote about this concept extensively (here, here, here), because I felt that understanding the interests of its users would be the core driver of Twitter’s long-term monetization strategy.

Fast forward to now. Twitter today announced that its “promoted” suite of ad units may now be targeted by user interest – to me, a long-expected move that should clarify things for anyone confused by the company’s recent announcements (cue link to recent tempest). Twitter’s statements around its decision to sever ties with Instagram and Tumblr couldn’t be more clear:

We understand that there’s great value associated with Twitter’s follow graph data, and we can confirm that it is no longer available to (insert company here)…

In short, if you are a potential competitor, and have the resources, motivation, and potential to harvest the connections between Twitter users at scale, well, expect to get cut off. You’re a threat to Twitter’s revenue stream.
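
Why is the follow graph such sensitive raw material? Because a workable interest profile falls out of it almost for free. A toy sketch (the accounts, topics, and scoring below are all invented for illustration):

```typescript
// Toy sketch: inferring a user's interests from their follow graph.
// The accounts, topics, and scoring are invented for illustration.

const ACCOUNT_TOPICS: Record<string, string[]> = {
  "@nba": ["basketball", "sports"],
  "@nytimes": ["news", "politics"],
  "@github": ["programming", "tech"],
};

// Count how often each topic shows up among the accounts a user follows;
// the result is a crude interest profile an ad matcher could consume.
function inferInterests(follows: string[]): Map<string, number> {
  const interests = new Map<string, number>();
  for (const account of follows) {
    for (const topic of ACCOUNT_TOPICS[account] ?? []) {
      interests.set(topic, (interests.get(topic) ?? 0) + 1);
    }
  }
  return interests;
}

console.log(inferInterests(["@nba", "@github"]));
// Map { "basketball" => 1, "sports" => 1, "programming" => 1, "tech" => 1 }
```

Scale that across 140 million active users, and you have the raw material for exactly the interest-targeted ad products Twitter announced today.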

None of this should come as a surprise, if you’ve been paying attention. Back in 2010, the second autocomplete answer for the statement “I don’t get…” in Google was “I don’t get Twitter”:

Interestingly, the same search today shows Twitter has only managed to drop down to third, even though the company now sports 140 million active users:

And while one could argue that in 2010, it was consumers who didn’t “get” Twitter, perhaps the folks scratching their heads via Google now are developers, who of late have been concerned that building on top of Twitter’s APIs might be dangerous for their long-term livelihood.

Twitter’s announcement today clarifies things quite a bit. Twitter has already declared its distaste for any business that manages how people consume tweets. Today, the other shoe dropped: Don’t build your business leveraging Twitter if you plan to run interest-based advertising at scale. Of course, the entire traditional media business is driven by interest-based advertising, which means Twitter’s business development group has a lot of work ahead. Interesting times ahead, to be sure.

Who’s On First? (A Modest Proposal To Solve The Problem with First- and Third-Party Marketing)

By - July 26, 2012

Early last month I wrote a piece entitled Do Not Track Is An Opportunity, Not a Threat. In it I covered Microsoft’s controversial decision to incorporate a presumptive “opt out of tracking” flag in the next release of its browser, which many in the ad industry see as a major blow to the future of our business.

In the piece, I argued that Microsoft’s move may well force independent publishers (you know, like Searchblog, as well as larger sites like CNN or the New York Times) to engage in a years-overdue dialog with their readers about the value exchange between publisher, reader, and marketer. I laid out a scenario and proposed some language to kick that dialog off, but I gave short shrift to a problematic and critical framing concept. In this post, I hope to lay that concept out and offer, by way of example, a way forward. (Caveat: I am not an expert in policy or tech. I’ll probably get some things wrong, and hope readers will correct me if and when I do.)

The “concept” has to do with the idea of a first-party relationship – a difficult-to-define phrase that, for purposes of this post, means the direct relationship a publisher or a service has with its consumer. This matters, a lot, because in the FTC’s recently released privacy framework, “first-party marketing” has been excluded from proposed future regulation around digital privacy and the use of data. However, “third-party” marketing, the framework suggests, will be subject to regulation that could require “consumer choice.”

OK, so in that last sentence alone are three terms, which I’ve put in quotes, that need definition if we are going to understand some pretty important issues. The most important is “first-party marketing,” and it’s damn hard to find a definition of that in the FTC document. But if you go back to the FTC’s *preliminary* report, issued in December of 2010, you can find this:

First-party marketing: Online retailers recommend products and services based upon consumers’ prior purchases on the website.

Later in the report, the term is further defined:

Staff proposes that first-party marketing include only the collection of data from a consumer with whom the company interacts directly for purposes of marketing to that consumer.

And in a footnote:

Staff also believes that online contextual advertising should fall within the “commonly accepted practices” category (Ed. note: Treated as OK, like first party marketing). Contextual advertising involves the delivery of advertisements based upon a consumer’s current visit to a web page or a single search query, without the collection and retention of data about the consumer’s online activities over time. As staff concluded in its 2009 online behavioral advertising report, contextual advertising is more transparent to consumers and presents minimal privacy intrusion as compared to other forms of online advertising. See OBA Report, supra note 37, at 26-27 (where a consumer has a direct interface with a particular company, the consumer is likely to understand, and to be in a position to control, the company’s practice of collecting and using the consumer’s data).

The key issue here for publishers, as far as I can tell, is this: “the delivery of advertisements based upon a consumer’s current visit to a web page or a single search query, without the collection and retention of data about the consumer’s online activities over time…where a consumer has a direct interface with a particular company, the consumer is likely to understand, and to be in a position to control, the company’s practice of collecting and using the consumer’s data.”
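
To make that distinction concrete, here’s a minimal sketch of what the FTC seems to mean by contextual advertising: the ad is chosen from the current page alone, and nothing about the visitor is collected or retained. (The inventory and keywords are invented for illustration.)

```typescript
// Minimal sketch of contextual ad selection as the FTC describes it: the ad
// depends only on the page being viewed right now. No cookie is read or
// written, and nothing about the visitor is retained. Inventory is invented.

type Ad = { advertiser: string; creativeUrl: string; keywords: string[] };

const INVENTORY: Ad[] = [
  { advertiser: "TrailCo", creativeUrl: "/ads/boots.png", keywords: ["hiking", "outdoors"] },
  { advertiser: "ChipWorks", creativeUrl: "/ads/cpu.png", keywords: ["hardware", "search", "tech"] },
];

// Score each ad by keyword overlap with the current page -- and only the
// current page. A behavioral system would also consult a stored profile.
function pickContextualAd(pageKeywords: string[]): Ad | undefined {
  let best: Ad | undefined;
  let bestScore = 0;
  for (const ad of INVENTORY) {
    const score = ad.keywords.filter((k) => pageKeywords.includes(k)).length;
    if (score > bestScore) {
      best = ad;
      bestScore = score;
    }
  }
  return best;
}

console.log(pickContextualAd(["search", "tech", "policy"])?.advertiser); // "ChipWorks"
```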

Whew. OK. We’re getting somewhere. Now, when that 2010 report came out, many in our industry freaked out, because of the next sentence, one which refers to – wait for it – third party marketing:

If a company shares data with a third party other than a service provider acting on the company’s behalf – including a business affiliate unless the affiliate relationship is clear to consumers through common branding or similar means – the company’s practices would not be considered first-party marketing and thus they would fall outside of “commonly accepted practices” … Similarly, if a website publisher allows a third party, other than a service provider, to collect data about consumers visiting the site, the practice would not be “commonly accepted.”

Now, this was a preliminary report, and the final report, which as I said earlier came out this past Spring, incorporates a lot of input from companies engaged in what the FTC described as “third party” marketing – companies like Google that were very concerned that the FTC was about to wipe out entire swathes of their business. And the fact is, it’s still not clear what’s going to be OK, and what isn’t. For now, my best summary is this: it’s OK for websites that have a “first party” relationship to use data collected on the site to market to consumers. If, however, those sites were to let “third parties” market to consumers, then, at some point soon, the sites need to figure out a way to give consumers a choice to opt out. If they don’t, they may be subject to regulation down the road.

Which brings us back to “Do Not Track,” or DNT. Now DNT has been held up as the easiest way to give consumers a choice about this issue – if a consumer has DNT enabled on their browser, then that consumer has very clearly made a choice – they don’t want third-party advertisements or data collection, thank you very much. See how easy that was?

Wrong, wrong, wrong!!! As implemented by Microsoft in IE 10, DNT is an extremely blunt instrument, one that, in fact, does *not* constitute a choice. It’s defaulted to “on,” which means that a consumer is not ever given a choice one way or the other. And once it’s on, it’s the same for every single site – which means you can’t say that you’re fine with third-party ads on a site you love (say, Searchblog, naturally), but not fine with a site you don’t like so much (say, I dunno, You Got Rick Rolled).

That’s pretty lame. Shouldn’t we, as consumers, be able to choose which sites we trust, and which we don’t? That’s pretty much the point of my post on DNT last month.

Fact is, we don’t really have a way to demonstrate that trust. Many in the industry – including the IAB, where I am a board member – are working to clarify all this with the FTC. The working assumption is that it’s far too much to ask of most publishing sites to give consumers a choice, much less give them access to the data used to “target” them.

Well, I’m not so sure about that.

Check out this screen shot from independent site GigaOm (yes, FM works with GigaOm):

A few other sites are starting to do similar notices – and I applaud them (this is already becoming standard practice in the UK, due to strict regulations around cookies). GigaOm is saying, in essence, that by simply continuing to read the site, you agree to their privacy policies. Now, take a look at what GigaOm’s policy has to say about “third party advertising:”

GigaOM may allow third party advertising serving companies, including ad networks (“Advertisers”), to display advertisements or provide other advertising services on GigaOM. These third party Advertisers may use techniques other than HTTP cookies to recognize your computer or device and/or to collect and record demographic and other Information about you, including your activities on or off GigaOM. These techniques may be used directly on GigaOM….Advertisers may use the information collected to display advertisements that are tailored to your interests or background and/or associate such information with your subsequent visits, purchases or other activities on other websites. Advertisers may also share this information with their clients or other third parties.

GigaOM has no responsibility for the technologies, tools or practices of the Advertisers that provide advertising and related services on GigaOM. This Privacy Policy does not cover any collection, use or disclosure of Information by Advertisers who provide advertising and related services on GigaOM. These Advertisers’ information gathering practices, including their policies regarding the use of cookies and other tracking technologies, are governed entirely by their own privacy policies, not this one.

To summarize: By reading GigaOm, you’ve made a choice, and that choice is to let GigaOm use third-party advertising. It’s a nifty move, and one I applaud: GigaOm has just established you as a first party to its content and services just like….

….Facebook, which just announced revenue of more than a billion dollars last quarter. Facebook, of course, has a first-party relationship with 955 million or so of us – we’ve already “opted in” to its service, through the Terms of Service we’ve all agreed to (and probably not read). We’ve made a choice as consumers, and we’ve chosen to be marketed to on Facebook’s terms.

The same is true of Apple, Amazon, eBay, Yahoo, and any number of other large services which require registration and acceptance of Terms of Service in order for us to gain any value from their platforms. Google and Microsoft have been frantically catching up, getting as many of us as they can to register our identity and agree to a unified TOS in some way.

But what about independent publishers? You know, the rest of the web? Well, save folks like GigaOm (and AllThingsD, which warns its audience about cookies), we’ve never really paid attention to this issue. In the past, publishers have avoided doing anything that might get in the way of an audience consuming their content – it’s a death sentence if you’re engaged in the high holy art of Increasing Page Views. And bigger publishers like Time or Conde Nast don’t want to rock the boat; they’ll wait till a consensus forms, and then follow it.

But I like what GigaOm has done. It’s a very clear notice, it goes away after the first visit, and it reappears only if you’ve cleared your cookies (which happens a lot if you run an anti-virus program).
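
I haven’t seen GigaOm’s actual code, but the mechanics described above – show the notice once, remember the acknowledgment in a cookie, bring the notice back when cookies are cleared – take only a few lines. A purely hypothetical sketch:

```typescript
// Hypothetical sketch of a GigaOm-style first-visit notice (not their code).
// The notice shows until acknowledged; the acknowledgment lives in a cookie,
// so clearing cookies (as many anti-virus tools do) brings the notice back.

const NOTICE_COOKIE = "privacy_notice_seen";

function hasSeenNotice(): boolean {
  return document.cookie
    .split("; ")
    .some((entry) => entry === `${NOTICE_COOKIE}=1`);
}

function showNotice(): void {
  const bar = document.createElement("div");
  bar.textContent =
    "By continuing to use this site, you agree to our privacy policy, " +
    "including the use of third-party advertising cookies. ";

  const ok = document.createElement("button");
  ok.textContent = "Got it";
  ok.onclick = () => {
    // Remember the acknowledgment for a year; a cleared cookie jar resets it.
    document.cookie = `${NOTICE_COOKIE}=1; max-age=${60 * 60 * 24 * 365}; path=/`;
    bar.remove();
  };

  bar.appendChild(ok);
  document.body.prepend(bar);
}

if (!hasSeenNotice()) showNotice();
```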

I think it’s time the “rest of the web” follows their lead. We rely on third-party advertising services (like FM) to power our sites. We live in uncertain times as it relates to regulation. And certainly we have direct relationships of trust with our audiences – or you wouldn’t be reading this far down the page. It’s time the independent web declared the value of our first-party relationships with audiences, and showed the government – and our readers – that we have nothing to hide.

I plan to look into ways we might make easily available the code and language necessary to enact these policies. I’ll be back with more as I have it….

*Now, the other two terms bear some definition as well. I think it’s fair to say “consumer choice” means “give the consumer the ability to decide if they want their data used, and for what purposes,” and “third party marketing” means the use of data and display of commercial messages on a first party site by third-party companies – companies that are not the owner of the site or service you are using.

What We Lose When We Glorify “Cashless”

By - July 24, 2012

Look, I’m not exactly a huge fan of grimy greenbacks, but I do feel a need to point out something that most coverage of current Valley darling Square seems to miss: The “Death of Cash” also means the “death of anonymous transactions” – and no matter your view of the role of government and corporations in our life, the very idea that we might lose the ability to transact without the creation of a record merits serious discussion. Unfortunately, this otherwise worthy cover story in Fortune about Square utterly ignores the issue.

And that’s too bad. A recent book called “The End of Money” does get into some of these issues – it’s on my list to read – but in general, I’ve noticed a lack of attention to the anonymity issue in coverage of hot payment startups. In fact, in interviews I’ve read, the author of “The End of Money” makes the point that cash is pretty much a blight on our society – in that it’s the currency of criminals and a millstone around the necks of the poor.

Call it a hunch, but I sense that many of us are not entirely comfortable with a world in which every single thing we buy creates a cloud of data. I’d like to have an option to not have a record of how much I tipped, or what I bought at 1:08 am at a corner market in New York City. Despite protections of law, technology, and custom, that data will remain forever, and sometimes, we simply don’t want it to.

What do you think?  (And yes, I am aware of bitcoin…)

BTW, this mini-rant is very related to my last post: First, Software Eats the World, Then, The Mirror World Emerges.

Google’s “Mute” Button: Why Didn’t I Think Of That? Oh, Wait…

By - June 30, 2012

One of my pet peeves about our industry is how slowly we change – I understand it takes a long time to gather consensus (it took three years to get AdChoices rolled out, for example) – but man, why don’t the big players, like Google, innovate a bit more when it comes to display advertising?

Well, yesterday Google did just that, announcing a “mute this ad” feature that it will roll out across its network over the next few months. The feature does what you might expect it to do – it stops a particular ad from “following” you around the web. It will look like this:

As you can see, the “mute this ad” option sits right next to the AdChoices icon, adding a bit more clutter to the creative, but also more control for consumers, in particular those who find the practice of “retargeting” irritating.

All I can say is, it’s about time. Back in August of 2010, I wrote about my own experience: On Retargeting: Fix The Conversation. In the post, I suggested:

…as I’ve said a million times, marketing is a conversation. And retargeted ads are part of that conversation. I’d like to suggest that retargeted ads acknowledge, with a simple graphic in a consistent place, that they are in fact a retargeted ad, and offer the consumer a chance to tell the advertiser “Thanks, but for now I’m not interested.” Then the ad goes away, and a new one would show up.

Well, it looks like Google has gotten with the program. Of course, Facebook already has that “X” on all of its display ads, but so far, retargeting hasn’t come to Facebook – yet. Watch that space, because I gotta believe it will soon.
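
As far as I know, Google hasn’t said how “mute” is wired up internally, but conceptually, the mechanic I asked for in 2010 is just a per-user mute list consulted at ad-selection time. A hypothetical sketch (the names and structure here are mine, not Google’s):

```typescript
// Hypothetical sketch of a "mute this ad" mechanic (not Google's code):
// a per-user mute list, consulted before a retargeted creative is served.

type Creative = { campaignId: string; html: string };

// In practice this would persist server-side or in a cookie tied to the user.
const muted = new Set<string>();

function muteAd(campaignId: string): void {
  muted.add(campaignId); // "Thanks, but for now I'm not interested."
}

// Skip anything the user has muted; fall through to the next candidate,
// so the slot still gets filled with a different ad.
function selectAd(candidates: Creative[]): Creative | undefined {
  return candidates.find((c) => !muted.has(c.campaignId));
}

const candidates: Creative[] = [
  { campaignId: "shoes-retarget", html: "<div>Those shoes, again</div>" },
  { campaignId: "coffee-brand", html: "<div>A coffee ad</div>" },
];

muteAd("shoes-retarget");
console.log(selectAd(candidates)?.campaignId); // "coffee-brand"
```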

Google’s Transparency Report: A Good And Troubling Thing

By - June 19, 2012

A couple of days ago Google released its latest “Transparency Report,” part of the company’s ongoing commitment to disclose requests by individuals, corporations, and governments to change what users see in search results and other Google properties such as YouTube.

The press coverage of Google’s report was copious – far more than in the prior two years, and for good reason. This week’s disclosure included Google’s twice-yearly report of government takedown requests (corporate and individual requests are updated in near real time). The news was not comforting.

As the Atlantic wrote:

The stories Google tells to accompany the broad-brush numbers (found in the “annotations” section and its blog) paint a picture to accompany those numbers that Google calls “alarming” — noting, in particular, that some of the requests for removal of political speech come from “Western democracies not typically associated with censorship.”

The number of takedown requests from governments is on the rise – up about 100% year over year for the US alone. Part of this, perhaps, can be explained by what might be called a “catch-up effect” – governments are coming to terms with the pervasive power of digital information, and finally getting their heads around trying to control it, much as they have long attempted to control more analog forms of information like newspapers, television stations, and books.

But as we know, digital information is very, very different. It’s one thing to try to control the press, it’s quite another to do the same with the blog postings, YouTube videos, Twitter feeds, and emails of an entire citizenry. Given the explosion of arguably illegal or simply embarrassing information available to Google’s crawlers (cough, cough, Wikileaks), I’m rather surprised that worldwide government takedown requests haven’t grown at an exponential rate.

But to me, the rise of government takedown requests isn’t nearly as interesting as the role Google and other companies play in all of this. As I’ve written elsewhere, it seems that as we move our public selves into the digital sphere, we seem to be also moving our trust from the institutions of government to the institution of the corporation. For example, our offline identity is established by a government ID like a driver’s license. Online, many of us view Facebook as our identity service. Prior to email, our private correspondence was secured by a government institution called the postal service. Today, we trust AOL, Microsoft, Yahoo, Facebook, or Gmail with our private utterances. When documents were analog, they were protected by government laws against unreasonable search and seizure. When they live in the cloud… the ground is shifting. I could go on, but I think you get my point.

As we move ourselves into the realm of digital information, a realm mediated by private corporations, those corporations naturally become the focus of government attention. I find Google’s Transparency Report to be a refreshing response to this government embrace – but it’s an exercise that almost no other corporation completes (Twitter has a record of disclosing, but on a case-by-case basis). Where is Amazon’s Transparency Report? Yahoo’s? Microsoft’s? And of course, the biggest question in terms of scale and personal information – where is Facebook’s? Oh, and where is Apple’s?

Put another way: If we are shifting our trust from the government to the corporation, who’s watching the corporations? With government, we’ve at least got clear legal recourse – in the United States, we’ve got the Constitution, the Freedom of Information Act, and a deep legal history protecting the role of the press as the Fourth Estate. With corporations, we’re on far less comforting ground – most of us have agreed to Terms of Service we’ve never read, much less studied in sixth grade civics class.

As the Atlantic concludes:

Google is trying to make these decisions responsibly, and the outcome, as detailed in the report, is reason to have confidence in Google as an arbiter of these things if, as is the case, Google is going to be the arbiter of these issues. But unlike a US Court, we don’t see the transcripts of oral arguments, or the detailed reasoning of a judge. …The Transparency Report sheds more light on the governments Google deals with than with its own internal processes for making judgments about compliance….Google’s Transparency Report is the work of a company that is grappling with its power and trying to show its work.

I applaud Google’s efforts here, but I’m wary of placing such an important public trust in the hands of private corporations alone. Google is a powerful company, with access to a wide swath of the world’s information. But with the rise of walled gardens like iOS and Facebook, an increasing amount of our information doesn’t touch Google’s servers. We literally are in the dark about how this data is being accessed by governments around the world.

Google is setting an example I hope all corporations with access to our data will follow. So far, however, most companies don’t. And that should give all of us pause, and it should be the basis of an ongoing conversation about the role of government in our digital lives.

Do Not Track Is An Opportunity, Not a Threat

By - June 10, 2012

This past week’s industry tempest centered around Microsoft’s decision to implement “Do Not Track” (known as “DNT”) as a default on Internet Explorer 10, a browser update timed to roll out with the company’s long-anticipated Windows 8 release.

Microsoft’s decision caught much of the marketing and media industry by surprise – after all, Microsoft itself is a major player in the advertising business, and in that role has been a strong proponent of the current self-regulatory regime, which includes, at least until Microsoft tossed its grenade into the marketplace, a commitment to implementation of DNT as an opt-in technology, rather than as a default.*

For most readers I don’t need to explain why this matters, but in case you’re new to the debate: when enabled, DNT sets a “flag” telling websites that you don’t want data about your visit to be used for purposes of creating a profile of your browsing history (or for any other reason). Whether we like it or not, such profiles have driven a very large business in display advertising over the past 15 years. Were a majority of consumers to implement DNT, the infrastructure that currently drives wide swathes of the web’s monetization ecosystem would crumble, taking a lot of quality content along with it.
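
Mechanically, DNT is nothing exotic: a participating browser adds a single “DNT: 1” header to each HTTP request, and everything a website does in response is voluntary. A minimal server-side sketch, here using Node and Express (my choice for illustration):

```typescript
// Minimal sketch of honoring the DNT header server-side. The browser sends
// "DNT: 1" as a plain HTTP request header; everything the site does with it
// is voluntary. Uses Node with Express; the page rendering is stubbed out.
import express from "express";

const app = express();

app.get("/article", (req, res) => {
  const doNotTrack = req.header("DNT") === "1";
  // Honor the flag: skip behavioral ad tags and profile cookies entirely.
  res.send(renderPage({ tracking: !doNotTrack }));
});

function renderPage(opts: { tracking: boolean }): string {
  return opts.tracking
    ? "<html><!-- page with behavioral ad tags --></html>"
    : "<html><!-- page with contextual ads only --></html>";
}

app.listen(3000);
```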

Once released, IE 10 could quickly grab an estimated 25-30% of browser market share. The idea that the online advertising industry could lose almost a third of its value due to the actions of one rogue player is certainly cause for alarm. Last week’s press was full of conspiracy theories about why Microsoft was making such a move. The company claims it just wants to protect users’ privacy, which strikes me as disingenuous – it’s far more likely that Microsoft is willing to spike its relatively small advertising business in exchange for striking a lethal blow to Google’s core business model, both in advertising and in browser share.

I’m quite certain the Windows 8 team is preparing to market IE 10 – and by extension, Windows 8 – as the safe, privacy-enhancing choice, capitalizing on Google’s many government woes and consumers’ overall unease with the search giant’s power. I’m also quite certain that Microsoft, like many others, suffers from a case of extreme Apple envy, and wishes it had a pristine, closed-loop environment like iOS that it could completely control. In order to create such an environment, Microsoft needs to gain consumers’ trust. Seen from that point of view, implementing DNT as a default just makes sense.

But the more I think it through, the less perturbed I am by the whole affair. In fact, I’m rather excited by it.

First off, it’s not clear that IE10’s approach to DNT will matter. When it comes to whether or not a site has to comply with browser flags such as DNT, websites and third parties look to the standards-setting body known as the W3C. That organization’s proposed draft specification on DNT is quite clear: It says no company may enforce a default DNT setting for a user, one way or the other. In other words, this whole thing could be a tempest in a teapot. Wired recently argued that Microsoft will be forced to back down and change its policy.

But I’m kind of hoping Microsoft will keep DNT in place. I know, that’s a pretty crazy thing for a guy who started an advertising-run business to say, but in this supposed threat I see a major opportunity.

Imagine a scenario, beginning sometime next year, when website owners start noticing significant numbers of visitors with IE10 browsers swinging by their sites. Imagine further that Microsoft has stuck to its guns, and all those IE10 browsers have their flags set to “DNT.”

To me, this presents a huge opportunity for the owner of a site to engage with its readers, and explain quite clearly the fact that good content on the Internet is paid for by good marketing on the Internet. And good marketing often needs to use “tracking” data so as to present quality advertising in context. (The same really can and should be said of content on the web – but I’ll just stick to advertising for now).

Advertising and content have always been bound together – in print, on television, and on the web. Sure, you can skip the ad – just flip the page, or press “ffwd” on your DVR. But great advertising, as I’ve long argued, adds value to the content ecosystem, and has as much a right to be in the conversation as does the publisher and the consumer.

Do Not Track provides our industry with a rare opportunity to speak out and explain this fact, and while the dialog box I’ve ginned up at the top of this post is fake, I’d love to see a day when notices like it are popping up all over the web, reminding consumers not only that quality content needs to be supported, but that the marketers supporting it actually deserve our attention as well.

At present, the conversation between content creator, content consumer, and marketer is poorly instrumented and rife with mistrust. Our industry’s “ad choices” self regulatory regime – those little triangle icons you see all over display ads these days – is a good start. But we’ve a long way to go. Perhaps unwittingly, Microsoft may be pushing us that much faster toward a better future.

*I am on the board of the IAB, one of the major industry trade groups which promotes self-regulation. The opinions here are my own, as usual. 

The Audacity of Diaspora

By - May 13, 2012

Last Friday Businessweek ran a story on Diaspora, a social platform built from what might be called Facebook anti-matter. It’s a great read that chronicles the project’s extraordinary highs and lows, from Pebble-like Kickstarter success to the loss of a founder to suicide. Given the overwhelming hype around Facebook’s IPO this week, it’s worth remembering such a thing exists – and even though it’s in private beta, Diaspora is one of the largest open source projects going right now, and boasts around 600,000 beta testers.

I’ve watched Diaspora from the sidelines, but anyone who reads this site regularly will know that I’m rooting for it. I was surprised – and pleased – to find out that Diaspora is executing something of a “pivot” – retaining its core philosophy of being a federated platform where “you own your own data” while at the same time adding new Tumblr and Pinterest-like content management features, as well as integration with – gasp! – Facebook.  And this summer, the core team behind the service is joining Y Combinator in the Valley – a move that is sure to accelerate its service from private beta to public platform.

I like Diaspora because it’s audacious, it’s driven by passion, and it’s very, very hard to do. After all, who in their right mind would set as a goal taking on Facebook? That’s sort of like deciding to build a better search engine – very expensive, with a high likelihood of failure. But what’s really audacious is the vision that drives Diaspora – that everyone owns their own data, and everyone has the right to do with it what they want. The vision is supported by a federated technology platform – and once you federate, you lose central control as a business. Then, business models get very, very hard. So you’re not only competing against Facebook, you’re also competing against the reality of the marketplace – centralized domains are winning right now (as I pointed out here).

It seems what Diaspora is attempting to do is take the functionality and delight of the dependent web, and mix it with the freedom and choice offered by the independent web. Of course, that sounds pretty darn good to me.

Given the timing of Facebook’s public debut, the move to Y Combinator, and perhaps just my own gut feel, I think Diaspora is one to watch in coming months. As of two days ago, the site is taking registrations for its public debut. Sign up here.

Larry Lessig on Facebook, Apple, and the Future of “Code”

By - May 09, 2012

Larry Lessig is an accomplished author, lawyer, and professor, and until recently, was one of the leading active public intellectuals in the Internet space. But as I wrote in my review of his last book (Is Our Republic Lost?), in the past few years Lessig has changed his focus from Internet law to reforming our federal government.

But that doesn’t mean Lessig has stopped thinking about our industry, as the dialog below will attest. Our conversation came about last month after I finished reading Code and Other Laws of Cyberspace, Version 2. The original book, written in 1999, is still considered an authoritative text on how the code of computing platforms interacts with our legal and social codes. In 2006, Lessig “crowdsourced” an update to his book, and released it as “Version 2.0.” I’d never read the updated work (and honestly didn’t remember the details of the first book), so finally, six years later, I dove in again.

It’s a worthy dive, but not an easy one. Lessig is a lawyer by nature, and his argument is laid out like proofs in a case. Narrative is sparse, and structure sometimes trumps writing style. But his essential point – that the Internet is not some open “wild west” destined to always be free of regulation – is soundly made. In fact, Lessig argues, the Internet is quite possibly the most regulable technology ever invented, and if we don’t realize that fact, and protect ourselves from it, we’re in for some serious pain down the road. And for Lessig, the government isn’t the only potential regulator. Instead, Lessig argues, commercial interests may become the most pervasive regulators on the Internet.

Indeed, during the seven years between Code’s first version and its second, much had occurred to prove Lessig’s point. But even as Lessig was putting the finishing touches on the second version of his manuscript, a new force was erupting from the open web: Facebook. And a year after that, the iPhone redefined the Internet once again.

In Code, Lessig enumerates several examples of how online services create explicit codes of control – including the early AOL, Second Life, and many others. He takes the reader through important lessons in understanding regulation as more than just governmental – explaining normative (social), market (commercial), and code-based (technological) regulation. He warns that once we commit our lives to commercial services that hold our identity, a major breach of security will most likely force the government into enacting overzealous and anti-constitutional measures (think 9/11 and the Patriot Act). He makes a case for the proactive creation of an intelligent identity layer for the Internet, one that might offer just the right amount of information for the task at hand. In 2006, such an identity layer was a controversial idea – no one wanted the government, for example, to control identity on the web.

But for reasons we’re still parsing as a culture, in the six years since the publication of Code v2, nearly 1 billion of us have become comfortable with Facebook as our de facto identity, and hundreds of millions of us have become inhabitants of Apple’s iOS.

Instead of going into more detail on the book (as I have in many other reviews), I thought I’d reach out to Lessig and ask him about this turn of events. Below is a lightly edited transcript of our dialog. I think you’ll find it provocative.

As to the book: If you consider yourself active in the core issues of the Internet industry, do yourself a favor and read it. It’s worth your time.

Q: After reading your updated Code v2, which among many other things discusses how easily the Internet might become far more regulated than it once was, I found myself scribbling one word in the margins over and over again. That word was “Facebook.”

You and your community updated your 1999 classic in 2006, a year or two before Facebook broke out, and several years before it became the force it is now. In Code you cover the regulatory architectures of places where people gather online, including MUDS, AOL, and the then-hot darling known as Second Life. But the word Facebook isn’t in the text.

What do you make of Facebook, given the framework of Code v2?

Lessig: If I were writing Code v3, there’d be a chapter — right after I explained the way (1) code regulates, and (2) commerce will use code to regulate — titled: “See, e.g., Facebook.” For it strikes me that no phenomenon since 2006 better demonstrates precisely the dynamic I was trying to describe. The platform is dominant, and built into the platform are a million ways in which behavior is regulated. And among those million ways are 10 million instances of code being used to give Facebook a kind of value that without code couldn’t be realized. Hundreds of millions from across the world live “in” Facebook. It more directly structures and regulates their lives while there (regulating behavior) than any government. There are of course limits to what Facebook can do. But the limits depend upon what users see. And Facebook has not yet committed itself to the kind of transparency that should give people confidence. Nor has it tied itself to the earlier and enabling values of the internet, whether open source or free culture.

Q: Jonathan Zittrain wrote his book two years after Code v2, and warned of non-generative systems that might destroy the original values of the Internet. Since then, Apple iOS (the “iWorld”) and Facebook have blossomed, and show no signs of slowing down. Do you believe we’re in a pendulum swing, or are you more pessimistic – that consumers are voting with their dollars, devices, and data for a more closed ecosystem?

Lessig: The trend JZ identified is profound and accelerating, and most of us who celebrate the “free and open” net are simply in denial. Facebook now lives oblivious to the values of open source software, or free culture. Apple has fully normalized the iNannyState. And unless Google’s Android demonstrates how open can coexist with secure, I fear the push away from our past will only continue. And then when our i9/11 event happens — meaning simply a significant and destructive cyber event, not necessarily tied to any particular terrorist group — the political will to return to control will be almost irresistible.

The tragedy in all this is that it doesn’t have to be this way. If we could push to a better identity layer in the net, we could get both better privacy and better security. But neither side in this extremist’s battle is willing to take the first step towards this obvious solution. And so in the end I fear the extremists I like least will win.

Q: You seem profoundly disappointed in our industry. What can folks who want to make a change do?

Lessig: Not at all. The industry is doing what industry does best — doing well, given the rules as they are. What industry is never good at (and is sometimes quite evil at) is identifying the best mix of rules. Government is supposed to do something with that. Our problem is that we have today such a fundamentally dysfunctional government that we don’t even recognize the idea that it might have a useful role here. So we get stuck in these policy-dead-ends, with enormous gains to both sides left on the table.

And that’s only to speak about the hard problems — which security in the Net is. Much worse (and more frustrating) are the easy problems which the government also can’t solve, not because the answer isn’t clear (again, these are the easy problems) but because the incumbents are so effective at blocking the answer that makes more sense so as to preserve the answer that makes them more dollars. Think about the “copyright wars” — practically every sane soul is now focused on a resolution of that war that is almost precisely what the disinterested souls were arguing a dozen years ago (editor’s note: abolishing DRM). Yet the short-termism of the industry wouldn’t allow those answers a dozen years ago, so we have had a completely useless war which has benefited no one (save the lawyers-as-soldiers in that war). We’ve lost a decade of competitive innovation in ways to spur and spread content in ways that would ultimately benefit creators, because the dinosaurs owned the lobbyists.

—-

I could have gone on for some time with Lessig, but I wanted to stop there, and invite your questions in the comments section. Lessig is pretty busy with his current work, which focuses on those lobbyists and the culture of money in Congress, but if he can find the time, he’ll respond to your questions in the comments below, or to me in email, and I’ll update the post.

###

Other works I’ve reviewed: 

You Are Not A Gadget by Jaron Lanier (review)

Wikileaks And the Age of Transparency by Micah Sifry (review)

Republic Lost by Larry Lessig (review)

Where Good Ideas Come From: A Natural History of Innovation by Steven Johnson (my review)

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (my review)

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)