Policy Archives | John Battelle’s Search Blog

Architectures of Control: Harvard, Facebook, and the Chicago School

By - April 02, 2012

Early in “Code v2,” which I hope to review in full at some point this week, Lessig compares the early campus networks of two famous educational institutions. He knew them well – in the mid-1990s, he taught at both Harvard and the University of Chicago. Like most universities, Harvard and Chicago provided Internet access to their students, but they took quite different approaches to doing so. True to its philosophy of free and anonymous speech, Chicago simply offered an open connection to its students: plug in anywhere on campus, and start using the net.

Harvard’s approach was the polar opposite, as Lessig explains:

At Harvard, the rules are different….You cannot plug your machine to the Net at Harvard unless the machine is registered – licensed, approved, verified. Only members of the university community can register their machines. Once registered, all interactions with the network are monitored and identified to a particular machine. To join the network, users have to “sign” a user agreement. The agreement acknowledges this pervasive practice of monitoring. Anonymous speech on this network is not permitted – it is against the rules. Access can be controlled based on who you are, and interactions can be traced based on what you did.

In the preceding paragraph, change “Harvard” and “university” to “Facebook,” and – there you have it. Facebook was the product of a Harvard mindset – and probably could never have come from a place like Chicago or Berkeley (where I taught).

I called up Harvard’s IT department to see if the policy had changed since Lessig’s experiences in the 1990s, or Mark Zuckerberg’s six or so years ago. The answer was no – machines still must be registered, and all actions across Harvard’s network are trackable.

There are many benefits associated with a “real names” identity policy, including personalized services and a far greater likelihood of civil discourse. But the reverse is also true: without the right to speak anonymously (or pseudonymously), dissent and exploration are often muted. And of course, there’s that tracking/monitoring/data issue as well…

In Code, Lessig goes on to predict that while the original Internet began with a very Chicago-like approach to the world, architectures of regulation and control will ultimately end up winning if we don’t pay close attention.

He wrote the original Code in 1999, and updated it in 2006. The word Facebook is not in either version of the text. I just thought it a curious anecdote worth sharing.


China To Bloggers: Stop Talking Now. K Thanks Bye.

By - March 31, 2012

Yesterday I finished reading Larry Lessig’s updated 1999 classic, Code v2. I’m five years late to the game, as the book was updated in 2006 by Lessig and a group of fans and readers (I tried to read the original in 1999, but I found myself unable to finish it. Something to do with my hair being on fire for four years running…). In any event, no sooner had I read the final page yesterday than this story broke:

Sina, Tencent Shut Down Commenting on Microblogs (WSJ)

In an odd coincidence, late last night I happened to share a glass of wine with a correspondent for the Economist who is soon to be reporting from Shanghai. Of course this story came up, and an interesting discussion ensued about the balance one must strike to cover business in a country like China. Essentially, it’s the same balance any Internet company must strike as it attempts to do business there: Try to enable conversation, while at the same time regulating that conversation to comply with the wishes of a mercurial regime.

Those of us who “grew up” in Internet version 1.0 have a core belief in the free and open exchange of ideas, one unencumbered by regulation. We also tend to think that the Internet will find a way to “route around” bad law – and that what happens in places like China or Iran will never happen here.

But as Lessig points out quite forcefully in Code v2, the Internet is, in fact, one of the most “regulable” technologies ever invented, and it’s folly to believe that only regimes like China will be drawn toward leveraging the control it allows. In addition, it need not be governments that create these regulations; it could well be the platforms and services we’ve come to depend on. And while those services and platforms might never be as aggressive as China or Iran, they are already laying down the foundation for a slow erosion of values many of us take for granted. If we don’t pay attention, we may find ourselves waking up one morning and asking…Well, How Did I Get Here?

More on all of this soon, as I’m in the midst of an interview (via email) with Lessig on these subjects. I’ll post the dialog here once we’re done.

 

Will Transparency Trump Secrecy In The Digital Age?

By - March 22, 2012

Next week I travel to Washington DC. While I am meeting with a wide swath of policymakers, thinkers, and lobbyists, I don’t have a well-defined goal – I’m not trying to convince anyone of my opinion on any particular issue (though I’m sure I’ll have some robust debates), nor am I trying to pull pungent quotes from political figures for my book. Rather, I am hoping to steep myself in the culture of the place, make a number of new connections, and perhaps discover a bit more about how this unique institution called “the Federal Government” really works.

To prepare, I’ve been reading a fair number of books, including Larry Lessig’s Republic Lost, which I reviewed last month, and The Future of the Internet–And How to Stop It by Jonathan Zittrain, which I reviewed last year.

Wikileaks And the Age of Transparency by Micah Sifry is the latest policy-related book to light up my Kindle. I finished it four weeks ago, but travel and conferences have gotten in the way of my writing it up here. But given I’ve already moved on to Lessig’s updated Code: And Other Laws of Cyberspace, Version 2.0 (highly recommended), and am about to dive into MacKinnon’s new book Consent of the Networked: The Worldwide Struggle For Internet Freedom, I figured I’d better get something up, and quick. I’m way behind on my writing about my reading, so to speak.

Sifry’s book turns on this question, raised early in the work: “Is Wikileaks a symptom of decades of governmental and institutional opacity, or is it a disease that needs to be stopped at all costs?”

Put another way, if we kill Wikileaks (as many on both the left and right wish we would), what do we lose in the process?

Sifry argues that for all its flaws (including those of its founder and mercurial leader Julian Assange, whom Sifry has met), Wikileaks – or at least what Wikileaks represents – is proving a crucial test of democracy in an age where our most powerful institutions are increasingly unaccountable.

Sifry argues that the rise (and potential fall) of Wikileaks heralds an “age of transparency,” one that can’t come fast enough, given the digital tools of control increasingly in the hands of our largest social institutions, both governmental and corporate (not to mention religious). And while it’s easy to fall into conspiratorial whispers given the subject, Sifry wisely does not – at least, not too much. He clearly has a point of view, and if you don’t agree with it, I doubt his book will change your mind. But it’s certainly worth reading, if your mind is open.

Sifry’s core argument: We can’t trust institutions if that trust doesn’t come with accountability. To wit:

“We should be demanding that the default setting for institutional power be ‘open,’ and when needed those same powers should be forced to argue when things need to remain closed. Right now, the default setting is ‘closed.’”

Sifry gives an overview of the Wikileaks case, and points out the hypocrisy of the US government’s position:

“If we promote the use of the Internet to overturn repressive regimes around the world, then we have to either accept the fact that these same methods may be used against our own regime—or make sure our own policies are beyond reproach.”

Sifry is referring to Wikileaks’ much-covered release of State Department cables, which has been condemned by pretty much the entire power structure of the US government (Assange and others face serious legal consequences, which are also detailed in the book). Even more chilling was the reaction by corporate America, which quickly closed ranks and cut off Wikileaks’ funding sources (Visa, Mastercard, Paypal) and server access (Amazon).

In short, Wikileaks stands accused, but not proven guilty. But from the point of view of large corporations eager to stay in the good graces of government, Wikileaks is guilty till proven innocent. And that’s a scary precedent. As Sifry puts it:

“If WikiLeaks can be prosecuted and convicted for its acts of journalism, then the foundations of freedom of the press in America are in serious trouble.”

and, quoting scholar Rebecca MacKinnon:

“Given that citizens are increasingly dependent on privately owned spaces for our politics and public discourse … the fight over how speech should be governed in a democracy is focused increasingly on questions of how private companies should or shouldn’t control speech conducted on and across their networks and platforms.”

But not all is lost. Sifry also chronicles a number of examples of how institutional misconduct has been uncovered and rectified by organizations similar to Wikileaks. Sifry believes that the Wikileaks genie is out of the bottle, and that transparency will ultimately win over secrecy.

But the book is a statement of belief, rather than a proof. Sifry argues that the open culture of the Internet must trump the closed, control-oriented culture of power-wielding institutions. And while I certainly agree with him, I also share his clear anxiety about whether such a world will actually come to be.

 

Other works I’ve reviewed:

Republic Lost by Larry Lessig (review)

Where Good Ideas Come From: A Natural History of Innovation by Steven Johnson (my review)

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (my review)

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)

Who Controls Our Data? A Puzzle.

By - March 11, 2012

Facebook claims the data we create inside Facebook is ours – that we own it. In fact, I confirmed this last week in an interview with Facebook VP David Fischer on stage at FM’s Signal P&G conference in Cincinnati. In the conversation, I asked Fischer if we owned our own data. He said yes.

Perhaps unfairly  (I’m pretty sure Fischer is not in charge of data policy), I followed up my question with another: If we own our own data, can we therefore take it out of Facebook and give it to, say, Google, so Google can use it to personalize our search results?

Fischer pondered that question, realized its implications, and backtracked. He wasn’t sure about that, and it turns out, it’s more complicated to answer that question – as recent stories about European data requests have revealed.*

I wasn’t planning on asking Fischer that question, but I think it came up because I’ve been pondering the implications of “you as the platform” quite a bit lately. If it’s *our* data in Facebook, why can’t we take it and use it on our terms to inform other services?

Because, it turns out, regardless of any company’s claim around who owns the data, the truth is, even if we could take our data and give it to another company, it’s not clear the receiving company could do anything with it. Things just aren’t set up that way. But what if they were?

The way things stand right now, our data is an asset held by companies, who then cut deals with each other to leverage that data (and, in some cases, to bundle it up as a service to us as consumers). Microsoft has a deal to use our Facebook data on Bing, for example. And of course, the inability of Facebook and Google to cut a data sharing deal back in 2009 is one of the major reasons Google built Google+. The two sides simply could not come to terms, and that failure has driven an escalating battle between major Internet companies to lock all of us into their data silos. With the cloud, it’s only getting worse (more on that in another post).

And it’s not fair to just pick on Facebook. The question should be asked of all services, I think. At least, of all services which claim that the data we give that service is, in fact, ours (many services share ownership, which is fine with me, as long as I don’t lose my rights.)

I have a ton of pictures up on Instagram now, for example (you own your own content there, according to the service’s terms). Why can’t I “share” that data with Google or Bing, so those pictures show up in my searches? Or with Picasa, where I store most of my personal photographs?

I have a ton of data inside an app called “AllSport GPS,” which tracks my runs, rides, and hikes. Why can’t I share that with Google, or Facebook, or some yet-to-be-developed app that monitors my health and well being?

Put another way, why do I have to wait for all these companies to cut data sharing deals through their corporate development offices? Sure, I could cut and paste all my data from one to the other, but really, who wants to do that?!

In the future, I hope we’ll be our own corp dev offices. An office of one, negotiating data deals on the fly, and on our own terms. It’ll take a new architecture and a new approach to sharing, but I think it’d open up all sorts of new vectors of value creation on the web.
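To make that concrete, here’s a minimal sketch of what a machine-readable “data deal” might look like if each of us could grant and revoke access on our own terms. Every type, field, and service name below is a hypothetical illustration, not any existing company’s API.

```typescript
// Hypothetical sketch: a user-held "data grant" that an individual could issue
// to one service, authorizing it to read data held by another. None of these
// types or endpoints exist today; they illustrate the idea of being your own
// "corp dev office," negotiating data deals on your own terms.

interface DataGrant {
  grantId: string;     // unique id so the grant can be audited or revoked
  owner: string;       // the person the data describes
  source: string;      // where the data lives today, e.g. "instagram" or "allsport-gps"
  recipient: string;   // who may use it, e.g. "search-personalization"
  scopes: string[];    // what may be read, e.g. ["photos.read", "workouts.read"]
  purpose: string;     // the terms: what the recipient may do with the data
  expires: Date;       // deals are time-limited and renegotiable
  revoked: boolean;
}

// Issue a grant on my own terms.
function issueGrant(owner: string, source: string, recipient: string,
                    scopes: string[], purpose: string, days: number): DataGrant {
  return {
    grantId: Math.random().toString(36).slice(2),
    owner, source, recipient, scopes, purpose,
    expires: new Date(Date.now() + days * 24 * 60 * 60 * 1000),
    revoked: false,
  };
}

// A recipient checks the grant before touching the data.
function mayUse(grant: DataGrant, recipient: string, scope: string): boolean {
  return !grant.revoked &&
         grant.recipient === recipient &&
         grant.scopes.includes(scope) &&
         grant.expires.getTime() > Date.now();
}

// Example: let a search engine personalize my results with my Instagram photos
// for 90 days, and nothing else.
const grant = issueGrant("john", "instagram", "search-personalization",
                         ["photos.read"], "personalize my search results only", 90);
console.log(mayUse(grant, "search-personalization", "photos.read")); // true
console.log(mayUse(grant, "ad-targeting", "photos.read"));           // false
```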

This is why I’m bullish on Singly and the Locker Project. They’re trying to solve a very big problem, and worse, one that most folks don’t even realize they have. Not an easy task, but an important one.

—–

*Thanks to European law, Facebook is making copies of users’ data available to them – but it makes exemptions to protect its intellectual property and trade secrets, and it won’t give data that “cannot be extracted from our platform in the absence of disproportionate effort.” What defines Facebook’s “trade secrets” and “intellectual property”? Well, there’s the catch. Just as with Google’s search algorithms, disclosure of the data Facebook is holding back would, in essence, destroy Facebook’s competitive edge, or so the company argues. Catch-22. I predict we’re going to see all this tested by services like Singly in the near future.

 

A Funny Thing Happened As I Was “Tracked”

By - February 27, 2012

I’m still in recovery mode after the wave of Apple-defenders inundated me with privacy-related comments over this past weekend, and I promise to continue the dialog – and admit where I may be wrong – once I feel I’ve properly grokked the story. The issue of privacy as it relates to the Internet is rather a long piece of yarn, and I’m only a small part of the way toward unraveling this particular sweater. (And yes, I know there are plenty of privacy absolutists rolling their eyes at me right now, but if you don’t want to hear my views after some real reporting and thinking on the subject, just move along….) If you want to peruse some of the recent stories on the subject I’ve been reading, you can start with the Signal post I just finished.

Meanwhile, I want to tell you a little story about advertising and tracking, which is at the heart of much of the current tempest.

While skiing last week at my home mountain of Mammoth (the only place in California with a decent snowpack), my family and I stayed at a Westin property. It’s a relatively new place, and pretty nice for Mammoth – which is more of  a “throw the kids in the station wagon and drive up” kind of resort. It’s not exactly Vail or Aspen – save for the skiing, which I dare say rivals any mountain in the US.

Anyway, I stayed at the Westin, and as such, I visited the Westin site many a time during my stay for various reasons (I also visited before I came, of course, to research stuff like whether it had a gym, restaurant menus, and the like).

Now, besides visiting the Westin site while at Mammoth, I also visited Amazon.com, because I was looking to buy a particular adapter for my SIM card. I’m eager to try out the new Galaxy Nexus, but the SIM in my iPhone is a different size, and to use it in the Nexus, I need an adapter.

I didn’t end up buying the adapter, because I got distracted, but I did visit Amazon’s page for the device.

Now, why am I telling you all this? Because after visiting those two sites (Westin and Amazon), I noticed the ads I was seeing as I cruised the web changed. A lot. In particular, on my own site, which is powered by the company I chair, FM:

The ad at the top is from Amazon, with a picture of the very thing I almost bought. Now, is that creepy, or is that useful?

The ad on the side is from Westin, offering me a free night or $500 credit if I book another Westin vacation. Again, creepy, or …potentially a benefit?

This is “tracking” at work, and while some of us find it creepy, I find it rather benign. Both those ads are very pertinent to *me*, and one (the Westin) might even save me a lot of money – I love the idea of getting a free night at a place I’ve already stayed at and enjoyed (and I am a Starwood member, and stay at a lot of other Westins, so heck, I might just use that offer sometime soon).

Regardless of those specifics, it’s hard to argue that these ads are *worse* than the undifferentiated slop that once filled up ad space across the web. And that’s pretty much the point of cookie-driven advertising – that it uses our data to offer up marketing messages that are, in the end, better than if the advertisers didn’t have the data in the first place.
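For readers who want to see the mechanics, here’s a minimal sketch of how that kind of cookie-driven retargeting works: a third-party ad server drops an identifier in the browser once, then reads it back from every site that embeds its ad tag. This is a toy illustration only – the endpoint, cookie name, and “segment” logic are my own assumptions, not any actual ad network’s code.

```typescript
import * as http from "http";
import { randomUUID } from "crypto";

// Toy illustration of third-party ("cookie-driven") ad tracking. An ad server
// embedded on many sites sets one cookie in my browser; every later ad request
// carries it back, letting the network remember that I looked at a SIM adapter
// on one site and a hotel on another. All names here are hypothetical.

const segmentsByVisitor = new Map<string, Set<string>>(); // visitorId -> interests

function readVisitorId(cookieHeader: string | undefined): string | undefined {
  const match = cookieHeader?.match(/(?:^|;\s*)visitor_id=([^;]+)/);
  return match?.[1];
}

const server = http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://ads.example");
  let visitorId = readVisitorId(req.headers.cookie);

  if (!visitorId) {
    // First time this browser has seen the ad network: mint an id that will
    // ride along on every future request to any page carrying the ad tag.
    visitorId = randomUUID();
    res.setHeader("Set-Cookie", `visitor_id=${visitorId}; Path=/; Max-Age=31536000`);
  }

  // The embedding page tells the ad server what it's about, e.g.
  // /ad?topic=sim-adapter on a retail page, /ad?topic=hotel on a travel page.
  const topic = url.searchParams.get("topic");
  if (topic) {
    const segments = segmentsByVisitor.get(visitorId) ?? new Set<string>();
    segments.add(topic);
    segmentsByVisitor.set(visitorId, segments);
  }

  // Later, on a completely different site, serve an ad based on what we remember.
  const remembered = [...(segmentsByVisitor.get(visitorId) ?? [])];
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(remembered.length
    ? `Ad for: ${remembered[remembered.length - 1]}`
    : "Generic ad");
});

server.listen(8080);
```

An opt-out works the same way in reverse: the network sets a cookie that tells it to stop building (or reading) the profile, which is what the “ad choices” links described below hand you.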

After all, Facebook and Google offer up exactly the same kind of ads on their owned and operated domains – ads that are relevant to you – based on data you provided to them (the search term, or your Facebook profile). Somehow that’s OK, but when it’s done across the open web – well, then it’s “creepy.”

The problem, I think, is that we generally don’t trust these third-party advertising networks – we think they are doing nefarious things with our data, somehow screwing us, tracking us like hunted animals, creating vast profiles that could fall into the wrong hands. And the ad industry needs to address this issue of trust.

If you look at both those ads, it turns out the industry is doing just that. Each of the ads has an “ad choices” logo you can click to find out what’s going on behind the ad. Here’s what I saw when I clicked on Amazon’s “privacy” link:

This page clearly explains why I’m seeing the ad, and offers me an explicit choice to opt out of seeing ads like this in the future. Seems fair to me.

Here’s what I saw when I clicked on the “ad choices” logo for the Westin ad:

That’s a popover, telling me that my browsing activity (I assume my multiple visits to Westin.com) has informed the offer. It tells me that an ad network owned by Akamai is behind the tracking and trafficking of the ad. And it offers me more links, should I want to learn more. I clicked on the “More information & opt out options” link, and saw this from Evidon, which Akamai uses to power its opt out and other programs:

This page offers a prominent opt out for the companies who served me the ad. It even offers a link to Ghostery, a service which I’ve used in the past to track who’s dropping cookies and such on my browser.

Now, I’m not arguing that this system is perfect, but it’s certainly quite a step forward from where we were a year ago.

And no, I didn’t opt out of anything. Not because I founded an Independent web advertising and content company (FM), but because frankly, I think the ads I’m getting are better as a result of this ecosystem. And I’m getting benefits I wouldn’t have had before (a free night at the Westin, a reminder to go get that SIM adapter I hadn’t yet bought). And, frankly, because this is all happening on the Independent web, ensuring that small sites like mine get a chance to benefit from the same kind of value that Facebook and Google already have as “first party” websites – the value of my data. (More on this point in later posts, I am sure).

Now, if these companies end up doing evil, wrongheaded, or plain stupid things with my data, I’m going to be the first to opt out. And there’s plenty more we have to do to get this ecosystem right. But I thought it instructive to lay out how it’s working so far. And so far, I don’t find it anything but benign, if not actually useful. What do you think?

Obama’s Framework for “Consumer Data Privacy” And My “Data Bill of Rights”

By - February 26, 2012

It sort of feels like “wayback week” for me here at Searchblog, as I get caught up on the week’s news after my vacation. Late last week the Obama administration announced “Consumer Data Privacy In A Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy.”

The document runs nearly 50 pages, but turns on a “Privacy Bill of Rights” – and when I read that phrase, it reminded me of a post I did four years ago: The Data Bill of Rights.

I thought I’d compare what I wrote with what the Obama administration is proposing.

First, the Administration’s key points:

Individual Control: Consumers have a right to exercise control over what personal data companies collect from them and how they use it.

Transparency: Consumers have a right to easily understandable and accessible information about privacy and security practices.

Respect for Context: Consumers have a right to expect that companies will collect, use, and disclose personal data in ways that are consistent with the context in which consumers provide the data.

Security: Consumers have a right to secure and responsible handling of personal data.

Access and Accuracy: Consumers have a right to access and correct personal data in usable formats, in a manner that is appropriate to the sensitivity of the data and the risk of adverse consequences to consumers if the data is inaccurate.

Focused Collection: Consumers have a right to reasonable limits on the personal data that companies collect and retain.

Accountability: Consumers have a right to have personal data handled by companies with appropriate measures in place to assure they adhere to the Consumer Privacy Bill of Rights.

And now my “Data Bill of Rights” from 2007:

- Data Portability. We can take copies of that data out of the company’s coffers and offer it to others or just keep copies for ourselves.

- Data Editing. We can request deletions, editing, clarifications of our data for accuracy and privacy.

- Data Anonymity. We can request that our data not be used, cognizant of the fact that that may mean services are unavailable to us.

- Data Use. We have rights to know how our data is being used inside a company.

- Data Value. The right to sell our data to the highest bidder.

- Data Permissions. The right to set permissions as to who might use/benefit from/have access to our data.

Comparing the two, it seems the Administration has not addressed the issue of what I call portability at all, which I think is a bummer. Nor does it consider the idea of Value, which I think the market is going to address over time. It does address what I call editing, anonymity (what I should have called “opt out”), use, and permissions.

What the administration added that I did not have is “security” – the right to know your data is secure (I think I took that for granted) – and “Focused Collection” and “Respect For Context,” which I agree with: don’t collect data for data’s sake, and we should have the right to expect that data collected about us is used in its proper context.

Given how much this issue is in the news lately, as well as the overwhelming response to my post last Friday about Google and Apple, I’m getting as smart as I can on these issues.

Further coverage of the Administration’s move at RWW (Obama Administration Sides with Consumers in Online Privacy Debate), Paid Content (Big Tech, Obama And The Politics Of Privacy), and Ad Age, which is skeptical: Did The White House Just Thread The Needle On Privacy?

A Sad State of Internet Affairs: The Journal on Google, Apple, and “Privacy”

By - February 16, 2012

The news alert from the Wall St. Journal hit my phone about an hour ago, pulling me away from tasting “Texas Bourbon” in San Antonio to sit down and grok this headline: Google’s iPhone Tracking.

Now, the headline certainly is attention-grabbing, but the news alert email had a more sinister headline: “Google Circumvented Web-Privacy Safeguards.”

Wow! What’s going on here?

Turns out, no one looks good in this story, but certainly the Journal feels like they’ve got Google in a “gotcha” moment. As usual, I think there’s a lot more to the story – and while I’m Thinking Out Loud right now, and pretty sure there’s more here than I can currently grok, there’s something I just gotta say.

First, the details.  Here’s the lead in the Journal’s story, which requires a login/registration:

“Google Inc. and other advertising companies have been bypassing the privacy settings of millions of people using Apple Inc.’s Web browser on their iPhones and computers—tracking the Web-browsing habits of people who intended for that kind of monitoring to be blocked.”

Now, from what I can tell, the first part of that story is true – Google and many others have figured out ways to get around Apple’s default settings on Safari in iOS – the only browser that comes with iOS, a browser that, in my experience, has never asked me what kind of privacy settings I wanted, nor did it ask if I wanted to share my data with anyone else (I do, it turns out, for any number of perfectly good reasons). Apple assumes that I agree with Apple’s point of view on “privacy,” which, I must say, is ridiculous on its face, because the idea of a large corporation (Apple is the largest, in fact) determining in advance what I might want to do with my data is pretty much the opposite of “privacy.”

Then again, Apple decided I hated Flash, too, so I shouldn’t be that surprised, right?

But to the point, Google circumvented Safari’s default settings by using some trickery described in this WSJ blog post, which reports the main reason Google did what it did was so that it could know if a user was a Google+ member, and if so (or even if not so), it could show that user Google+ enhanced ads via AdSense.
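For what it’s worth, the trick described in coverage at the time was roughly this: Safari’s default blocked third-party cookies, but made an exception when the user appeared to interact with the third party – for example, by submitting a form to it – so an invisible, auto-submitted form inside the ad frame was enough to get a cookie set. The sketch below is my own reconstruction of that general class of workaround, not Google’s actual code; the domain and field names are made up, and the loophole it relied on was closed long ago.

```typescript
// Illustrative reconstruction only (circa-2012 Safari behavior, long since fixed).
// An invisible form, auto-submitted from inside an ad iframe, triggered Safari's
// "the user interacted with this site" exception, so the third-party domain's
// Set-Cookie response header was honored despite the default block.
function setThirdPartyCookieViaHiddenForm(): void {
  const form = document.createElement("form");
  form.method = "POST";
  form.action = "https://ads.example.com/interaction"; // hypothetical endpoint
  form.style.display = "none";

  const field = document.createElement("input");
  field.type = "hidden";
  field.name = "noop";
  field.value = "1";
  form.appendChild(field);

  document.body.appendChild(form);
  form.submit(); // treated as user interaction by Safari at the time
}
```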

In short, Apple’s mobile version of Safari broke with common web practice,  and as a result, it broke Google’s normal approach to engaging with consumers. Was Google’s “normal approach” wrong? Well, I suppose that’s a debate worth having – it’s currently standard practice and the backbone of the entire web advertising ecosystem –  but the Journal doesn’t bother to go into those details. One can debate whether setting cookies should happen by default – but the fact is, that’s how it’s done on the open web.

The Journal article does later acknowledge, though not in a way that a reasonable reader would interpret as meaningful, that the mobile version of Safari has “default” (i.e., not user-activated) settings that prevent Google and others (like ad giant WPP) from tracking user behavior the way they do on the “normal” Web. That’s a far cry from the Journal’s lead paragraph, which, again, states Google bypassed “the privacy settings of millions of people.” So when is a privacy setting really a privacy setting, I wonder? When Apple makes it so?

Since this story broke, Google has discontinued its practice, making it look even worse, of course.

But let’s step back a second here and ask: why do you think Apple has made it impossible for advertising-driven companies like Google to execute what are industry standard practices on the open web (dropping cookies and tracking behavior so as to provide relevant services and advertising)? Do you think it’s because Apple cares deeply about your privacy?

Really?

Or perhaps it’s because Apple considers anyone using iOS, even if they’re browsing the web, as “Apple’s customer,” and wants to throttle potential competitors, ensuring that it’s impossible to access “Apple’s” audiences using iOS in any sophisticated fashion? Might it be possible that Apple is using data as its weapon, dressed up in the PR-friendly clothing of “privacy protection” for users?

That’s at least a credible idea, I’d argue.

I don’t know, but when I bought an iPhone, I didn’t think I was signing up as an active recruit in Apple’s war on the open web. I just thought I was getting “the Internet in my pocket” – which was Apple’s initial marketing pitch for the device. What I didn’t realize was that it was “the Internet, as Apple wishes to understand it, in my pocket.”

It’d be nice if the Journal wasn’t so caught up in its own “privacy scoop” that it paused to wonder if perhaps Apple has an agenda here as well. I’m not arguing Google doesn’t have an agenda – it clearly does. I’m as saddened as the next guy about how Google has broken search in its relentless pursuit of beating Facebook, among others.

In this case, what Google and others have done sure sounds wrong – if you’re going to resort to tricking a browser into offering up information designated by default as private, you need to somehow message the user and explain what’s going on. Then again, on the open web, you don’t have to – most browsers let you set cookies by default. In iOS within Safari, perhaps such messaging is technically impossible; I don’t know. But these shenanigans are predictable, given the dynamic of the current food fight between Google, Apple, Facebook, and others. It’s one more example of the sad state of the Internet given the war between the Internet Big Five. And it’s only going to get worse before, I hope, it gets better again.

Now, here’s my caveat: I haven’t been able to do any reporting on this, given it’s 11 pm in Texas and I’ve got meetings in the morning. But I’m sure curious as to the real story here. I don’t think the sensational headlines from the Journal get to the core of it. I’ll depend on you, fair readers, to enlighten us all on what you think is really going on.

China Hacking: Here We Go

By - February 13, 2012

Waaaay back in January of this year, in my annual predictions, I offered a conjecture that seemed pretty orthogonal to my usual focus:

“China will be caught spying on US corporations, especially tech and commodity companies. Somewhat oddly, no one will (seem to) care.”

Well, I just got this WSJ news alert, which reports:

Using seven passwords stolen from top Nortel executives, including the chief executive, the hackers—who appeared to be working in China—penetrated Nortel’s computers at least as far back as 2000 and over the years downloaded technical papers, research-and-development reports, business plans, employee emails and other documents.

The hackers also hid spying software so deeply within some employees’ computers that it took investigators years to realize the pervasiveness of the problem.

Now, before I trumpet my prognosticative abilities too loudly, let’s see if … anybody cares. At all. And if you’re wondering why I even bothered to make such a prediction, well, it’s because I think it’s going to prove important….eventually.

Is Our Republic Lost?

By -

Over the weekend I finished Larry Lessig’s most recent (and ambitious) book, Republic, Lost: How Money Corrupts Congress–and a Plan to Stop It. Amongst those of us who considered Lessig our foremost voice on issues of Internet policy, his abrupt pivot to focus on government corruption was both disorienting and disheartening: here was our best Internet thinker, now tilting at government windmills. I mean, fix government? Take the money out of politics? Better to treat all that as damage, and route around it, right? Isn’t that what the Internet is supposed to be all about?

Well, maybe. But after the wake up call that was PIPA/SOPA, it’s become clear why Lessig decided to stop focusing on battles he felt he couldn’t win (reforming copyright law, for example), and instead aim his intellect at the root causes of why those battles were fruitless. As he writes in his preface:

I was driven to this shift when I became convinced that the questions I was addressing in the fields of copyright and Internet policy depended upon resolving the policy questions – the corruption – that I address (in Republic Lost).

Lessig, ever the lawyer at heart, presents his book as an argument, as well as a call to arms (more on that at the end). Early on he declares our country ruined, “poisoned” by an ineffective government, self-serving corporations, and an indifferent public. To be honest, it was hard to get through the first couple of chapters of Republic Lost without feeling like I was being lectured to on a subject I already acknowledged: Yes, we have a corrupt system; yes, lobbyists are in league with politicians to bend the law toward their clients’ bottom lines; and yes, we should really do something about it.

But Lessig does make a promise, and in the book he keeps it: To identify and detail the “root” of the problem, and offer a prescription (or four) to address it. And yes, that root is corruption, in particular the corruption of money, but Lessig takes pains to define a particular kind of corruption. Contrary to popular sentiment, Lessig argues, special interest money is not directly buying votes (after all, that is illegal). Instead, an intricate “gift economy” has developed in Washington, one that is carefully cultivated by all involved, and driven by the incessant need of politicians to raise money so as to ensure re-election.

Lessig calls this “dependency corruption” – politicians are dependent on major donors not only to be elected, but to live a lifestyle befitting a US Congressperson. Lessig also points out how more than half of our representatives end up as lobbyists after serving – at salaries two to ten times those of a typical Congressperson (he also points out that we grossly underpay our representatives, compared to how they’d be remunerated for their talents in the private sector).

Lessig likens this dependency corruption to alcoholism – it “develops over time; it sets a pattern of interaction that builds upon itself; it develops a resistance to breaking that pattern; it feeds a need that some find easier to resist than others; satisfying that need creates its own reward; that reward makes giving up the dependency difficult; for some, it makes it impossible.”

In short, Lessig says Washington DC is full of addicts, and if we’re to fix anything – health care, energy policy, education, social security, financial markets – we first have to address our politicians’ addiction to money, and our economic system’s enablement of that addiction. Because, as Lessig demonstrates in several chapters devoted to broken food and energy markets, broken schools, and broken financial systems, the problem isn’t that we can’t fix the problem. The problem, Lessig argues, is that we’re paying attention to the wrong problem.

Lessig’s argument essentially concludes that we’ve created a system of government that rewards policy failure – the bigger the issue, the stronger the lobbyists on one or even both sides, forcing Congress into a position of moral hazard: it can ensure the most donations if it threatens regulation one way or the other, thereby collecting from both sides. Lessig salts his argument with example after example of how the system fails at real reform due to the “money dance” each congressperson must perform.

It’s pretty depressing stuff. And yet – there are no truly evil characters here. In fact, Lessig makes quite the point of this: we face a corruption of “decent souls,” of “good people working in a corrupted system.”

Despite Lessig’s avowed liberal views (combined with his conservative, Reagan-era past), I could imagine that  Republic Lost could as easily be embraced by Tea Party fanatics as by Occupy Wall Street organizers. He focuses chapters on how “so damn much money” defeats the ends of both the left and the right, for example. And at times the book reads like an indictment of the Obama administration – Lessig, like many of us, believed that Obama was truly going to change Washington, then watched aghast as the new administration executed the same political playbook as every other career politician.

In the final section of his book, Lessig offers several plans to force fundamental campaign finance reform – the kind of reform that the majority of us seem to want, but that never seems to actually happen. Lessig acknowledges how unlikely it is that Congress would vote itself out of a system to which it is addicted, and offers some political gymnastics that have almost no chance of working (running a candidate for President who vetoes everything until campaign finance reform is passed, then promises to quit, for example).

The plan that has gotten the most attention is the “Grant and Franklin Project” – a plan to finance all candidacies for Congressional office through public funds. He suggests that the first fifty dollars of any Federal tax revenue (per person per year) be retained to fund political campaigns, then allocated by each of us as a voucher of sorts. In addition, we’d all be able to commit another $100 of our own money to any candidate we choose. Uncommitted funds go to our parties (if we do not actively wish to use our voucher). Any candidate can tap these resources, but only if that candidate agrees to take only vouchers and $100 contributions (bye bye, corporate and PAC money).  Lessig calculates the revenues of this plan would be well above the billions spent to elect politicians in our current system, and argues that the savings in terms of government pork would pay forward the investment many times over.
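A quick back-of-the-envelope check on that claim (the $50 voucher and optional $100 contribution are Lessig’s figures as described above; the taxpayer headcount is my own assumed round number, not a figure from the book):

```typescript
// Back-of-the-envelope only. The $50 voucher and optional $100 add-on come from
// the plan as described above; ~100 million taxpayers is an assumed round figure.
const taxpayers = 100_000_000;
const voucherPerPerson = 50;
const publicPool = taxpayers * voucherPerPerson; // $5,000,000,000 per year
console.log(`Voucher pool: $${publicPool.toLocaleString()} per year, before any voluntary $100 contributions`);
```

Even at that deliberately conservative headcount, the pool lands in the billions – which is the comparison Lessig is drawing against what is currently spent to elect Congress.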

Lessig ends his book with a call to action – asking us to become “rootstrikers,” to get involved in bringing about the Grant and Franklin Project, or something like it (he goes into detail on a Constitutional convention as a means to the end, for example). And it’s here where I begin to lose the thread. On the one hand, I’m deeply frustrated by the problem Lessig outlines (I wrote about it here On The Problem of Money, Politics, and SOPA), but I’m also suspicious of any new “group” that I need to join – I find “activist” organizations tend to tilt toward unsustainable rhetoric. I’m not an activist by nature, but then again, perhaps it’s not activism Lessig is asking for. Perhaps it’s simply active citizenship.

I could see myself getting behind that. How about you?


Do You Think The US Government Is Monitoring Social Media?

By - February 03, 2012

I had the news on in the background while performing morning ablutions. It was tuned to CBS This Morning – Charlie Rose has recently joined the lineup and my wife, a former news producer, favors both Rose and the Tiffany Network. But the piece that was running as I washed the sleep from my eyes was simply unbelievable.

It was about the two unfortunate British tourists detained by Homeland Security over jokes on Twitter about “destroying America” (a colloquialism for partying – think “tear up the town”) and “digging up Marilyn Monroe” whilst in Hollywood. DHS cuffed the poor kids and tossed them in a detention center with “inner city criminals,” according to reports, then sent them back home. Access denied. (I tweeted the story when it happened, then forgot about it.)

Silly stuff, but also serious – I mean, if DHS can’t tell a 140-character colloquialism from a real threat….(Slap Forehead Now). CBS had managed to get an interview with the unfortunate couple, who were back in the UK and will most likely never be able to travel here again.

The interview wasn’t what woke me up this morning; it was what CBS’s “Terrorism Expert” had to say afterwards. Apparently Homeland Security claims it is NOT monitoring Twitter and other social media; instead, it got a “tip” about the tweets, and that’s why the couple was detained. The on-air “expert,” who used to run counter-terror for the LAPD and was an official at DHS as well, was asked point blank if the US Government was “monitoring social media.” He flatly denied it. (His comments, oddly, were cut out of the piece that’s now on the web and embedded above).

I do not believe him. Do you? And if they really are not – why not? Shouldn’t they be? I was curious about your thoughts, so here’s a poll:

And then, here’s the next one. Regardless of whether you think it actually IS monitoring….