The PRISMner’s Dilemma

Sometimes when you aren’t sure what you have to say about something, you should just start talking about it. That’s how I feel about the evolving PRISM story – it’s so damn big, I don’t feel like I’ve quite gotten my head around it. Then again, I realize I’ve been thinking about this stuff for more than two decades – I assigned and edited a story about massive government data overreach in the first issue of Wired, for God’s sake, and we’re having our 20th anniversary party this Saturday. Shit howdy, back then I felt like I was pissing into the wind – was I just a 27-year-old conspiracy theorist?

Um, no. We were just a bit ahead of ourselves at Wired back in the day.  Now, it feels like we’re in the middle of a hurricane. Just today I spoke to a senior executive at a Very Large Internet Company who complained about spending way too much time dealing with PRISM. Microsoft just posted a missive which said, in essence, “We think this sucks and we sure wish the US government would get its shit together.” I can only imagine the war rooms at Facebook, Amazon, Google, Twitter, and other major Internet companies – PRISM is putting them directly at odds with the very currency of their business: Consumer trust.

And I’m fucking thrilled about all of this. Because finally, the core issue of data rights is coming to the fore of societal conversation. Here’s what I wrote about the issue back in 2005, in The Search:

The fact is, massive storehouses of personally identifiable information now exist. But our culture has yet to truly grasp the implications of all that information, much less protect itself from potential misuse….

Do you trust the companies you interact with to never read your mail, or never to examine your clickstream without your permission? More to the point, do you trust them to never turn that information over to someone else who might want it—for example, the government? If your answer is yes (and certainly, given the trade-offs of not using the service at all, it’s a reasonable answer), you owe it to yourself to at least read up on the USA PATRIOT Act, a federal law enacted in the wake of the 9/11 tragedy.

I then go into the details of PATRIOT, which has only been strengthened since 2005, and conclude:

One might argue that while the PATRIOT Act is scary, in times of war citizens must always be willing to balance civil liberties with national security. Most of us might be willing to agree to such a framework in a presearch world, but the implications of such broad government authority are chilling given the world in which we now live—a world where our every digital track, once lost in the blowing dust of a presearch world, can now be tagged, recorded, and held in the amber of a perpetual index.

So here we are, having the conversation at long last. I plan to start posting about it more, in particular now that my co-author Sara M. Watson is about to graduate from Oxford and join the Berkman Center at Harvard (damn, I keep good company).

I’ve got so many posts brewing in me about all of this. But I wanted to end this one with another longish excerpt from my last book, one I think encapsulates the issues major Internet platforms are facing now that programs like PRISM have become the focal point of a contentious global conversation.

In early 2005, I sat down with Sergey Brin and asked what he thinks of the PATRIOT Act, and whether Google has a stance on its implications. His response: “I have not read the PATRIOT Act.” I explain the various issues at hand, and Brin listens carefully. “I think some of these concerns are overstated,” he begins. “There has never been an incident that I am aware of where any search company, or Google for that matter, has somehow divulged information about a searcher.” I remind him that had there been such a case, he would be legally required to answer in just this way. That stops him for a moment, as he realizes that his very answer, which I believe was in earnest, could be taken as evasive. If Google had indeed been required to give information over to the government, certainly he would not be able to tell either the suspect or an inquiring journalist. He then continues. “At the very least, [the government] ought to give you a sense of the nature of the request,” he said. “But I don’t view this as a realistic issue, personally. If it became a problem, we could change our policy on it.”

It’s Officially Now A Problem, Sergey. But it turns out, it’s not so easy to just change policy.

I can’t wait to watch this unfold. It’s about time we leaned in, so to speak.

16 thoughts on “The PRISMner’s Dilemma”

  1. The poor thinking that went into these initiatives and their implementation is telling for a number of reasons:

    1. It’s unconstitutional on its face (rationalizations won’t work).

    2. Thinking that a complex of programs requiring large numbers of personnel for operation (including sub-contractors) could stay secret is beyond naive.

    3. Once known, it was inevitable that it would (quite rightly) damage U.S. credibility.

    And perhaps more important than any of these… it reflects a complete disconnect from what so much of the world is recognizing (whether we believe it true or not)…

    From “Confessions of an Economic Hit Man”:

    A conversation with a young Indonesian student:

    “Stop being so greedy,” she said, “and so selfish. Realize that there is more to the world than your big houses and fancy stores. People are starving and you worry about oil for your cars. Babies are dying of thirst and you search the fashion magazines for the latest styles. Nations like ours are drowning in poverty, but your people don’t even hear our cries for help. You shut your ears to the voices of those who try to tell you these things. You label them radicals or Communists. You must open your hearts to the poor and downtrodden, instead of driving them further into poverty and servitude.”

    To deny that this perspective is prevalent… and has at least some validity… whatever the claimed intentions for our previous or current interventions may have been… is foolish.

    If it can be argued with some legitimacy that there should be some form of monitoring of Internet activity, then the questions become… “Who could we trust to conduct it?” and “What are its boundaries?”

    What could be called a justice imperative* arising with the advance of technology and communications makes narrow control of these capabilities very, very dangerous… and stupid.

    *technology makes consensus increasingly necessary and an informed and involved, enlightened electorate even more so! And this is for a simple reason… there’s a relationship between the Ultimatum game, civilization and technology which suggests that as technology and complexity increase it takes fewer and fewer discontented to ‘tip-over-the-chessboard’.

    1. And… monomaniacally… I believe an unburdened microtransaction and the universal network (outside and independent of any banking and credit-creation system, via a particularly designed Internet wallet) is a vital element for both oversight and feedback in the complex, chaotic system known as human civilization… which, frankly, DOESN’T SCALE VERY WELL… and won’t without serious attention.

  2. So many thoughts here… (apologies for length)

    Yes, there’s officially a problem. Many problems. And even for people who think it’s ok that “privacy is dead,” we have a problem because David Brin’s dream of symmetrical transparency feels a very long way off. The current situation re data is deeply asymmetrical. A few parties have something approximating Total Information Awareness and the rest are flying blind.

    First step done. We’ve admitted there’s a problem. But it’s not clear how these problems get fixed. Business models, politics, and code/technical infrastructures all support (or are vulnerable to) mass surveillance. And all three are heavily entrenched.

    Politics?
    I had Hope… but now it seems naive to think that a single person could be the answer to our democracy problems, of which our surveillance/privacy politics are just one troubling offshoot. Lessig and others are working in the right direction – deep reform. The once-a-century housecleaning that every democracy/empire needs in order to readjust. It’ll be hell to make this happen, and worse if it doesn’t.

    Code?
    I might as well skip straight to business models. $$$ drives code. That may be an overstatement, but how many startups — with any traction — aren’t focused on vacuuming up as much data as possible? And established players? Discussions I’ve had with folks like Yonatan Zunger, Chief Architect at Google (who I like and respect), make me think that Google and similar cos won’t be leading the way on technical changes that would give users a high degree of confidence that their data is technically, mathematically secure. I’ve looked at alternatives to the major internet players (Apple/Goog/MSFT) and at making an attempt to become the privacy equivalent of a vegan (a prigan? 01gan?), but it’s daunting… Have you looked at the setup that Stallman is running recently?!? I value privacy, but I’d have to give up my sanity to stay engaged and run a system like that. I’d also have to go back and major in CompSci. Whatever the answer is, my mom will need to be able to use it – and she’s no crypto-anarchist coder.

    Business models?
    I don’t buy into the “privacy is dead” meme. At a conference a few years ago I suggested that people will respect their own privacy to the extent that they get paid for it. The idea was shot down by almost everyone, but I think there’s room for a solution that will reward people with money, not free services. There are a couple of key dynamics at play: (1) the amount and value of the data are increasing. When Google rolled out, there wasn’t a ton of value to be had there – just a few people searching on the newly born net. Now they’ve got data on our searches, emails, everything about our phones (location, etc…) and soon Glass. And now that Kurzweil is at the wheel of the engineering team, I don’t think it’s too much to introduce previously wild thoughts, Brains in the Cloud and the like. I don’t want a FISA court rubber-stamping away the privacy of my consciousness… (2) I believe that the core free services of the web (email, search, social) are getting easier to replicate – like when Japanese cars came on the scene in the ’70s and ’80s. A lot of copying what’s great, improving what’s bad. So there’s room to create a clone and set it on top of a technical infrastructure that treats data/privacy differently from the start.

    Anyone remember AllAdvantage? Where oh where did the infomediaries go, and when are they coming back? Our current predicament re data/privacy is due to a few decades’ worth of choices, and we have the power to make different choices. There’s another approach to data: an approach that rewards the individual, increases returns for advertisers, and keeps data as private as the person wants it.

    One option: an infomediary that allows people to store all of their data in one place but employs a range of privacy protections that are built in from the start (citation below), ideally building toward mathematically provable privacy and security. Provable privacy – I think that’s the goal we should be shooting for (at least in a world where we don’t trust the government, where we want to trust the Googles of the world, but can’t (gag order)). Picture a data pool that can be queried by anyone with the proper permissions to serve ads, personalize content, etc… but the party making the query doesn’t need to know anything about the person, and they’ll be happy so long as they get a cryptographically verified (trusted) response.

    Our current solution is analogous to handing over a full credit report to a landlord (w/ the NSA looking over their shoulder) – do they really need to know all those details, or do they just need to know that I’m sufficiently trustworthy to rent to? Domain-specific trust, here the trust to rent a house, can be generated by verified reputation metrics (where parameters could be defined by the querying agent, an industry standard, whatever) or by a few select pieces of actual data (FICO score + info on (a) Have you ever not paid rent? Y/N, (b) any criminal convictions? Y/N, (c) etc…). Not the entire 23-page report. (A rough sketch of what that kind of query could look like is at the end of this comment.)

    Whether this idea wins or something else saves the day – we clearly can (and must) do better. Thanks for posting on this and… for anyone who made it all the way to the bottom: thanks for reading 🙂

    -Chris

    Some interesting research on decentralized data stores (HT Arvind Narayanan)
    http://dig.csail.mit.edu/2012/WWW-DUMW/papers/dumw2012_submission_5.pdf

    and another one on a privacy-enhancing infomediary business model: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.202.4162&rep=rep1&type=pdf
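
    To make the attestation idea above a bit more concrete, here is a minimal sketch of a domain-specific trust query. Everything in it is invented for illustration: the Infomediary and Attestation names, the suitable_tenant query, and the use of an HMAC as a stand-in for the public-key signature a real “provable privacy” scheme would need. It is a sketch of the pattern, not any existing system.

    ```python
    # Hypothetical sketch only: names and API are invented, and an HMAC stands in
    # for the public-key signature a real "provable privacy" scheme would need.
    import hashlib
    import hmac
    import json
    from dataclasses import dataclass


    @dataclass
    class Attestation:
        query: str       # what was asked, e.g. "suitable_tenant"
        answer: bool     # the only fact the asker learns
        signature: str   # lets the asker check that the infomediary vouched for it


    class Infomediary:
        """Holds a person's data; releases verdicts, never the underlying records."""

        def __init__(self, signing_key: bytes, personal_data: dict):
            self._key = signing_key
            self._data = personal_data  # e.g. FICO score, rent history, convictions

        def _sign(self, payload: dict) -> str:
            blob = json.dumps(payload, sort_keys=True).encode()
            return hmac.new(self._key, blob, hashlib.sha256).hexdigest()

        def suitable_tenant(self, min_fico: int = 650) -> Attestation:
            # The screening policy runs inside the data holder; the landlord
            # only ever sees the signed yes/no verdict below.
            ok = (self._data["fico"] >= min_fico
                  and not self._data["missed_rent"]
                  and not self._data["criminal_conviction"])
            return Attestation(query="suitable_tenant", answer=ok,
                               signature=self._sign({"query": "suitable_tenant",
                                                     "answer": ok}))


    # A landlord learns "yes, rent to this person" and nothing else.
    broker = Infomediary(signing_key=b"demo-key",
                         personal_data={"fico": 710, "missed_rent": False,
                                        "criminal_conviction": False})
    print(broker.suitable_tenant().answer)  # True
    ```

    The only point being illustrated is the one above: the query answers a narrow, domain-specific question, and the querying party trusts the signed verdict rather than inspecting the raw data.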

    1. I for one DID make it to the end of your comment, and it’s got a lot of great ideas!

      Speaking of David Brin… I’m in fairly regular contact with him, and he’s supportive of my patented model for enabling the microtransaction, which is a fundamental capability all by itself (he’s also expressed a readiness to speak to any potential investors I find re the microtransaction’s import and security issues).

      But along with that essential capability is the idea that the network facilitating it can perhaps play a role as a repository for certain user-owned and held data.

      I’m starting to wonder if it’s that fear (on many levels this network and capability empowers the individual vs. large forces in both commercial and governmental areas) that is why I’m not hearing from these large players.

      I don’t believe that will ultimately work… the simple idea is not easy to bury. And at least some are paying attention… but I’m getting pretty fed up with it being ignored by the mainstream.

  3. From Salon this morning:

    The Internet’s greatest disruptive innovation: Inequality… The logical consequences of Silicon Valley capitalism: Social stratification and class antagonism

    http://www.salon.com/2013/07/19/the_internets_greatest_disruptive_innovation_inequality/

    Technical innovation is great… but the lack of interest or concern for social innovation (the design of how a technology interacts with society) is disastrous.

    And this has a lot more to do with the public’s anger about PRISM and related programs than is being discussed.

  4. I have been thinking about your conversation with Brin from your 2005 book, ever since all this news broke. Given that the PATRIOT Act had passed four years earlier, it’s not unreasonable to conclude that data from Google had actually been turned over to PRISM or a PRISM-like entity by 2005. And so it makes sense to not admit to that, as per the law. But Brin didn’t just not admit to anything. He went out of his way to say that he hadn’t even read the law. Does the law say that you have to say that you haven’t read the law? If not, then that statement makes me trust Google even less than if they simply had handed over data and then lied about it, in accordance with the law.

      1. Oh, I agree with you that he has read it by now. But let’s talk about 2005. Let’s make the now very reasonable assumption that Google was already participating in some form of PRISM in 2005. And there’s no reason to believe that Brin didn’t know about that participation in 2005. That leaves one of two possibilities. Either (a) Brin had read it, but lied about not having read it even when he didn’t have to, or (b) Brin hadn’t read it, and without having read it, sanctioned the participation in the program anyway (!!).

        I don’t know which one is worse, but both possibilities cause me to lose much more trust in the company than simply the fact that they have participated in the PRISM program. Having to participate is not their fault, and that they did participate does not, personally, cause me to lose any trust in them. Having to deny that they are participating is also not their fault, if that’s what the law says. But saying that you haven’t even read the law is not actually denying that you are participating, so there is no reason to have to say that you hadn’t read the law. So to really not have read it, and be participating anyway… that, to me, is where the big trust erosion comes from. Wouldn’t you or your readers concur?

      2. I don’t know the details of how PRISM or other programs like it were operating, or what they were disclosing, under PATRIOT back then. I imagine in either case he has plausible deniability, given that he was never CEO.

      3. Mmm, I suppose. Plausible deniability. Ok. But that’s the kind of language one uses when one is worried about getting hauled to court. I’m not trying to haul anyone to court. Rather, I’m talking about what causes me, personally and as an end user, to lose trust. Having plausible deniability is not very confidence- or trust-inspiring. The very phrase “plausible deniability” has its origins in explicit attempts by an organization to shield senior officials from blowback. Plausible deniability means that this is explicitly how the organization was thinking about this.

        Which, again, causes me to lose even more trust in that organization than if they had just flat-out lied about it. The flat-out lying doesn’t cause me to lose any trust, because that’s the law. But all the cloak-and-dagger machinations to give people “plausible deniability” are a step beyond simply complying with the law. The latter shows a core untrustworthiness in nature, in style, in thought process.

        Am I the only one who (over)thinks this, though? You do see what I’m saying, doncha?

      4. Heh, very diplomatic way of putting it 😉

        Nevertheless, whether I am right or wrong in my assessment, that 2005 exchange has the feel of Hamlet: “The lady doth protest too much, methinks.”

        Now where did I put that pic-a-nic basket?
