Google Wins (For The Most Part) Against DOJ

Google was due some good news, and yesterday (sorry, was traveling to Spring Training with my son…) it got some. From the Google Blog:

Google will not have to hand over any user’s search queries to the government. That’s what a federal judge ruled today when he decided to drastically limit a subpoena issued to Google by the Department of Justice. (You can read the entire ruling here and the government’s original subpoena here.)

The government’s original request demanded billions of URLs and two months’ worth of users’ search queries. Google resisted the subpoena, prompting the judge’s order today. In addition to excluding search queries from the subpoena, Judge James Ware also required the government to limit its demand for URLs to 50,000. We will fully comply with the judge’s order.

Google PR also sent over this response from Nicole Wong, associate general counsel:

“This is a clear victory for our users. The subpoena has been drastically limited, most importantly the order excludes search queries.”

I laud Google for going to the mat on this. It could have just rolled over, as several others did. My earlier coverage of this, and speculation on why Google was so motivated to fight, is here and here (What’s the Big Deal?!).

4 thoughts on “Google Wins (For The Most Part) Against DOJ”

  1. Can somebody explain why the spooks need to go to the search engines to get a list of 50,000 random URLs from the Web?

    Anybody can write a crawler and spider the Web, starting from, say, Yahoo’s homepage, and collect URLs across the entire Web with almost no work; a minimal sketch appears after the comments. (I know it’s a more difficult programming problem if you want real-time updates and the like.) If they had hired 2-3 good programmers and bought a couple of servers, they could have had a complete list of URLs in less time, and at less expense, than they have spent in legal fees and court time.

  2. Yes, it can be explained, but it’s complicated. It has to do with all the maneuvering over a legal argument called “least restrictive means”, and with testing censorware against the law at issue. The idea was to get a random sample of sites that people might find by searching, which is why they went to the search engines. I *think* they simply didn’t do it themselves because the expert (a statistics professor) wanted a sampling procedure more complex than the obvious crawler.

    Remember, nobody knew this was going to be such a tempest, and the other search engines didn’t make an issue of it.

  3. I know that the US is much more liberal and open than other countries in the world, but think about the following point: the feds want the information in order to catch some bad guys. Now, if these bad guys know that their web activities could be better monitored, they will try to hide them, right? So this could all be a decoy. Maybe Google did hand over the data to the feds. Judging by recent security issues within the Google organization, it’s even likely that the feds got that information themselves off the Googleplex. This is all a decoy, so that the bad guys will feel comfortable while the feds are tracking them down.
    😀

  4. I would just like to correct one quick comment made by paranoid caseofficer. The US is NOT AT ALL more liberal and open than other countries in the world. In fact, even the liberal parties here in the US are generally more conservative than many other countries’ moderately conservative parties.
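
A footnote on comment 1’s crawler point: here is a minimal sketch in Python of the kind of breadth-first spider the commenter describes. The seed URL, page cap, and politeness delay are illustrative assumptions on my part, not details from the post or the subpoena.

```python
# Minimal breadth-first Web crawler sketch. The seed URL, URL cap,
# and delay below are illustrative assumptions, not from the post.
import time
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects the href targets of <a> tags on one page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_urls=1000, delay=1.0):
    """Breadth-first crawl from `seed`; stops once max_urls URLs are seen."""
    seen = {seed}          # every URL discovered so far
    queue = deque([seed])  # URLs waiting to be fetched
    while queue and len(seen) < max_urls:
        url = queue.popleft()
        try:
            with urlopen(url, timeout=10) as resp:
                if "text/html" not in resp.headers.get("Content-Type", ""):
                    continue  # only parse HTML pages for links
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue  # dead links and timeouts are routine; skip them
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)  # resolve relative links
            if urlparse(absolute).scheme in ("http", "https") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
        time.sleep(delay)  # be polite to the servers being crawled
    return seen

if __name__ == "__main__":
    urls = crawl("https://www.yahoo.com/", max_urls=200)
    print(f"collected {len(urls)} URLs")
```

This is nothing like a production crawler (no robots.txt handling, no parallelism, no deduplication beyond exact URL matches), which is the commenter’s point: even the naive version collects URLs quickly.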
