I’ve long been waiting for something to boil over in the area where politics meets the architecture of search engines. I’m not sure this is it, but it certainly points the way. Someone over at the DNC noticed that the White House website was purposefully blocking various public documents from being crawled by search engines.

This is usually standard stuff – a file webmasters keep called robots.txt instructs spiders to ignore various pages, usually because those pages are internally focused or the company wishes to keep them private. However, White House-posted documents are clearly in the public record. A quick review of the White House’s robots.txt file shows that something else is going on: a lot of files about Iraq are in the “disallow” category. It’s pretty easy to assume an intention: keep past statements about Iraq out of the public consciousness, especially ones that are now – er – out of touch with current reality (i.e., it’ll be an easy peace! And inexpensive!).

If you believe, as I do, that search engines are becoming a powerful proxy for what the public knows, then this action by the White House extends spin to the level of public records. It’s a fascinating move – clearly they knew they could not expunge the documents from the site; that would be too obvious, and it would mean they had something to hide. But they seem to be trying to keep those embarrassing public documents from being found via the tool most casual (or intentional) searchers use first (and sometimes only) – the search engine.

Thanks to Dan Gillmor, who first posted this. Stay tuned, this one might turn into something.
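For readers unfamiliar with the mechanism: robots.txt is a plain-text file sitting at a site’s root, and well-behaved crawlers check it before fetching any page. Here’s a minimal sketch of how a compliant spider honors those rules, using Python’s standard robotparser module – the paths below are illustrative, in the spirit of what was reported, not copied verbatim from the actual whitehouse.gov file:

```python
from urllib import robotparser

# Hypothetical robots.txt rules, echoing the pattern described above:
# Iraq-related directories placed under "Disallow".
rules = """\
User-agent: *
Disallow: /iraq/
Disallow: /news/releases/iraq/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A compliant crawler asks before fetching each URL; a disallowed
# page is simply skipped – and so never enters the search index.
print(rp.can_fetch("*", "http://www.whitehouse.gov/iraq/archive.html"))
print(rp.can_fetch("*", "http://www.whitehouse.gov/news/briefing.html"))
```

The first check comes back False (skip it), the second True (crawl it) – which is exactly why a document can remain publicly posted yet effectively invisible to anyone who searches rather than browses.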