Joints After Midnight & Rants Archives | Page 10 of 43 | John Battelle's Search Blog

China To Bloggers: Stop Talking Now. K Thanks Bye.

By - March 31, 2012

Yesterday I finished reading Larry Lessig’s updated 1999 classic, Code v2. I’m five years late to the game, as the book was updated in 2006 by Lessig and a group of fans and readers (I tried to read the original in 1999, but I found myself unable to finish it. Something to do with my hair being on fire for four years running…). In any event, no sooner had I read the final page yesterday than this story broke:

Sina, Tencent Shut Down Commenting on Microblogs (WSJ)

In an odd coincidence, late last night I happened to share a glass of wine with a correspondent for the Economist who is soon to be reporting from Shanghai. Of course this story came up, and an interesting discussion ensued about the balance one must strike to cover business in a country like China. Essentially, it’s the same balance any Internet company must strike as it attempts to do business there: Try to enable conversation, while at the same time regulating that conversation to comply with the wishes of a mercurial regime.

Those of us who “grew up” in Internet version 1.0 have a core belief in the free and open exchange of ideas, one unencumbered by regulation. We also tend to think that the Internet will find a way to “route around” bad law – and that what happens in places like China or Iran will never happen here.

But as Lessig points out quite forcefully in Code v2, the Internet is, in fact, one of the most “regulable” technologies ever invented, and it’s folly to believe that only regimes like China will be drawn toward leveraging the control it allows. In addition, it need not be governments that create these regulations, it could well be the platforms and services we’ve come to depend on instead. And while those services and platforms might never be as aggressive as China or Iran, they are already laying down the foundation for a slow erosion of values many of us take for granted. If we don’t pay attention, we may find ourselves waking up one morning and asking…Well, How Did I Get Here?

More on all of this soon, as I’m in the midst of an interview (via email) with Lessig on these subjects. I’ll post the dialog here once we’re done.

 


CM Summit White Paper from 2007

By - March 15, 2012

I am in the midst of writing a post on the history of FM (update – here it is), and I thought it’d be fun to post the PDF linked to below. It’s a summary of musings from Searchblog circa 2006-7 on the topic of conversational media, which is much in the news again, thanks to Facebook. We created the document as an addendum to our first ever CM Summit conference, as a way of describing why we were launching the conference. (BTW, the Summit returns to San Francisco next week as Signal SF, check it out.)

It’s interesting to see the topics in the white paper come to life, including chestnuts like “Conversation Over Dictation,” “Platform Over Distribution,” “Engagement Over Consumption,” and “Iteration and Speed Over Perfection and Deliberation.”

Enjoy.

CMManifesto2007.01

Who Controls Our Data? A Puzzle.

By - March 11, 2012

Facebook claims the data we create inside Facebook is ours – that we own it. In fact, I confirmed this last week in an interview with Facebook VP David Fischer on stage at FM’s Signal P&G conference in Cincinnati. In the conversation, I asked Fischer if we owned our own data. He said yes.

Perhaps unfairly  (I’m pretty sure Fischer is not in charge of data policy), I followed up my question with another: If we own our own data, can we therefore take it out of Facebook and give it to, say, Google, so Google can use it to personalize our search results?

Fischer pondered that question, realized its implications, and backtracked. He wasn’t sure about that, and it turns out the question is more complicated to answer than it first appears – as recent stories about European data requests have revealed.*

I wasn’t planning on asking Fischer that question, but I think it came up because I’ve been pondering the implications of “you as the platform” quite a bit lately. If it’s *our* data in Facebook, why can’t we take it and use it on our terms to inform other services?

Because, it turns out, regardless of any company’s claim around who owns the data, the truth is, even if we could take our data and give it to another company, it’s not clear the receiving company could do anything with it. Things just aren’t set up that way. But what if they were?

The way things stand right now, our data is an asset held by companies, who then cut deals with each other to leverage that data (and, in some cases, to bundle it up as a service to us as consumers). Microsoft has a deal to use our Facebook data on Bing, for example. And of course, the inability of Facebook and Google to cut a data sharing deal back in 2009 is one of the major reasons Google built Google+. The two sides simply could not come to terms, and that failure has driven an escalating battle between major Internet companies to lock all of us into their data silos. With the cloud, it’s only getting worse (more on that in another post).

And it’s not fair to just pick on Facebook. The question should be asked of all services, I think. At least, of all services which claim that the data we give that service is, in fact, ours (many services share ownership, which is fine with me, as long as I don’t lose my rights.)

I have a ton of pictures up on Instagram now, for example (you own your own content there, according to the service’s terms). Why can’t I “share” that data with Google or Bing, so those pictures show up in my searches? Or with Picasa, where I store most of my personal photographs?

I have a ton of data inside an app called “AllSport GPS,” which tracks my runs, rides, and hikes. Why can’t I share that with Google, or Facebook, or some yet-to-be-developed app that monitors my health and well being?

Put another way, why do I have to wait for all these companies to cut data sharing deals through their corporate development offices? Sure, I could cut and paste all my data from one to the other, but really, who wants to do that?!

In the future, I hope we’ll be our own corp dev offices. An office of one, negotiating data deals on the fly, and on our own terms. It’ll take a new architecture and a new approach to sharing, but I think it’d open up all sorts of new vectors of value creation on the web.
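As a thought experiment, here’s a minimal sketch of what such a user-negotiated “data grant” might look like in code. Every class, name, and field below is hypothetical, invented purely to illustrate the idea of a personal data store where the user, not a corp dev office, decides who gets what:

```python
# A hypothetical sketch of a user-controlled "data grant": the user decides
# which service gets which slice of their data. All names here are invented
# for illustration; no real service works this way today.
from dataclasses import dataclass, field

@dataclass
class DataGrant:
    grantee: str        # service receiving the data, e.g. "searchengine.example"
    categories: set     # slices of data being shared, e.g. {"photos", "fitness"}
    revoked: bool = False

@dataclass
class PersonalDataStore:
    data: dict = field(default_factory=dict)    # category -> list of records
    grants: list = field(default_factory=list)

    def grant(self, grantee, categories):
        """The user grants a service access to selected categories."""
        self.grants.append(DataGrant(grantee, set(categories)))

    def export_for(self, grantee):
        """Return only the data this grantee is currently allowed to see."""
        allowed = set()
        for g in self.grants:
            if g.grantee == grantee and not g.revoked:
                allowed |= g.categories
        return {cat: recs for cat, recs in self.data.items() if cat in allowed}

store = PersonalDataStore(data={"photos": ["sunset.jpg"], "fitness": ["run-5k"]})
store.grant("searchengine.example", ["photos"])
print(store.export_for("searchengine.example"))  # {'photos': ['sunset.jpg']}
```

The point of the sketch is the shape of the negotiation: grants are granular, revocable, and issued by the individual, not brokered between corporate development offices.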

This is why I’m bullish on Singly and the Locker Project. They’re trying to solve a very big problem, and worse, one that most folks don’t even realize they have. Not an easy task, but an important one.

—–

*Thanks to European law, Facebook is making copies of users’ data available to them – but it makes exemptions to protect its intellectual property and trade secrets, and it won’t hand over data that “cannot be extracted from our platform in the absence of disproportionate effort.” What defines Facebook’s “trade secrets” and “intellectual property”? Well, there’s the catch. Just as with Google’s search algorithms, disclosure of the data Facebook is holding back would, in essence, destroy Facebook’s competitive edge, or so the company argues. Catch 22. I predict we’re going to see all this tested by services like Singly in the near future.

 

A Sad State of Internet Affairs: The Journal on Google, Apple, and “Privacy”

By - February 16, 2012

The news alert from the Wall St. Journal hit my phone about an hour ago, pulling me away from tasting “Texas Bourbon” in San Antonio to sit down and grok this headline: Google’s iPhone Tracking.

Now, the headline certainly is attention-grabbing, but the news alert email had a more sinister headline: “Google Circumvented Web-Privacy Safeguards.”

Wow! What’s going on here?

Turns out, no one looks good in this story, but certainly the Journal feels like they’ve got Google in a “gotcha” moment. As usual, I think there’s a lot more to the story, and while I’m Thinking Out Loud right now, and pretty sure there’s a lot more than I can currently grok, there’s something I just gotta say.

First, the details.  Here’s the lead in the Journal’s story, which requires a login/registration:

“Google Inc. and other advertising companies have been bypassing the privacy settings of millions of people using Apple Inc.’s Web browser on their iPhones and computers—tracking the Web-browsing habits of people who intended for that kind of monitoring to be blocked.”

Now, from what I can tell, the first part of that story is true – Google and many others have figured out ways to get around Apple’s default settings on Safari in iOS – the only browser that comes with iOS, a browser that, in my experience, has never asked me what kind of privacy settings I wanted, nor did it ask if I wanted to share my data with anyone else (I do, it turns out, for any number of perfectly good reasons). Apple assumes that I agree with Apple’s point of view on “privacy,” which, I must say, is ridiculous on its face, because the idea of a large corporation (Apple is the largest, in fact) determining in advance what I might want to do with my data is pretty much the opposite of “privacy.”

Then again, Apple decided I hated Flash, too, so I shouldn’t be that surprised, right?

But to the point, Google circumvented Safari’s default settings by using some trickery described in this WSJ blog post, which reports the main reason Google did what it did was so that it could know if a user was a Google+ member, and if so (or even if not so), it could show that user Google+ enhanced ads via AdSense.

In short, Apple’s mobile version of Safari broke with common web practice, and as a result, it broke Google’s normal approach to engaging with consumers. Was Google’s “normal approach” wrong? Well, I suppose that’s a debate worth having – it’s currently standard practice and the backbone of the entire web advertising ecosystem – but the Journal doesn’t bother to go into those details. One can debate whether setting cookies should happen by default – but the fact is, that’s how it’s done on the open web.

The Journal article does later acknowledge, though not in a way that a reasonable reader would interpret as meaningful, that the mobile version of Safari has “default” (ie not user activated) settings that prevent Google and others (like ad giant WPP) from tracking user behavior the way they do on the “normal” Web. That’s a far cry from the Journal’s lead paragraph, which, again, states Google bypassed “the privacy settings of millions of people.” So when is a privacy setting really a privacy setting, I wonder? When Apple makes it so?
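The mechanics at issue are simple to sketch. Here’s an illustrative toy model (in Python, emphatically not Apple’s actual code) of how a default policy like old mobile Safari’s behaved: first-party cookies are accepted, and third-party cookies are rejected unless the browser believes the user has already interacted with that third-party domain – the exception the reported workaround exploited:

```python
# A toy model of a Safari-style default cookie policy, as described in the
# coverage above. This is an illustrative sketch, NOT Apple's implementation.

def accepts_cookie(cookie_domain, page_domain, interacted_domains):
    """Return True if this simplified default policy would store the cookie."""
    if cookie_domain == page_domain:
        return True  # first-party cookies are always accepted
    # Third-party cookies are accepted only if the user has previously
    # "interacted" with that domain (e.g., visited it or submitted a form
    # to it) -- the exception the reported workaround took advantage of.
    return cookie_domain in interacted_domains

# A third-party ad domain is blocked by default...
print(accepts_cookie("ads.example.com", "news.example.org", set()))  # False
# ...but allowed once the browser believes the user interacted with it,
# e.g. via an invisible form submission.
print(accepts_cookie("ads.example.com", "news.example.org",
                     {"ads.example.com"}))  # True
```

The debate, then, is less about the mechanism than about who gets to set the default – and whether tricking the browser into the “interacted” state honors or violates it.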

Since this story broke, Google has discontinued the practice, making it look even worse, of course.

But let’s step back a second here and ask: why do you think Apple has made it impossible for advertising-driven companies like Google to execute what are industry standard practices on the open web (dropping cookies and tracking behavior so as to provide relevant services and advertising)? Do you think it’s because Apple cares deeply about your privacy?

Really?

Or perhaps it’s because Apple considers anyone using iOS, even if they’re browsing the web, as “Apple’s customer,” and wants to throttle potential competitors, ensuring that it’s impossible to access “Apple’s” audiences using iOS in any sophisticated fashion? Might it be possible that Apple is using data as its weapon, dressed up in the PR-friendly clothing of “privacy protection” for users?

That’s at least a credible idea, I’d argue.

I don’t know, but when I bought an iPhone, I didn’t think I was signing up as an active recruit in Apple’s war on the open web. I just thought I was getting “the Internet in my pocket” – which was Apple’s initial marketing pitch for the device. What I didn’t realize was that it was “the Internet, as Apple wishes to understand it, in my pocket.”

It’d be nice if the Journal wasn’t so caught up in its own “privacy scoop” that it paused to wonder if perhaps Apple has an agenda here as well. I’m not arguing Google doesn’t have an agenda – it clearly does. I’m as saddened as the next guy about how Google has broken search in its relentless pursuit of beating Facebook, among others.

In this case, what Google and others have done sure sounds wrong – if you’re going to resort to tricking a browser into offering up information designated by default as private, you need to somehow message the user and explain what’s going on. Then again, on the open web, you don’t have to – most browsers let you set cookies by default. In iOS within Safari, perhaps such messaging is technically impossible, I don’t know. But these shenanigans are predictable, given the dynamic of the current food fight between Google, Apple, Facebook, and others. It’s one more example of the sad state of the Internet given the war between the Internet Big Five. And it’s only going to get worse, before, I hope, it gets better again.

Now, here’s my caveat: I haven’t been able to do any reporting on this, given it’s 11 pm in Texas and I’ve got meetings in the morning. But I’m sure curious as to the real story here. I don’t think the sensational headlines from the Journal get to the core of it. I’ll depend on you, fair readers, to enlighten us all on what you think is really going on.

China Hacking: Here We Go

By - February 13, 2012

Waaaay back in January of this year, in my annual predictions, I offered a conjecture that seemed pretty orthogonal to my usual focus:

“China will be caught spying on US corporations, especially tech and commodity companies. Somewhat oddly, no one will (seem to) care.”

Well, I just got this WSJ news alert, which reports:

Using seven passwords stolen from top Nortel executives, including the chief executive, the hackers—who appeared to be working in China—penetrated Nortel’s computers at least as far back as 2000 and over the years downloaded technical papers, research-and-development reports, business plans, employee emails and other documents.

The hackers also hid spying software so deeply within some employees’ computers that it took investigators years to realize the pervasiveness of the problem.

Now, before I trumpet my prognosticative abilities too loudly, let’s see if … anybody cares. At all. And if you’re wondering why I even bothered to make such a prediction, well, it’s because I think it’s going to prove important….eventually.

In Which I Officially Declare RSS Is Truly Alive And Well.

By - February 02, 2012

I promise, for at least 18 months, to not bring this topic up again. But I do feel the need to report to all you RSS lovin’ freaks out there that the combined interactions on my two posts – 680 and still counting –  have exceeded the reach of my RSS feed (which clocked in at a miserable 664 the day I posted the first missive).

And as I said in my original post:

If I get more comments and tweets on this post than I have “reach” by Google Feedburner status, well, that’s enough for me to pronounce RSS Alive and Well (by my own metric of nodding along, of course). If it’s less than 664, I’m sorry, RSS is Well And Truly Dead. And it’s all your fault.

For those of you who don’t know what on earth I’m talking about, but care enough to click, here are the two posts:

Once Again, RSS Is Dead. But ONLY YOU Can Save It!

RSS Update: Not Dead, But On The Watch List

OK, now move along. Nothing to see here. No web standards have died. Happy Happy! Joy Joy!

What Happens When Sharing Is Turned Off? People Don’t Dance.

By - January 30, 2012

One of only two photos to emerge from last night's Wilco concert (image: Eric Henegen)

Last night my wife and I did something quite rare – we went to a concert on a Sunday night, in San Francisco, with three other couples (Wilco, playing at The Warfield). If you don’t have kids and don’t live in the suburbs, you probably think we’re pretty lame, and I suppose compared to city dwellers, we most certainly are. But there you have it.

So why am I telling you about it? Because something odd happened at the show: Wilco enforced a “no smartphone” rule. Apparently lead singer Jeff Tweedy hates looking out at the audience and seeing folks waving lit phones back at him. Members of the Warfield staff told me they didn’t like the policy, but they enforced it – quite strictly, I might add. It created a weird vibe – folks didn’t even take out their phones for fear they might be kicked out for taking a picture of the concert. (A couple of intrepid souls did sneak a pic in, as you can see at left…)

And… no one danced, not till the very end, anyway. I’ve seen Wilco a few times, and I’ve never seen a more, well, motionless crowd. But more on that later.

Now, I have something of a history when it comes to smart phones and concerts. Back in 2008 I was a founding partner in a new kind of social music experiment we called “CrowdFire.” In my post explaining the idea, I wrote:

Over the course of several brainstorming sessions… an idea began to take shape based on a single insight: personal media is changing how we all experience music. When I was at Bonnaroo in 2007, everyone there had a cell phone with a camera. Or a Flip. Or a digital camera. And when an amazing moment occurred, more folks held up their digital devices than they did lighters. At Bonnaroo, I took a picture that nails it for me – the image at left. A woman capturing an incredible personal memory of an incredible shared experience (in this case, it was Metallica literally blowing people’s minds), the three screens reflecting the integration of physical, personal, and shared experiences. That image informed our logo, as you can see (below).

So – where did all those experiences go (Searchblog readers, of course, know I’ve been thinking about this for a while)? What could be done with them if they were all put together in one place, at one time, turned into a great big feed by a smart platform that everyone could access? In short, what might happen if someone built a platform to let the crowd – the audience – upload their experiences of the music to a great big database, then mix, mash, and meld them into something utterly new?

Thanks to partners like Microsoft, Intel, SuperFly, Federated Media and scores of individuals, CrowdFire actually happened at Outside Lands, both in 2008 and in 2009. It was a massive effort – the first year literally broke AT&T’s network. But it was clear we were onto something. People want to capture and share the experience of being at a live concert, and the smart phone was clearly how they were now doing it.

It was the start of something – brainstorming with several of my friends prior to CrowdFire’s birth, we imagined a world where every shareable experience became data that could be recombined to create fungible alternate realities. Heady stuff, stuff that is still impossible, but I feel will eventually become our reality as we careen toward a future of big data and big platforms.

Since those early days, the idea of CrowdFire has certainly caught on. In early 2008, we had to build the whole platform from scratch, but now, folks use services like Instagram, Twitter, Facebook, and Foursquare to share their experiences. Many artists share back, sending out photos and tweets from on stage. Most major festivals and promoters have some kind of fan photo/input service that they promote as well. CrowdFire was a great idea, and maybe, had I not been overwhelmed with running FM, we might have turned it into a real company/service that could have integrated all this output and created something big in the world. But it was a bit ahead of its time.

What has happened since that first Outside Lands is that at every concert I’ve attended, I’ve noticed the crowd’s increasing connection to their smart phones – taking pictures, group texting, tweeting, and sharing the moments with their extended networks across any number of social services. It’s hard to find an experience more social than a big concert, and the thousands of constantly lit smartphone screens are a testament to that fact, as are the constant streams of photos and status updates coming out of nearly every show I’ve seen, or followed enviously online.

Which brings me back to last night. I was unaware of the policy, so as Wilco opened at the sold-out Warfield, something felt off to me. Here were two thousand San Francisco hipsters, all turned attentively toward the stage – but most of them had their hands in their pockets! As the band went into the impossible-not-to-move-to “Art of Almost” and “I Might,” I started wondering what was up – why weren’t people at least swaying?! The music was extraordinary, the sound system perfectly tuned. But everyone seemed very intent on…well…being intent. They stared forward, hands in pocket, nodded their heads a bit, but no one danced. It was a rather odd vibe. It was as if the crowd had been admonished to not be too … expressive.

Then it hit me. Nobody had their phone out. I turned to a security guard and asked why no one was holding up a phone. That’s when I learned of Wilco’s policy.

It seemed to me that the rule had the unintended consequence of muting the crowd’s ability to connect to the joy of the moment. Odd, that. We’re so connected to these devices and their ability to reflect our own sense of self that when we’re deprived of them, we feel somehow less…human.

My first reaction was “Well, this sucks,” but on second thought, I got why Tweedy wanted his audience to focus on the experience in the room, instead of watching and sharing it through the screens of their smartphones. By the encore, many people were dancing – they had loosened up. But in the end, I’m not sure I agree with Wilco – they’re fighting the wrong battle (and losing extremely valuable word of mouth in the process, but that’s another post).

There are essentially two main reasons to hold a phone up at a show. First, to capture a memory for yourself, a reminder of the moment you’re enjoying. And second, to share that moment with someone – to express your emotions socially. Both seem perfectly legitimate to me. (I’m not down with doing email or taking a call during a show, I’ll admit).

But the smart phone isn’t a perfect device, as we all know. It forces the world into a tiny screen. It runs out of battery, bandwidth, and power. It distracts us from the world around us. There are too many steps – too much friction – between capturing the things we are experiencing right now and the sharing of those things with people we care about.

But I sense that the sea of smart phones lit up at concerts is a temporary phenomenon. The integration of technology, sharing, and social into our physical world, on the other hand, well that ain’t going away. In the future, it’s going to be much harder to enforce policies like Wilco’s, because the phone will be integrated into our clothing, our jewelry, our eyeglasses, and possibly even ourselves. When that happens – when I can take a picture through my glasses, preview it, then send it to Instagram using gestures from my fingers, or eyeblinks, or a wrinkle of my nose – when technology becomes truly magical – asking people to turn it off is going to be the equivalent of asking them not to dance – to not express their joy at being in the moment.

And why would anyone want to do that?

For Posterity

By - January 27, 2012

I had to post this image from Twitter.

 

 

There. If this continues, I figure I’ll be at least in a good negotiating position come the Rapture.

Once Again, RSS Is Dead. But ONLY YOU Can Save It!

By - January 25, 2012

About 14 months ago, I responded to myriad “RSS is Dead” stories by asking you, my RSS readers, if you were really reading. At that point, Google’s Feedburner service was telling me I had more than 200,000 subscribers, but it didn’t feel like the lights were on – I mean, that’s a lot of people, but my pageviews were low, and with RSS, it’s really hard to know if folks are reading you, because the engagement happens on the reader, not here on the site. (That’s always been the problem publishers have had with RSS – it’s impossible to monetize. I mean, think about it. Dick Costolo went to Twitter after he sold Feedburner to Google. Twitter! And this was *before* it had a business model. Apparently that was far easier to monetize than RSS).

Now, I made the decision long ago to let my “full feed” go into RSS, and hence, I don’t get to sell high-value ads to those of you who are RSS readers. (I figure the tradeoff is worth it – my main goal is to get you hooked on my addiction to parentheses, among other things.)

Anyway, to test my theory that my RSS feed was Potemkin in nature, I wrote a December, 2010 post asking RSS readers to click through and post a comment if they were, in fact, reading me via RSS. Overwhelmingly they responded “YES!” That post still ranks in the top ten of any post, ever, in terms of number of comments plus tweets – nearly 200.

Now, put another way this result was kind of pathetic – less than one in 1000 of my subscribers answered the call. Perhaps I should have concluded that you guys are either really lazy, secretly hate me, or in fact, really aren’t reading. Instead, I decided to conclude that for every one of you that took the time to comment or Tweet, hundreds of you were nodding along in agreement. See how writers convince themselves of their value?

Which is a long way to say, it’s time for our nearly-yearly checkup. And this time, I’m going to give you more data to work with, and a fresh challenge. (Or a pathetic entreaty, depending on your point of view.)

Ok, so here’s what has happened in 14 months: My RSS feed has almost doubled – it now sports nearly 400,000 subscribers, which is g*dd*am impressive, no? I mean, who has FOUR HUNDRED THOUSAND people who’ve raised their hands and asked to join your club? I’ve WON, no? Time for gold-plated teeth or somesh*t, right?

Well, no.

While it’s true that nearly 400,000 of you have elected to follow my RSS feed, the grim truth is more aptly told by what Google’s Feedburner service calls my “Reach.” By their definition, reach means “the total number of people who have taken action — viewed or clicked — on the content in your feed.”

And that number, as you can see, is pathetic. I mean, “click,” I can understand. Why click when you can read the full article in your reader? But “view”?! Wait, lemme do some math here….OK, one in 594 of you RSS readers are even reading my stuff. That’s better than the one in 1000 who answered the call last time, but wow, it’s way worse than I thought. Just *reading* doesn’t require you click through, or tweet something, or leave a comment.
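Spelling out that arithmetic, using the subscriber and reach numbers quoted in this post:

```python
# Back-of-envelope check of the Feedburner numbers quoted in this post.
subscribers = 394_000   # "nearly 400,000" subscribers (approximate)
reach = 664             # Feedburner "reach": viewed or clicked

print(round(subscribers / reach))        # 593 -- close to the "one in 594" above
print(f"{reach / subscribers:.2%}")      # 0.17% of subscribers taking any action
```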

Either RSS is pretty moribund, or, I must say, I am deeply offended.

What I really want to know is this: Am I normal? Is it normal for sites like mine to have only 0.17 percent of their RSS readers actually, well, be readers?

Or is the latest in a very long series of posts (a decade now, trust me) really right this time – RSS is well and truly dead?

Here’s my test for you. If I get more comments and tweets on this post than I have “reach” by Google Feedburner status, well, that’s enough for me to pronounce RSS Alive and Well (by my own metric of nodding along, of course). If it’s less than 664, I’m sorry, RSS is Well And Truly Dead. And it’s all your fault.

(PS, that doesn’t mean I’ll stop using it. Ever. Insert Old Man Joke Here.)

The Future of War (From Jan., 1993 to the Present)

By - January 24, 2012

(image is a shot of my copy of the first Wired magazine, signed by our founding team)
I just read this NYT piece on the United States’ approach to unmanned warfare: Do Drones Undermine Democracy?. From it:

There is not a single new manned combat aircraft under research and development at any major Western aerospace company, and the Air Force is training more operators of unmanned aerial systems than fighter and bomber pilots combined. In 2011, unmanned systems carried out strikes from Afghanistan to Yemen. The most notable of these continuing operations is the not-so-covert war in Pakistan, where the United States has carried out more than 300 drone strikes since 2004.

Yet this operation has never been debated in Congress; more than seven years after it began, there has not even been a single vote for or against it. This campaign is not carried out by the Air Force; it is being conducted by the C.I.A. This shift affects everything from the strategy that guides it to the individuals who oversee it (civilian political appointees) and the lawyers who advise them (civilians rather than military officers).

It also affects how we and our politicians view such operations. President Obama’s decision to send a small, brave Navy Seal team into Pakistan for 40 minutes was described by one of his advisers as “the gutsiest call of any president in recent history.” Yet few even talk about the decision to carry out more than 300 drone strikes in the very same country.

Read the whole piece. Really, read it. If any article in the past year or so does a better job of displaying how what we’ve built with technology is changing the essence of our humanity, I’d like to read it.

For me, this was a pretty powerful reminder. Why? Because we put the very same idea on display as the very first cover story of Wired, nearly 20 years ago. Written by Bruce Sterling, whose star has only become brighter in the past two decades, it predicts the future of war with an eerie accuracy. In the article, Sterling describes “modern Nintendo training for modern Nintendo war.” Sure, if he were all-seeing, he might have said Xbox, but still…here are some quotes from nearly 20 years ago:

The omniscient eye of computer surveillance can now dwell on the extremes of battle like a CAT scan detailing a tumor in a human skull. This is virtual reality as a new way of knowledge: a new and terrible kind of transcendent military power.

…(Military planners) want a pool of contractors and a hefty cadre of trained civilian talent that they can draw from at need. They want professional Simulation Battle Masters. Simulation system operators. Simulation site managers. Logisticians. Software maintenance people. Digital cartographers. CAD-CAM designers. Graphic designers.

(Ed: Like my son playing Call of Duty?)

And it wouldn’t break their hearts if the American entertainment industry picked up on their interactive simulation network technology, or if some smart civilian started adapting these open-architecture, virtual-reality network protocols that the military just developed. The cable TV industry, say. Or telephone companies running Distributed Simulation on fiber-to-the-curb. Or maybe some far-sighted commercial computer-networking service. It’s what the military likes to call the “purple dragon” angle. Distributed Simulation technology doesn’t have to stop at tanks and aircraft, you see. Why not simulate something swell and nifty for civilian Joe and Jane Sixpack and the kids? Why not purple dragons?

(Ed: Skyrim, anyone?!)

Can governments really exercise national military power – kick ass, kill people – merely by using some big amps and some color monitors and some keyboards, and a bunch of other namby-pamby sci-fi “holodeck” stuff?

The answer is yes.

Say you are in an army attempting to resist the United States. You have big tanks around you, and ferocious artillery, and a gun in your hands. And you are on the march.

Then high-explosive metal begins to rain upon you from a clear sky. Everything around you that emits heat, everything around you with an engine in it, begins to spontaneously and violently explode. You do not see the eyes that see you. You cannot know where the explosives are coming from: sky-colored Stealths invisible to radar, offshore naval batteries miles away, whip-fast and whip-smart subsonic cruise missiles, or rapid-fire rocket batteries on low-flying attack helicopters just below your horizon. It doesn’t matter which of these weapons is destroying your army – you don’t know, and you won’t be told, either. You will just watch your army explode.

Eventually, it will dawn on you that the only reason you, yourself, are still alive, still standing there unpierced and unlacerated, is because you are being deliberately spared. That is when you will decide to surrender. And you will surrender. After you give up, you might come within actual physical sight of an American soldier.

Eventually you will be allowed to go home. To your home town. Where the ligaments of your nation’s infrastructure have been severed with terrible precision. You will have no bridges, no telephones, no power plants, no street lights, no traffic lights, no working runways, no computer networks, and no defense ministry, of course. You have aroused the wrath of the United States. You will be taking ferries in the dark for a long time.

Now imagine two armies, two strategically assisted, cyberspace-trained, post-industrial, panoptic ninja armies, going head-to-head. What on earth would that look like? A “conventional” war, a “non-nuclear” war, but a true War in the Age of Intelligent Machines, analyzed by nanoseconds to the last square micron.

Who would survive? And what would be left of them?

Who indeed.