Joints After Midnight & Rants Archives | John Battelle's Search Blog

Here We Go Again: The Gray Market in Twitter and Facebook

By - August 07, 2012

So, casually reading through this Fast Company story about sexy female Twitter bots, I come across this astounding, unsubstantiated claim:

My goal was to draw a straight line from a Twitter bot to the real, live person whose face the bot had stolen. In the daily bot wars–the one Twitter fights every day, causing constant fluctuations in follower counts even as brands’ followers remain up to 48% bot–these women are the most visible and yet least acknowledged victims…

There it was, tossed in casually, almost as if it were a simple cost of doing business – nearly half of the followers of major brands could well be “bots.”

The article focuses on finding a pretty woman whose image had been hijacked, sure, but what I found most interesting (but sadly unsurprising) was how it pointed to a site that promises a thousand followers to anyone who pays…wait for it…about $17. Yes, the site is real. And no, you shouldn’t be surprised, in the least, that such services exist.

It has always been so.

Back when I was reporting for The Search, I explored the gray market that had sprung up around Google (and still flourishes, despite Google’s disputed attempts to beat it back). Fact is, wherever there is money to be made, and ignorance or desperation exists in some measure, shysters will flourish. And a further fact is this: Marketers, faced with CMO-level directives to “increase my follower/friend counts,” will turn to the gray market. Just as they did back in the early 2000s, when the directive was “make me rank higher in search.”

Earlier this week I got an email from a fellow who has been using Facebook to market his products. He was utterly convinced that nearly all the clicks he’d received on his ads were fake – bots, he thought, programmed to make his campaigns look as if they were performing well. He was further convinced that Facebook was running a scam – operating bot networks to drive performance metrics. I reminded him that Facebook was a public company run by people I believed to be well-intentioned and intelligent – people who knew that such behavior, if discovered, would ruin both their own reputations and the company’s.

Instead, I suggested, he might look to the third parties he was working with – or, hell, he might just be the victim of a drive-by shooting: poorly coded bots that click on ad campaigns indiscriminately, regardless of whose they might be.
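To make the mechanics concrete, here’s a toy sketch in TypeScript – entirely my own illustration, not anything Facebook or my correspondent actually uses – of the kind of heuristic that catches what poorly coded bots leave behind: one address hammering a campaign, or “clicks” with essentially no dwell time. The Click shape and the thresholds are invented.

```typescript
// A toy heuristic for spotting bot-like ad clicks. The Click shape and the
// thresholds below are invented for illustration; real fraud detection is
// far more involved.
interface Click {
  ip: string;
  campaignId: string;
  secondsOnPage: number; // dwell time after the click
}

function suspiciousClicks(clicks: Click[]): Click[] {
  // Count clicks per IP; a single address hammering a campaign is a red flag.
  const perIp = new Map<string, number>();
  for (const c of clicks) {
    perIp.set(c.ip, (perIp.get(c.ip) ?? 0) + 1);
  }

  return clicks.filter(
    (c) =>
      (perIp.get(c.ip) ?? 0) > 20 || // one IP, many clicks
      c.secondsOnPage < 1 // "clicked" but never really landed
  );
}

// Example: a bot that clicks and immediately leaves gets flagged.
const flagged = suspiciousClicks([
  { ip: "203.0.113.7", campaignId: "spring-sale", secondsOnPage: 0.2 },
  { ip: "198.51.100.4", campaignId: "spring-sale", secondsOnPage: 42 },
]);
console.log(flagged.length); // 1
```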

In short, I very much doubt that Facebook (or Twitter) is actively driving fraudulent behavior on its network. In fact, both companies have legions of folks devoted to foiling such efforts. Yet there is absolutely no doubt that an entire, vibrant ecosystem is very much engaged in gaming these services. And just as Google did at the dawn of search marketing, Twitter and Facebook have a very – er – complicated relationship with these fraudsters. On the one hand, the gray hats are undermining the true value of these social networks. But on the other, well, they seem to help important customers hit their Key Performance Indicators, driving very real money into company coffers, either directly or indirectly.

I distinctly recall a conversation with a top Google official in 2005, who – off the record – defended AdSense-splattered domain-squatters as “providing a service to folks who typed the wrong thing into the address bar.” Uh huh.

As long as marketers are obsessed with hollow metrics like follower counts, Likes, and unengaged “plays,” this ecosystem will thrive.

What truly matters, of course, is engagement that can be measured beyond the actions of bots. It is coming. But not before millions of dollars are siphoned off by the opportunists who have always lived on the Internet’s gray edge.


The Power of Being There

By - August 06, 2012

It’s been building for weeks – Friday marks the first day of the fifth annual Outside Lands festival here in San Francisco. Despite the demands of work and family, I try to get to as many festivals as I can – so far, I’ve managed to see Bonnaroo a few times, Coachella once (I’ll be back!), Austin City Limits, and a few others. Outside Lands is local to San Francisco and therefore much easier to attend – this will be my third. Compared to your average festival goer (who tends to be single and about half my age) I’m a punter, but I’ll take it.

Why do I go? In two words: serendipity and joy. When you gather with tens of thousands of like-minded, smiling people, unexpected connections are made, and bouts of pure happiness break out all over the place. Who wouldn’t want to soak in some of that?

I bring this all up because I’ve noticed a trend, highlighted by this story: YouTube streaming Lollapalooza music festival for free this weekend. Outside Lands was one of the first festivals to stream for free (back in 2009, if I recall), and many others have followed apace.

Why?

Well, I think the truth is obvious, but worth restating: it’s one thing to be there, and quite another to watch everyone else being there. The value of gathering together only increases as the virtual channel becomes ubiquitous. And that is a good sign for humanity, to my mind.

What We Lose When We Glorify “Cashless”

By - July 24, 2012

Look, I’m not exactly a huge fan of grimy greenbacks, but I do feel a need to point out something that most coverage of current Valley darling Square seems to miss: the “Death of Cash” also means the “death of anonymous transactions” – and no matter your view of the role of government and corporations in our lives, the very idea that we might lose the ability to transact without creating a record merits serious discussion. Unfortunately, this otherwise worthy cover story in Fortune about Square utterly ignores the issue.

And that’s too bad. A recent book called “The End of Money” does get into some of these issues – it’s on my list to read – but in general, I’ve noticed a lack of attention to the anonymity issue in coverage of hot payment startups. In fact, in interviews I’ve read, the author of “The End of Money” makes the point that cash is pretty much a blight on our society – in that it’s the currency of criminals and a millstone around the necks of the poor.

Call it a hunch, but I sense that many of us are not entirely comfortable with a world in which every single thing we buy creates a cloud of data. I’d like to have an option to not have a record of how much I tipped, or what I bought at 1:08 am at a corner market in New York City. Despite protections of law, technology, and custom, that data will remain forever, and sometimes, we simply don’t want it to.

What do you think?  (And yes, I am aware of bitcoin…)

BTW, this mini-rant is very related to my last post: First, Software Eats the World, Then, The Mirror World Emerges.

First, Software Eats the World, Then, The Mirror World Emerges

By - July 18, 2012

(image: David Gelernter of Yale, via Edge.org)

A month or so ago I had the pleasure of sitting down with Valley legend Marc Andreessen, mainly to interview him for my slowly-developing-but-still-moving-forward book. At that point, I had not begun re-reading David Gelernter’s 1991 classic Mirror Worlds: or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean.

Man, I wish I had, because I could have asked Marc if it was his life goal to turn David’s predictions into reality. Marc is well known for many things, but his recent mantra that “Software Is Eating the World” (Wall St. Journal paid link, more recent overview here) has become nearly everyone’s favorite Go-To Big Valley Trend. And for good reason – the idea seductively resonates on many different levels, and forms the backbone not just of Andreessen’s investment thesis, but of much of the current ferment in our startup-driven industry.

A bit of background: Andreessen’s core argument is that nearly every industry in the world is being driven by or turned into software in one way or another. In some places, this process is deeply underway: The entertainment business is almost all software now, for example, and the attendant disruption has created extraordinary value for savvy investors in companies like Amazon, Netflix, and Apple. Further, Marc points out that the largest company in direct marketing these days is a software company: Google. His thesis extends to transportation (think Uber but also FedEx, which runs on software), retail (besides Amazon, Walmart is a data machine), healthcare (huge data opportunity, as yet unrealized), energy (same), and even defense. From his Journal article:

The modern combat soldier is embedded in a web of software that provides intelligence, communications, logistics and weapons guidance. Software-powered drones launch airstrikes without putting human pilots at risk. Intelligence agencies do large-scale data mining with software to uncover and track potential terrorist plots.

That quote reminds me of Wired’s first cover story, in 1993, about the future of war. But in 1991, two years before even that watershed moment (well, for me anyway), Yale scholar Gelernter published Mirror Worlds, and in it he predicted that we’d be putting the entire “universe in a shoebox” via software. Early in the book, Gelernter posits the concept of the Mirror World, which might best be described as a more benign version of The Matrix, specific to any given task, place, or institution. He lays out how such worlds will come to be, and declares that the technology already exists for such worlds to be created. “The software revolution hasn’t begun yet; but it will soon,” he promises.

As we become infinite shadows of data, I sense Gelernter is right, and VCs like Andreessen and the entrepreneurs they are backing are leading the charge. I’ll be reviewing Mirror Worlds later in the summer – I’m spending time with Gelernter at his home in New Haven next month – but for now, I wanted to just note how far we’ve come, and invite all of you, if you are fans of his work, to help me ask Gelernter intelligent questions about how his original thesis has morphed over two decades.

It seems to me that if true “mirror worlds” are going to emerge, the first step will have to be “software eating the world” – i.e., we’ll have to infect our entire physical reality with software, so that it emanates real-time, useful data. That seems to be happening apace. And the implications of how we go about architecting such systems are massive.

One of my favorite passages from Mirror Worlds, for what it’s worth:

The intellectual content, the social implications of these software gizmos make them far too important to be left in the hands of the computer sciencearchy…..Public policy will be forced to come to grips with the implications. So will every thinking person: A software revolution will change the way society’s business is conducted, and it will change the intellectual landscape.

Indeed!

It’s Hard to Lay Fallow

By - June 27, 2012

I’ll admit it, I’m one of those people who has a Google News alert set for my own name. Back in the day, it meant a lot more than it does now – the search results used to pick up blog mentions as well as “regular” news mentions, and before FacebookLand took over our world (and eschewed Google’s), a news alert was a pretty reliable way to find out what folks might be saying about you or your writing on any given day.

Like most folks who maintain a reasonably public conversation, I now watch Twitter’s @replies far more than I do Google news alerts. Of course, Twitter doesn’t catch everything, so I never unsubscribed from my Google News alert.

Yesterday, one came over the transom, and it kind of crushed me. “The End of the Tech Conference?” it asked. The opening line was included in the snippet: “The heartbreak was palpable when John Battelle announced via blog post back in April that the Web 2.0 Summit would not be held for the first time since its debut in 2004.”

The funny thing is, while I think the writer intended to describe the Web 2 community’s “heartbreak” – certainly an arguable supposition given how overwhelmed our industry is with conferences – what she may not have realized is how close to home the line hit for me. When I read it, I felt my own loss – it’s difficult to stop doing something you’ve done well and for a long time. In my case, I’ve hosted a gathering of Internet industry leaders nearly every year since 1998 (before Web 2, there was The Industry Standard’s “Internet Summit”). That’s a decade and a half. Not doing it is far harder than I thought.

I saw the decision to step away from the Web 2 Summit as inevitable for two main reasons. First, I needed to work on the book, and there didn’t seem to be room for such an ambitious project if I kept my two other day jobs (Web 2 and Federated Media Publishing). Web 2 takes an extraordinary amount of time to do – with nearly 70 speakers and three days of programming, my life very quickly becomes overwhelmed with research, production calls, and pre-interviews, not to mention all the sales, operations, and marketing work.

Second, I had been doing Web 2 for a long time, and I wanted to step away and look at it with fresh eyes – let it lay fallow, so to speak. Stop tilling and seeding the same soil, let it repair, in the most catholic interpretation of the word (“repair” derives from the Latin “to go home”). And it’s this part that’s been really hard. It’s a natural cycle of grief, in a way – I’m probably deep in the trough of sorrow right now – but I do kind of miss the work.

In other words, it’s hard to lay fallow.

But the beauty of a fallow field is what’s going on underneath. If you trust yourself enough, you’ll realize all kinds of seeds are competing to push through and gather the resources of your attention. I’m learning that it takes a lot of will power to let that process run its course. I find myself “watering” all sorts of potential new growth ideas. I’m not sure which will take root, which are weeds, and which might yield the wrong crop, so to speak. And that’s scary.

But it’s also good. If you’re not a little scared, you’re not really paying attention, are you?

Meanwhile, I can report that I *will* be involved in a new kind of gathering this Fall, one that I can’t yet announce, because it involves many other wonderful partners. It’s not a typical tech conference, and it’s certainly not on par with Web 2 in terms of commitment or time – either from me or the attendees. But it’s a seed, one I’m happy to be cultivating. Stay tuned for more on that soon.

Meanwhile, back to the fallows…

(image: Shutterstock)

Do Not Track Is An Opportunity, Not a Threat

By - June 10, 2012

This past week’s industry tempest centered around Microsoft’s decision to implement “Do Not Track” (known as “DNT”) as a default on Internet Explorer 10, a browser update timed to roll out with the company’s long-anticipated Windows 8 release.

Microsoft’s decision caught much of the marketing and media industry by surprise – after all, Microsoft itself is a major player in the advertising business, and in that role has been a strong proponent of the current self-regulatory regime, which includes, at least until Microsoft tossed its grenade into the marketplace, a commitment to implementation of DNT as an opt-in technology, rather than as a default.*

For most readers I don’t need to explain why this matters, but in case you’re new to the debate: when enabled, DNT sets a “flag” telling websites that you don’t want data about your visit to be used to create a profile of your browsing history (or for any other purpose). Whether we like it or not, such profiles have driven a very large business in display advertising over the past 15 years. Were a majority of consumers to implement DNT, the infrastructure that currently drives wide swathes of the web’s monetization ecosystem would crumble, taking a lot of quality content along with it.
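For the technically curious, the flag itself is nothing exotic: a browser with DNT enabled simply adds a “DNT: 1” header to every HTTP request, and a site is expected to honor it. Here’s a minimal sketch of what honoring it might look like server-side (Node.js; the handler logic and the hypothetical analytics.js script are my own illustration):

```typescript
// Minimal sketch: check the DNT request header before serving any
// tracking code. Browsers with Do Not Track enabled send "DNT: 1".
import { createServer } from "http";

createServer((req, res) => {
  // Node lowercases header names, so the DNT header arrives as "dnt".
  const dntEnabled = req.headers["dnt"] === "1";

  res.writeHead(200, { "Content-Type": "text/html" });
  if (dntEnabled) {
    // Honor the flag: serve the page without the tracking script.
    res.end("<p>Welcome.</p>");
  } else {
    // No DNT signal: include the (hypothetical) analytics script as usual.
    res.end("<p>Welcome.</p><script src='/analytics.js'></script>");
  }
}).listen(8080);
```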

It’s estimated that, once released, IE 10 could quickly grab as much as 25-30% of browser market share. The idea that the online advertising industry could lose almost a third of its value due to the actions of one rogue player is certainly cause for alarm. Last week’s press was full of conspiracy theories about why Microsoft was making such a move. The company claims it just wants to protect users’ privacy, which strikes me as disingenuous – it’s far more likely that Microsoft is willing to spike its relatively small advertising business in exchange for striking a lethal blow to Google’s core business model, both in advertising and in browser share.

I’m quite certain the Windows 8 team is preparing to market IE 10 – and by extension, Windows 8 – as the safe, privacy-enhancing choice, capitalizing on Google’s many government woes and consumers’ overall unease with the search giant’s power. I’m also quite certain that Microsoft, like many others, suffers from a case of extreme Apple envy, and wishes it had a pristine, closed-loop environment like iOS that it could completely control. In order to create such an environment, Microsoft needs to gain consumers’ trust. Seen from that point of view, implementing DNT as a default just makes sense.

But the more I think it through, the less perturbed I am by the whole affair. In fact, I’m rather excited by it.

First off, it’s not clear that IE10’s approach to DNT will matter. When it comes to whether or not a site has to comply with browser flags such as DNT, websites and third parties look to the standards-setting body known as the W3C. That organization’s proposed draft specification on DNT is quite clear: it says no company may enforce a default DNT setting for a user, one way or the other. In other words, this whole thing could be a tempest in a teapot. Wired recently argued that Microsoft will be forced to back down and change its policy.

But I’m kind of hoping Microsoft will keep DNT in place. I know, that’s a pretty crazy thing for a guy who started an advertising-run business to say, but in this supposed threat I see a major opportunity.

Imagine a scenario, beginning sometime next year, when website owners start noticing significant numbers of visitors with IE10 browsers swinging by their sites. Imagine further that Microsoft has stuck to its guns, and all those IE10 browsers have their flags set to “DNT.”

To me, this presents a huge opportunity for the owner of a site to engage with its readers, and explain quite clearly the fact that good content on the Internet is paid for by good marketing on the Internet. And good marketing often needs to use “tracking” data so as to present quality advertising in context. (The same really can and should be said of content on the web – but I’ll just stick to advertising for now).
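What might that engagement look like in practice? A hypothetical client-side sketch – the function name, element id, and message copy are all invented – that detects the visitor’s DNT setting and responds with an explanation rather than silence:

```typescript
// Hypothetical sketch: if the visitor's browser signals Do Not Track, show a
// notice explaining how advertising pays for the content they're reading.
function maybeShowDntNotice(): void {
  // navigator.doNotTrack is non-standard, so TypeScript's DOM typings may
  // not include it; widen the type locally. It is "1" when DNT is enabled.
  const dnt = (navigator as Navigator & { doNotTrack?: string | null }).doNotTrack;
  if (dnt !== "1") return;

  const notice = document.createElement("div");
  notice.id = "dnt-notice"; // invented id; style it however the site likes
  notice.textContent =
    "We noticed you've enabled Do Not Track. The content here is paid for " +
    "by advertising - here's how that works, and what your choices are.";
  document.body.prepend(notice);
}

document.addEventListener("DOMContentLoaded", maybeShowDntNotice);
```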

Advertising and content have always been bound together – in print, on television, and on the web. Sure, you can skip the ad – just flip the page, or press “ffwd” on your DVR. But great advertising, as I’ve long argued, adds value to the content ecosystem, and has as much a right to be in the conversation as does the publisher and the consumer.

Do Not Track provides our industry with a rare opportunity to speak out and explain this fact, and while the dialog box I’ve ginned up at the top of this post is fake, I’d love to see a day when notices like it pop up all over the web, reminding consumers not only that quality content needs to be supported, but that the marketers supporting it actually deserve our attention as well.

At present, the conversation between content creator, content consumer, and marketer is poorly instrumented and rife with mistrust. Our industry’s “ad choices” self regulatory regime – those little triangle icons you see all over display ads these days – is a good start. But we’ve a long way to go. Perhaps unwittingly, Microsoft may be pushing us that much faster toward a better future.

*I am on the board of the IAB, one of the major industry trade groups which promotes self-regulation. The opinions here are my own, as usual. 

In 1844, Morse Gets The Scoop, Then Tweets His Dinner

By - June 07, 2012

I’m reading a fascinating biography of Samuel Morse – Lightning Man: The Accursed Life Of Samuel F.B. Morse by Kenneth Silverman. I’ll post a review in a week or so, but one scene bears a quick post.

Morse successfully demonstrated his telegraph between Baltimore and Washington DC in May of 1844. Three days later the Democratic party convention commenced in Baltimore. In what turned out to be a masterstroke of “being in the right place at the right time,” Morse’s telegraph line happened to be in place to relay news of the convention back to the political classes in DC.

Recall, this was at a time when news was carried by horseback or, in the best case, by rail. It took hours for messages to travel between cities like Baltimore and DC – and they were just 45 miles apart.

Adding to the sensationalism of the telegraph’s public debut, the Democratic convention of 1844 was fraught with controversy and political implication – candidates’ fortunes turned on nation-changing issues such as whether to reclaim Oregon from the British, and whether to annex Texas into the Union, which had serious implications for a growing movement for the abolition of slavery.

Remember, this was 15 years before the Civil War began, and just 30-odd years after the War of 1812, during which the British torched the House of Representatives.

Morse, who by his fifties had endured nearly a dozen years of false starts, failures, near-bankruptcy, and more, turned out to be a master publicist. He positioned his partner Alfred Vail at the convention and himself near Congress. Vail began sending regular reports on the convention to Morse, who was surrounded by hundreds of reporters, Senators, and other dignitaries in DC. News came in short bursts familiar to anyone who’s spent time on Twitter or Facebook. In the “conversation,” most likely the first of its kind to report news in real time, all of Washington learned that the “dark horse” candidate James Polk, who supported bringing Texas into the Union, would prevail.

It makes for fascinating reading, with a funny kicker at the end:

V[ail] Mr. Brewster of Pa is speaking in favour of Buchanan

M[orse] yes….

V Mr Brewster says his delegation go for VB but if VB’s friends desert them, the Delegation go for Buchanan…. The vote taken will be nearly unanimous for J K Polk & harmony & union are restored

M Is it a fact or a mere rumor

V Wait till the ballot comes…. Illinois goes for Polk … Mich goes for Polk. Penn asks leave to correct her error so as to give her whole vote for Polk….

M Intense anxiety prevails to … hear the result of last Balloting

V Polk is unanimously nom

M 3 cheers have been given here for Polk and 3 for the Telegraph.

V Have you had your dinner

M yes have you

V yes what had you

M mutton chop and strawberries

And so began a revolution in communications and industry. But even back then, folks shared both the extraordinary and the mundane across the wires….


On Small, Intimate Data

By - May 29, 2012

Part of the research I am doing for the book involves trying to get my head around the concept of “Big Data,” given the premise that we are in a fundamental shift to a digitally driven society. Big data, as you all know, is super hot – Facebook derives its value from all that big data it has on you and me, Google is probably the original consumer-facing big data company (though Amazon might take issue with that), Microsoft is betting the farm on data in the cloud, Splunk just had a hot IPO because it’s a Big Data play, and so on.

But I’m starting to wonder if Big Data is the right metaphor for all of us as we continue this journey toward a digitally enhanced future. It feels so – impersonal – Big Data is something that is done to us or without regard for us as individuals. We need a metaphor that is more about the person, and less about the machine. At the very least, it should start with us, no?

Elsewhere I’ve written about the intersection of data and the platform for that data – expect a lot more from me on this subject in the future. But in short, I am unconvinced that the current architecture we’ve adopted is ideal – one in which all “our” data, along with the data created by that data’s co-mingling with other data, lives in “cloud” platforms controlled by large corporations whose terms and values we may or may not agree with (or even pay attention to, though some interesting folks are starting to). And the grammar and vocabulary now seeping into our culture are equally mundane and bereft of the subject’s true potential – the creation, sharing, and intermingling of data is perhaps the most important development of our generation, in terms of the potential good it can create in the world.

At Web 2 last year a significant theme arose around the idea of “You Are the Platform,” driven by people and companies like Chris Poole, Mozilla, Singly, and many others. I think this is an under-appreciated and important idea for our industry, and it centers around, to torture a phrase, the idea of “small” rather than Big Data. To me, small means limited, intimate, and actionable by individuals. It’s small in the same sense that the original web was “small pieces loosely joined” (and the web itself was “big.”)  It’s intimate in that it’s data that matters a lot to each of us, and that we share with much the same kind of social parameters that might constrain a story at an intimate dinner gathering, or a presentation at a business meeting. And should we choose to share a small amount of intimate data with “the cloud,” it’s important that the cloud understand the nature of that data as distinct from its masses of “Big Data.”

An undeveloped idea, to be sure, but I wanted to sketch this out today before I leave for a week of travel.

An Appreciation of The “Home Phone”

By - May 23, 2012

Last night on a whim I asked folks on Twitter if they had a home phone – you know, a “hard line” – the kind of communications device that used to be ubiquitous, but seems increasingly an anachronism these days. The response was overwhelming – only three or four of about 35 responses, about ten percent, said they did, and most of those kept them due to bad cell reception or because a home phone makes people feel safe in case of an emergency (the “911 effect”).

The reason I conducted my unscientific poll on the home phone came down to my own experience – my home phone (yes, I have one) rings quite rarely, and when it does, it’s almost always a telemarketer, despite the fact that we’re on the “do not call” list. All of our friends and family know that if they want to get in touch, they need to call our cell phones. Of course, our cells don’t work very well in the hills of Marin County, California, which creates a rather asynchronous sense of community, but more on that in a bit.

I set about writing this post not to bury the home phone, but to celebrate it. The home phone is relatively cheap, incredibly reliable, and – if you buy the right phone – will work for years without replacement. Oh, and far as I can tell, a home phone won’t give you brain cancer.

In a perfect world, the hard line should have become a platform for building out an entire app ecosystem for the home. And yet….it didn’t. Thanks to its monopoly nature and the resultant lack of competition, basic home phone service hasn’t changed much in 20 or so years – we got voice mail, call waiting, and a few other “innovations,” and that’s about it. It’s a dumb technology that is only getting dumber.

Now, I understand why the hard line is dying – mobile telephony is much more convenient for the consumer, and far more profitable for the telephone companies. Mobile phones are not a regulated monopoly (at least, not quite yet), so there’s a lot more innovation going on, at least at the platform level.

But I’m not sure we’ve really thought about what we’re losing as we bid adieu to the home phone (and I’m not talking about 911, which is a mostly solved problem). That one phone number – I can still remember mine from my earliest days growing up – was a shared identity for our family. When it rang, it forced a number of social cohesions to occur between us – we’d either race to answer it first (if we thought it might be for us) or we’d argue over who should get it (if we didn’t). An elaborate system of etiquette and social standards flowered around the home phone: how long a child might be allowed to stay on the phone, how late one could call without being impolite, and of course the dread implications of a late night call which violated that norm.

In short, the home phone was a social, shared, immediate technology, one that existed in rhythm with the physical expression of our lives in our most formative space: Our home. But it’s quickly being replaced by a technology that is private, mobile, asynchronous and virtual. Today, my kids don’t even look up if our home phone rings. But they’ll spend hours up in their room, texting their friends and chatting over the Internet. In other words, the loss of the home phone has sped up the phenomenon Sherry Turkle calls “being alone together.” We may be in the same physical space, but we are not sharing the same kind of social space we used to. And something is lost in that transition.

We may yet decide there’s value in what the home phone once represented. I believe smart entrepreneurs will see opportunity in the “hard line,” and might help us rediscover the benefits of sharing some of our communications bounded once again in real space and time.

Jaron Lanier: Something Doesn’t Smell Right

By - May 08, 2012

Jaron Lanier’s You Are Not A Gadget has been on my reading list for nearly two years, and if nothing else comes of this damn book I’m trying to write, it’ll be satisfying to say that I’ve made my way through any number of important works that for one reason or another, I failed to read up till now.

I met Jaron in the Wired days (that’d be 20 years ago) but I don’t know him well – as with Sherry Turkle and many others, I encountered him through my role as an editor, then followed his career with interest as he veered from fame as a virtual reality pioneer into his current role as chief critic of all things “Web 2.0.” Given my role in that “movement” – I co-founded the Web 2 conferences with Tim O’Reilly in 2004 – it’d be safe to assume that I disagree with most of what Lanier has to say.

I don’t. Not entirely, anyway. In fact, I came away, as I did with Turkle’s work, feeling a strange kinship with Lanier. But more on that in a moment.

In essence, You Are Not A Gadget is a series of arguments, some concise, others a bit shapeless, centering on one theme: Individual human beings are special, and always will be, and digital technology is not a replacement for our humanity. In particular, Lanier is deeply skeptical of any kind of machine-based mechanism that might be seen as replacing or diminishing our specialness, which over the past decade, Lanier sees happening everywhere.

Lanier is most eloquent when he describes, late in the book, what he believes humans to be: the result of a very long, very complicated interaction with reality (sure, irony alert given Lanier’s VR fame, but it makes sense when you read the book):

I believe humans are the result of billions of years of implicit, evolutionary study in the school of hard knocks. The cybernetic structure of a person has been refined by a very large, very long, and very deep encounter with physical reality.

Lanier worries we’re losing that sense of reality. From crowdsourcing and Wikipedia to the Singularity movement, he argues that we’re starting to embrace a technological philosophy that can only lead to loss. Early in the book, he writes:

“…certain specific, popular internet designs of the moment…tend to pull us into life patterns that gradually degrade the ways in which each of us exists as an individual. These unfortunate designs are more oriented toward treating people as relays in a global brain….(this) leads to all sorts of maladies….”

Lanier goes on to give specific examples, including the online tracking associated with advertising; the concentration of power in the hands of the “lords of the clouds” such as Microsoft, Facebook, Google, and even Goldman Sachs; the loss of analog musical notation; the rise of locked-in, fragile, and impossibly complicated software programs; and ultimately, the demise of the middle class. It’s a potentially powerful argument, and one I wish Lanier had made more completely. Instead, after reading his book, I feel forewarned, but not quite forearmed.

Lanier singles out many of our shared colleagues – the leaders of the Web 2.0 movement – as hopelessly misguided, labeling them “cybernetic totalists” who believe technology will solve all problems, including that of understanding humanity and consciousness. He worries about the fragmentation of our online identity, and warns that Web 2 services – from blogs to Facebook – lead us to leave little pieces of ourselves everywhere, feeding a larger collective but returning no true value to the individual.

If you read my recent piece On Thneeds and the “Death of Display”, this might sound familiar, but I’m not sure I’d be willing to go as far as Lanier does in claiming that all this behavior of ours will end up impoverishing our culture forever. I tend to be an optimist; Lanier, less so. He rues the fact that the web never implemented Ted Nelson’s vision of true hypertext – where the creator is remunerated via linked micro-transactions, for example. I think there were good reasons that system didn’t initially win, but there’s no reason to think it never will.

Lanier, an accomplished musician – though admittedly not a very popular one – is convinced that popular culture has been destroyed by the Internet. He writes:

Pop culture has entered into a nostalgic malaise. Online culture is dominated by trivial mashups of the culture that existed before the onset of mashups, and by fandom responding to the dwindling outposts of centralized mass media. It is a culture of reaction without action.

As an avid music fan, I’m not convinced. But Lanier goes further:

Spirituality is committing suicide. Consciousness is attempting to will itself out of existence…the deep meaning of personhood is being reduced by illusions of bits.

Wow! That’s some powerful stuff. But after reading the book, I wasn’t convinced about that, either, though Lanier raises many interesting questions along the way. One of them boils down to the concept of smell – the one sense that we can’t represent digitally. In a section titled “What Makes Something Real Is That It Is Impossible to Represent It To Completion,” Lanier writes:

It’s easy to forget that the very idea of a digital expression involves a trade-off with metaphysical overtones. A physical oil painting cannot convey an image created in another medium; it is impossible to make an oil painting look just like an ink drawing, for instance, or vice versa. But a digital image of sufficient resolution can capture any kind of perceivable image—or at least that’s how you’ll think of it if you believe in bits too much. Of course, it isn’t really so. A digital image of an oil painting is forever a representation, not a real thing. A real painting is a bottomless mystery, like any other real thing. An oil painting changes with time; cracks appear on its face. It has texture, odor, and a sense of presence and history.

This really resonates with me. In particular, the part about the odor. Turns out, odor is a pretty interesting subject. Our sense of smell is inherently physical – actual physical molecules of matter are required to enter our bodies and “mate” with receptors in our nervous system in order for us to experience an odor:

Olfaction, like language, is built up from entries in a catalog, not from infinitely morphable patterns. …the world’s smells can’t be broken down into just a few numbers on a gradient; there is no “smell pixel.”

Lanier suspects – and I find the theory compelling – that olfaction is deeply embedded in what it means to be human. Certainly such a link presents a compelling thought experiment as we transition to a profoundly digital world. I am very interested in what it means for our culture that we are truly “becoming digital,” that we are casting shadows of data in nearly everything we do, and that we are struggling to understand, instrument, and respond socially to this shift. I’m also fascinated by the organizations attempting to leverage that data, from the Internet Big Five to the startups and behind the scenes players (Palantir, IBM, governments, financial institutions, etc) who are profiting from and exploiting this fact.

But I don’t believe we’re in early lockdown mode, destined to digital serfdom. I still very much believe in the human spirit, and am convinced that if any company, government, or leader pushes too hard, we will “sniff them out,” and they will be routed around. Lanier is less complacent: he is warning that if we fail to wake up, we’re in for a very tough few decades, if not worse.

Lanier and I share any number of convictions, regardless. His prescriptions for how to ensure we don’t become “gadgets” might well have been the inspiration for my post Put Your Taproot Into the Independent Web, for example (he implores us to create, deeply, and not be lured into expressing ourselves solely in the templates of social networking sites). And he reminds readers that he loves the Internet, and pines, a bit, for the way it used to be, before Web 2 and Facebook (and, one must assume, Apple) rebuilt it into forms he now decries.

I pine a bit myself, but remain (perhaps foolishly) optimistic that the best of what we’ve created together will endure, even as we journey onward to discover new ways of valuing what it means to be a person. And I feel lucky to know that I can reach out to Jaron – and I have – to continue this conversation, and report the results of our dialog on this site, and in my own book.

Next up: A review (and dialog with the author) of Larry Lessig’s Code And Other Laws of Cyberspace, Version 2.

Other works I’ve reviewed:

Wikileaks And the Age of Transparency  by Micah Sifry (review)

Republic Lost by Larry Lessig (review)

Where Good Ideas Come From: A Natural History of Innovation by Steven Johnson (my review)

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (my review)

The Corporation (film – my review).

What Technology Wants by Kevin Kelly (my review)

Alone Together: Why We Expect More from Technology and Less from Each Other by Sherry Turkle (my review)

The Information: A History, a Theory, a Flood by James Gleick (my review)

In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (my review)

The Future of the Internet–And How to Stop It by Jonathan Zittrain (my review)

The Next 100 Years: A Forecast for the 21st Century by George Friedman (my review)

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 by Michio Kaku (my review)