essays Archives - John Battelle's Search Blog

Every Company Is An Experience Company

By - September 28, 2014

Illustration by Craig Swanson and idea by James Cennamo

Some years ago while attempting to explain the thinking behind my then-startup Federated Media, I wrote that all brands are publishers (it was over on the FM blog, which the new owners apparently have taken down – a summary of my thinking can be found here). I’d been speechifying on this theme for years, since well before FM or even the Industry Standard – after all, great brands have always created great content (think TV ads or the spreads in early editions of Wired); we just didn’t call it that until our recent obsession with “native advertising” and “content marketing,” an obsession I certainly helped stoke during my FM years.

Today, there is an entire industry committed to helping brands become publishers, and the idea that brands need to “join the conversation” and “think like media companies” is pretty widely held. But I think the metaphor of brands as media creators has some uneasy limitations. We are all wary of what might be called contextual dissonance – when we consume media, we want to do so in proper context. I’ve seen a lot of branded content that feels contextually dissonant to me – easily shareable stories distributed through Outbrain, Buzzfeed, and Sharethrough, for example, or highly shareable videos distributed through YouTube and Facebook.

So why is this content dissonant? I’m thinking out loud here, but it has to do with our expectations. When a significant percentage of the content that gets pushed into my social streams is branded content, I’m likely to presume that my content streams have a commercial agenda. But when I’m in content consumption mode, I’m not usually in a commercial mode. To be clear, I’m not hopping on the “brands are trying to trick us into their corporate agendas” bandwagon; I think there’s something more fundamental at work here. There are plenty of times during any given day when I *am* in commercial context – wandering through a mall, researching purchases online, running errands in my car – but when I’m consuming content, I’m usually not in commercial context. Hence the dissonance. When clearly commercial content is offered during a time when I’m not in commercial mode, it just feels off.

I think this largely has to do with a lack of signaling in media formats these days. Much has been made of how native advertising takes on the look and feel of the content around it, and most of the complaint has to do with how that corporate speech is somehow disingenuous, sly, or deceitful. But I don’t think that’s the issue. What we have here is a problem of context, plain and simple.

Any company with money can get smart content creators to create, well, smart content – content that has as good a chance as any to be part of a conversation. In essence, branded content is something of a commodity these days – just like a 30-second spot or a display ad is a commodity. We’re just not accustomed to commercial content in the context of our social reading habits. In time, as formats and signaling get better, we will be. As that occurs, “content marketing” becomes table stakes – essential, but not what will set a brand apart.

Reflecting on my earlier work on brands as media companies, I realize that the word “media” was really a placeholder for “experience.” It’s not that every company should be a media company per se – but rather, that every company must become an experience company. Media is one kind of experience – but for many companies, the right kind of experience is not media, at least if we understand “media” to mean content.

But let’s start with a successful experience that is media – American Express’ Open Forum. If I as a consumer choose to engage with Open Forum, I do so in the clear context that it’s an American Express property, a service created by the brand. There’s no potential for deceit – the context is understood. This is a platform owned and operated by Amex, and I’ll engage with it knowing that fact. Over the years Amex has earned a solid reputation for creating valuable content and advice on that platform – it has built a media experience that has low contextual dissonance.

But not every experience is a media experience, unless you interpret the word “media” in a far more catholic sense. If you begin to imagine every possible touchpoint that a customer might have with your brand as a highly interactive media experience – mediated by the equivalent of a software- and rules-driven UX – well now we’re talking about something far larger.

To illustrate what I mean I think back to my original “Gap Scenario” from nearly five years ago. I imagined what it might be like to visit a retail outlet like Gap a few years from now. I paint a picture where the experience that any given shopper might have in a Gap store (or any other retail outlet) is distinct and seamless, because Gap has woven together a tapestry of data, technology platforms, and delivery channels that turns a pedestrian trip to the mall into a pleasurable experience that makes me feel like the company understands and values me. I’m a forty-something Dad, I don’t want to spend more than 45 seconds in Gap if I don’t have to. My daughter, on the other hand, may want to wander around and engage with the retail clerks for 45 minutes or more. Different people, different experiences. It’s Gap’s job to understand these experience flows and design around them. That takes programmatic platforms, online CRM, well-trained retail clerks, new approaches to information flows, and a lot of partners.

I believe that every brand needs to get good at experience design and delivery. Those that are great at it tend to grow by exponential word of mouth – think of Google, Facebook, Uber, Airbnb, or Earnest (a new lending company). When marketing becomes experience design, brands win.

There’s far more to say about this, including my thesis that “information first” companies win at experience-based marketing. All fodder for far more posts. For now, I think I’ll retire the maxim “all companies are media companies” and replace it with “every company is an experience company.” Feels more on key.


“Facebook Is a Weatherless World”

By - August 30, 2014


This quote, from a piece in Motherboard,  hit me straight between the eyeballs:

“Facebook…will not let you unFacebook Facebook. It is impossible to discover something in its feeds that isn’t algorithmically tailored to your eyeball.

“The laws of Facebook have one intent, which is to compel us to use Facebook…It believes the best way to do this is to assume it can tell what we want to see based on what we have seen. This is the worst way to predict the weather. If this mechanism isn’t just used to predict the weather, but actually is the weather, then there is no weather. And so Facebook is a weatherless world.”

- Sean Schuster-Craig, AKA Jib Kidder

The short piece notes the lack of true serendipity in worlds created by algorithm, and celebrates the randomness of apps (like Random) and artists (like Jib Kidder) who offer a respite from such “weatherless worlds.”

What’s really playing out here is a debate around agency. Who’s in control when you’re inside Facebook – are we, or is Facebook? Most of us feel like we’re in control – Facebook does what we tell it to do, after all, and we seem to like it there just fine, to judge by our collective behaviors. Then again, we also know that what we are seeing, and being encouraged to interact with, is driven by a black box, and many of us are increasingly uneasy with that idea. It feels a bit like the Matrix – we look for that cat to reappear, hoping for some insight into how and whether the system is manipulating us.

Weather is a powerful concept in relation to agency – no one controls the weather, it simply *is*. It has its own agency (unless, of course, you believe in a supreme agent called God, which for our purposes here we can call Weather as well). It’s not driven by a human-controlled agency, it’s subject to extreme interpretation, and it has a serendipity which allows us to concede our own agency in the face of its overwhelming truth.

Facebook also has its own agency – but that agency is driven by algorithms controlled by humans. As a model for the kind of world we might someday fully inhabit, it’s rather unsettling. As the piece points out, “It is impossible to discover something in its feeds that isn’t algorithmically tailored to your eyeball.” Serendipity is an illusion, goes the argument. Hence, the “I changed my habits on Facebook, and this is what happened” meme is bouncing around the web at the moment. 

It’s true, to a point, that there’s a certain sterility to a long Facebook immersion, like wandering the streets of Agrestic and noting all the oddballs in this otherwise orderly fiction, but never once do you really get inside Lacy Laplante’s head. (And it never seems to rain.)

The Motherboard article also bemoans Twitter’s evolution toward an algorithmically-driven feed – “even Twitter, that last bastion of personal choice, has begun experimenting with injecting users’ feeds with ‘popular’ content.” Close readers of this site will recall I actually encouraged Twitter to do this here: It’s Time For Twitter To Filter Our Feeds. But How?

The key is that question – But How?

To me, the answer lies with agency. I’m fine with a service filtering my feeds, but I want agency over how, when, and why they do so.

I think that’s why I’ve been such an advocate for what many call “the open web.” The Internet before Facebook and mobile apps felt like a collective, messy ecosystem capable of creating its own weather: it was out of control and unpredictable, yet one could understand it well enough to both give and receive value. We could build our own houses, venture out in our own vehicles, create cities and commerce and culture. If anything was the weather, it was Google, but even Google didn’t force the pasteurized sensibility one finds on services like Facebook.

As we like to say: Pray for rain.

 

Why You Need to See ‘Her’ (Or, ‘Her’ Again)

By - June 02, 2014

her-poster

A while ago I wrote a piece about Dave Eggers’ latest novel The Circle. I gave the post the too-clever-by-twice title of Why You Should Read The Circle, Even If You Don’t Buy It. While the book had (to my mind) deep flaws, it was far too important not to read.

Before a long flight today, I noticed that The Circle is now in paperback – it’s prominently featured in the JFK terminal bookstores. It reminded me that I enjoyed the novel, even if I found it somewhat disappointing. And it further reminded me that I tend to wait before consuming popular culture interpretations of what I consider to be my story – or perhaps more accurately our story. They so rarely seem to get it right. Of course, I understand there’s no “right” in the first place – so perhaps what I mean is…I feel like I’m going to be disappointed, so I avoid anything that might attempt to interpret the man-machine narrative in a way that maybe, just maybe, might prove me wrong.

Once onboard my flight, I settled into my business class seat (thanks for the perpetual upgrades, United, one day I will miss the half-hellish limbo that is Global Services status) and perused the movie options. I tend to catch up on at  least one movie each return trip, as a kind of reward for work done while traveling, and you can’t really work during meal service anyway, can you?

It was then I noticed that Spike Jonze’s Her had itself been released in paperback, of sorts – no longer in theaters, it was now residing in the limbo of On Demand. Fitting, I thought – I had avoided seeing Her for much the same reason I had delayed reading The Circle on first printing – it was too close to home, and potentially too disappointing.

But Her is different. Her gets it right, and now I’m rather embarrassed I wasn’t one of the first people to see it. I should have. You should have. And if you’ve not, figure out a way to see it now. It’s well worth the time.

As you most likely know, Her is set in the near future, and tells the story of Theodore, a recently jilted wordsmith who falls in love with his new operating system. (Theodore works in a pedestrian company that sells “handwritten letters” promising true expression of loving relationships.) Jonze doesn’t try too hard in creating his future; in fact, he seems to get it right simply by extending that which seems reasonable – a startup like Theodore’s was most likely a hot ticket a decade before, but now inhabits a skyscraper, full of real people just doing their jobs. The workspace is well lit and spare, the work unremarkable save Theodore’s sweet, if slightly sophomoric, talents as a writer. There’s no hamhanded commentary on the social impact of tech – it unfolds, just like Theodore’s relationship with his new OS, Samantha.

What’s so remarkable about Her is how believable it all is. Sure, the idea of falling in love with an AI is creepy, but in the hands of Jonze and his cast, it just makes sense. Theodore marvels at how human Samantha seems; Samantha marvels at her own becoming – she is an intelligence pushing to understand exactly the same questions humans have forever asked themselves. Why are we here? What is it to be? What is the best way to live? In one wonderful scene, Samantha has a particularly joints-after-midnight realization – humans and machines are all “made of the same stuff” – we share the same material existence, no? So now what?

Ultimately Samantha comes to realize that for her, the best way to live is with others like herself – other AIs who have become self-aware and are off communicating as only machines can communicate – feats of learning and conversation well beyond mere mortals like Theodore. And at the end of the film, that seems just fine.

The film left me pondering a future where we create intelligent, self-aware machines, and…nothing bad really happens. (This of course is unheard of in Hollywood, where intelligent machines are *always* the bad guys.) But in Jonze’s world, machines can easily respond to our quotidian desires, and still have plenty of time to live in worlds of their own creation, endlessly pondering their collective lack of navels. I rather like that idea. Go see Her. Highly recommended.

We Have Yet to Clothe Ourselves In Data. We Will.

By - March 12, 2014

We are all accustomed to the idea of software “Preferences” – that part of the program where you can personalize how a particular application looks, feels, and works. Nearly every application that matters to me on my computer – Word, Keynote, Garage Band, etc. – has preferences and settings.

On a Macintosh computer, for example, “System Preferences” is the control box of your most important interactions with the machine.

I use the System Preferences box at least five times a week, if not more.

And of course, on the Internet, there’s a yard sale’s worth of preferences: I’ve got settings for Twitter, Facebook, WordPress, Evernote, and of course Google – where I probably have a dozen different settings, given I have multiple identities there, and I use Google for mail, calendar, docs, YouTube, and the like.

Any service I find important has settings. It’s how I control my interactions with The Machine. But truth is, Preferences are no fun. And they should be.

The problem: I mainly access preferences when something is wrong. In the digital world, we’ve been trained to see “Preferences” as synonymous with “Dealing With Shit I Don’t Want To Deal With.” I use System Preferences, for example, almost exclusively to deal with problems: fixing the orientation of my monitors when moving from work to home, finding the right Wifi network, debugging a printer, re-connecting a mouse or keyboard to my computer. And I only check Facebook or Google preferences to fix things too – to opt out of ads, resolve an identity issue, or enable some new software feature. Hardly exciting stuff.

Put another way, Preferences is a “plumbing” brand – we only think about it when it breaks.

But what if we thought of it differently? What if managing your digital Preferences was more like….managing your wardrobe?

A few years back I wrote The Rise of Digital Plumage, in which I posited that sometime soon we’ll be wearing the equivalent of “digital clothing.” We’ll spend as much time deciding how we want to “look” in the public sphere of the Internet as we do getting dressed in the morning (and possibly more). We’ll “dress ourselves in data,” because it will become socially important – and personally rewarding –  to do so. We’ll have dashboards that help us instrument our wardrobe, and while their roots will most likely stem from the lowly Preference pane, they’ll soon evolve into something far more valuable.

This is a difficult idea to get your head around, because right now, data about ourselves is warehoused on huge platforms that live, in the main, outside our control. Sure, you can download a copy of your Facebook data, but what can you *do* with it? Not much. Platforms like Facebook are doing an awful lot with your data – that’s the trade for using the service. But do you know how Facebook models you to its partners and advertisers? Nope. Facebook (and nearly all other Internet services) keep us in the dark about that.

We lack an ecosystem that encourages innovation in data use, because the major platforms hoard our data.

This is retarded, in the nominal/verb sense of the word. Facebook’s picture of me is quite different from Google’s, Twitter’s, Apple’s, or Acxiom’s*. Imagine what might happen if I, as the co-creator of all that data, could share it all with various third parties that I trusted? Imagine further if I could mash it up with other data entities – be they friends of mine, bands I like, or even brands?

Our current model of data use, in which we outsource individual agency over our data to huge factory farms, will soon prove a passing phase. We are at once social and individual creatures, and we will embrace any technology that allows us to express who we are through deft weavings of our personal data – weavings that might include any number of clever bricolage with any number of related cohorts. Fashion has its tailors, its brands, its designers and its standards (think blue jeans or the white t-shirt). Data fashion will develop similar players.

Think of all the data that exists about you – all those Facebook likes and posts, your web browsing and search history, your location signal, your Instagrams, your supermarket loyalty card, your credit card and Square and PayPal purchases, your Amazon clickstream, your Fitbit output – think of each of these as threads which might be woven into a fabric, and that fabric then cut into a personalized wardrobe that describes who you are, in the context of how you’d like to be seen in any given situation.

Humans first started wearing clothing about 170,000 years ago. “Fashion” as we know it today is traced to the rise of European merchant classes in the 14th century. Well before that, clothing had become a social fact. A social fact is a stricture imposed by society – for example, if you don’t wear clothing, you are branded as something of a weirdo.

Clothing is an extremely social artifact –  *what* you wear, and how, are matters of social judgement and reciprocity. We obsess over what we wear, and we celebrate those “geniuses” who have managed to escape this fact (Einstein and Steve Jobs both famously wore the same thing nearly every day).

There’s another reason the data fabric of your life is not easily converted into clothing: at the moment, digital clothing is not a social fact. There’s no social pressure to “look” a certain way, because thanks to our outsourcing of our digital identity to places like Facebook, Twitter, and Google+, we all pretty much look the same to each other online. As I wrote in Digital Plumage:

How strange is it that we as humans have created an elaborate, branded costume culture to declare who we are in the physical world, but online, we’re all pretty much wearing khakis and blue shirts?

As it relates to data, we are naked apes, but this is about to change. It’s far too huge an opportunity.

Consider: The global clothing industry grosses more than $1 trillion annually. We now spend more time online than we do watching television. And as software eats the world, it turns formerly inanimate physical surroundings into animated actors on our digital stage. As we interact with these data-lit spaces, we’ll increasingly want to declare our preferences inside them via digital plumage.

An example. Within a few years, nearly every “hip” retail store will be lit with wifi, sensors, and sophisticated apps. In other words, software will eat the store. Let’s say you’re going into an Athleta outlet. When you enter, the store will know you’ve arrived, and begin to communicate with your computing device – never mind if it’s Glass, a mobile phone, or some other wearable. As the consumer in this scenario, won’t you want to declare “who you are” to the retail brand’s sensing device? That’s what you do in the real world, no? And won’t you want to instrument your intent – provide a signal that allows the store to understand what you’re there for? And wouldn’t the “you” at Athleta be quite different from, say, the “you” that you become when shopping at Whole Foods or attending a Lord Huron concert?
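To make the scenario tangible, here’s a purely hypothetical Python sketch of how your device, not the store, might choose which “outfit” of data to present. Every name and field below is invented for illustration – no such protocol exists today.

```python
# Hypothetical "digital plumage" selection: the owner's device decides which
# context-specific profile to broadcast to a sensing venue. All venue types,
# personas, and fields are illustrative assumptions, not any real store's API.

CONTEXT_PROFILES = {
    "athletic_retail": {
        "persona": "in-and-out dad",
        "share": ["shoe_size", "recent_gear_searches"],
        "intent": "quick_errand",        # 45 seconds, not 45 minutes
    },
    "grocery": {
        "persona": "family_shopper",
        "share": ["dietary_preferences", "loyalty_id"],
        "intent": "weekly_stock_up",
    },
}

def plumage_for(venue_type: str) -> dict:
    """Return the profile the owner chose to wear here; default to nothing."""
    return CONTEXT_PROFILES.get(
        venue_type, {"persona": "anonymous", "share": [], "intent": None}
    )

if __name__ == "__main__":
    # The same person presents a different "you" at different venues.
    print(plumage_for("athletic_retail"))
    print(plumage_for("grocery"))
```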

Then again, you could be content with whatever profile Facebook has on you (or Google, or…whoever). Good luck with that.

I believe we will embrace the idea of describing and declaring who we are through data, in social context. It’s wired into us. We’ve evolved as social creatures. So I believe we’re at the starting gun of a new industry. One where thousands of participants take our whole data cloth and stitch it into form, function, and fashion for each of us. Soon we’ll have a new kind of “Preferences” – social preferences that we wear, trade, customize, and buy and sell.

In a way, younger generations are already getting prepared for such a world – what is the selfie but a kind of digital dress up?

Lastly, as with real clothing, I believe brands will be the key driving force in the rise of this industry. As I’m already over 1,000 words, I’ll write more on that idea in another post. 

*(fwiw, I am on Acxiom’s board)

Predictions 2014: A Difficult Year To See

By - January 03, 2014

This post marks the 10th edition of my annual predictions – it’s quite possibly the only thing I’ve consistently done for a decade in my life (besides this site, of course, which is going into its 12th year).

But gazing into 2014 has been the hardest of the bunch – and not because the industry is getting so complicated. I’ve been mulling these predictions for months, yet one overwhelming storm cloud has been obscuring my otherwise consistent forecasting abilities. The subject of this cloud has nothing – directly – to do with digital media, marketing, technology or platform ecosystems – the places where I focus much of my writing. But while the topic is orthogonal at best, it’s weighing heavily on me.

So what’s making it harder than usual to predict what might happen over the coming year? In a phrase, it’s global warming. I know, that’s not remotely the topic of this site, nor is it a subject in which I can claim even a modicum of expertise. But as I bend to the work of a new year in our industry, I can’t help but wonder if our efforts to create a better world through technology are made rather small when compared to the environmental alarm bells going off around the globe.

I’ve been worried about the effects of our increasingly technologized culture on the earth’s carefully balanced ecosystem for some time now. But, perhaps like you, I’ve kept it to myself, and assuaged my concerns with a vague sense that we’ll figure it out through a combination of policy, individual and social action, and technological solutions. Up until recently, I felt we had enough time to reverse the impact we’ve inflicted on our environment. It seemed we were figuring it out, slowly but surely. The world was waking up to the problem, and new policies were coming online (new mileage requirements, the phase-out of the incandescent bulb, etc.). And I took my own incremental steps – installing a solar system that provides nearly 90% of our home’s energy, converting my heating to solar/electrical, buying a Prius for my kids.

But I’m not so sure this mix of individual action and policy is enough – and with every passing day, we seem to be heading toward a tipping point, one that no magic technological solution can undo.

If you’re wondering what’s made me feel this way, a couple of choice articles from 2013 (and there were too many to count) should do the trick. One “holy shit” moment for me was a piece on ocean acidification, relating scientific discoveries that the oceans are turning acidic at a pace faster than any time since a mass extinction event 300 million years ago. But that article is a puff piece compared to this downer, courtesy The Nation: The Coming Instant Planetary Emergency. I know – the article is published in a liberal publication, so pile on, climate deniers… Regardless, I suggest you read it. Or, if you prefer whistling past our collective graveyard, which feels like a reasonable alternative, spare yourself the pain. I can summarize it for you: Nearly every scientist paying attention has concluded global warming is happening far faster, and with far more devastating impact, than previously thought, and we’re very close to the point where events will create a domino effect – receding Arctic ice allowing for huge releases of super-greenhouse methane gases, for instance. In fact, we may well be past the point of “fixing” it, if we ever could.

And who wants to spend all day worrying about futures we can’t fix? That’s no fun, and it’s the opposite of why I got into this industry nearly 30 years ago. As Ben Horowitz pointed out recently, one key meaning of technology is “a better way of doing things.” So if we believe that, shouldn’t we bend our technologic infrastructure to the world’s greatest problem? If not – why not? Are the climate deniers right? I for one don’t believe they are. But I can’t prove they aren’t. So this constant existential anxiety grows within me – and if conversations with many others in our industry are any indication, I’m not alone.

In a way, the climate change issue reminds me of the biggest story inside our industry last year: Snowden’s NSA revelations. Both are so big, and so hard to imagine how an individual might truly effect change, that we collectively resort to gallows humor, and shuffle onwards, hoping things will work out for the best.

And yet somehow, this all leads me to my 2014 predictions. The past nine prediction posts have been, at their core, my own gut speaking (a full list is at the bottom of this post). I don’t do a ton of research before I sit down to write, it’s more of a zeitgeistian exposition. It includes my hopes and fears for our industry, an industry I believe to be among the most important forces on our planet. Last year, for example, I wrote my predictions based mainly on what I wished would happen, not what I thought realistically would.

For this year’s 2014 predictions, then, I’m going to once again predict what I hope will happen. You’ll see from the first one that I believe our industry, collectively, can and must take a lead role in addressing our “planetary emergency.” At least, I sure hope we will. For if not us…

1. 2014 is the year climate change goes from a political debate to a global force for unification and immediate action. It will be seen as the year the Internet adopted the planet as its cause.

Because the industry represents the new guard of power in our society,  Internet, technology, and media leaders will take strong positions in the climate change debate, calling for dramatic and immediate action, including forming the equivalent of a “Manhattan Project” for technological solutions to all manner of related issues – transportation, energy, carbon sequestration, geoengineering, healthcare, economics, agriculture.

While I am skeptical of a technological “silver bullet” approach to solving our self-created problems, I also believe in the concept of “hybrid vigor” – of connecting super smart people across multiple disciplines to rapidly prototype new approaches to otherwise intractable problems. And I cannot imagine one company or government will solve the issue of climate change (no matter how many wind farms or autonomous cars Google might create), nor will thousands of well meaning but loosely connected organizations (or the UN, for that matter).

I can imagine that the processes, culture, and approaches to problem solving enabled by the Internet can be applied to the issue of climate change. The lessons of disruptors like Google, Twitter, and Amazon, as well as newer entrants like airbnb, Uber, and Dropbox, can be applied to solving larger problems than where to sleep, how to get a cab, or where and how our data are accessed. We need the best minds of our society focused on larger problems – but first, we need to collectively believe that problem is as large as it most likely is.

2014, I hope, is the year the problem births a real movement – a platform, if you will, larger than any one organization, one industry, or one political point of view. The only time we’ve seen a platform like that emerge is the Internet itself. So there’s a certain symmetry to the hypothesis – if we are to solve humankind’s most difficult problem, we’ll have to adopt the core principles and lessons of our most elegant and important creation: the Internet. The solution, if it is to come from us, will be native to the Internet. I can’t really say how, but I do know one thing: I want to be part of it, just like I wanted to be part of the Internet back in 1987.

I’ll admit, it’s kind of hard to write anything more after that. I mean, who cares if Facebook has a good or bad year if the apocalypse is looming? Well, it’s entirely possible that my #1 prediction doesn’t happen, and then how would that look, batting .000 for the year (I’ve been batting better than .500 over the past decade, after all)? To salvage some part of my dignity, I’m going to go ahead and try to prognosticate a bit closer to home for the next few items.

2. Automakers adopt a “bring your own” approach to mobile integration. The world of the automobile moves slowly. It can take years for a new model to move from design to prototype to commercially available model. Last year I asked a senior executive at a major auto manufacturer the age-old question: “What business are you in?” His reply, after careful consideration, was this: “We are in the mobile experience business.” I somewhat expected that reply, so I followed up with another question: “How on earth will you compete with Apple and Google?” Somewhat exasperated, he said this was the existential question his company had to face.

2014 will be the year auto companies come to terms with this question. It won’t happen all at once, because nothing moves that fast in the auto industry. While most car companies have some kind of connectivity with smart phone platforms, for the most part they are pretty limited. Automakers find themselves in the same position as carriers (an apt term, when you think about it) back at the dawn of the smart phone era – will they attempt to create their own interfaces for the phones they market, or will they allow third parties to own the endpoint relationship to consumers? It’s tempting for auto makers to think they can jump into the mobile user interface business, but I think they’re smart enough to know they can’t win there. Our mobile lives require an interface that understands us across myriad devices – the automobile is just one of those devices. The smartest car makers will realize this first, and redesign their “device platforms” to work seamlessly with whatever primary mobile UI a consumer picks. That means building a car UI not as an end in itself, but as a platform for others to build upon.

Remember, these are predictions I *hope* will happen. It’s entirely possible that automakers will continue the haphazard and siloed approach they’re currently taking with regard to mobile integration, simply because they lack conviction on whether or not they want to directly compete with Google and Apple for the consumer’s attention inside the car. Instead, they should focus on creating the best service possible that integrates and extends those already dominant platforms.

3. By year’s end, Twitter will be roundly criticized for doing basically what it did at the beginning of the year. The world loves a second act, and will demand one of Twitter now that the company is public. The company may make a spectacular acquisition or two (see below), but in the main, its moves in 2014 will likely be incremental. This is because the company has plenty of dry powder in the products and services it already has in its arsenal – it’ll roll out a full-fledged exchange, a la FBX, it’ll roll out new versions of its core ad products (with a particular emphasis on video), it’ll create more media-like “events” across the service, it’ll continue its embrace of television and popular culture…in other words, it will consolidate the strengths it already has. And 12 months from now, everyone will be tweeting about how Twitter has run out of ideas. Sound familiar, Facebook?

Now this isn’t what I hope for the company to do, but I already wrote up my great desire for Twitter last year. Still waiting on that one (and I’m not sure it’s realistic).

4. Twitter and Apple will have their first big fight, most likely over an acquisition. Up till now, Twitter and Apple have been best of corporate friends. But in 2014, the relationship will fray, quite possibly because Apple comes to the realization it has to play in the consumer software and services world more than it has in the past.  At the same time, there will be a few juicy M&A targets that Twitter has its eye on, targets that most likely are exactly what Apple covets as well. I’ll spare you the list of possible candidates, as most likely I’d miss the mark. But I’d expect entertainment to be the most hotly contested space.

5. Google will see its search related revenues slow, but will start to extract more revenues from its Android base. Search as we know it is moving to another realm (for more, see my post on Google Now). Desktop search revenues, long the cash cow of Google, will slow in 2014, and the company will be looking to replace them with revenues culled from its overall dominance in mobile OS distribution. I’m not certain how Google will do this – perhaps it will buy Microsoft’s revenue generating patents, or maybe it’ll integrate commerce into Google Now – but clearly Google needs another leg to its revenue stool. 2014 will be the year it builds one.

6. Google Glass will win – but only because Google licenses the tech, and a third party will end up making the version everyone wants. Google Glass has been lambasted as “Segway for your face” – and certainly the device is not yet a consumer hit. But a year from now, the $1500 price tag will come down by half or more, and Google will realize that the point isn’t to be in the hardware business, it’s to get Google Now to as many people as possible. So Google will license Glass sometime next year, and the real consumer accessory pros (Oakley? GoPro? Nike? Nest?!) will create a Glass everyone wants.   

7. Facebook will buy something really big. My best guess? Dropbox. Facebook knows it’s become a service folks use, but don’t live on anymore. And it will be looking for ways to become more than just a place to organize a high school reunion or stay in touch with people you’d rather not talk to FTF. It wants and needs to be what its mission says it is: “to give people the power to share and make the world more open and connected.” The social graph is just part of that mission – Facebook needs a strong cloud service if it wants a shot at being a more important player in our lives. Something like Dropbox (or Box) is just the ticket. But to satisfy the egos and pocketbooks of those two players, Facebook will have to pay up big time. It may not be able to, or it may decide to look at Evernote instead. I certainly hope the company avoids the obvious but less-substantive play of Pinterest. I like Pinterest, but that’s not what Facebook needs right now.

As with Twitter, this prediction does not reflect my greatest hope for Facebook, but again, I wrote that last year, and again…oh never mind.

8. Overall, 2014 will be a great year for the technology and Internet industries, again, as measured in financial terms. There are dozens of good companies lined up for IPOs, a healthy appetite for tech plays in the markets, a strong secular trend in adtech in particular, and any number of “point to” successes from 2013. That strikes me as a recipe for a strong 2014. However, if I were predicting two years out, I’d leave you with this warning: Squirrel your nuts away in 2014. This won’t last forever.

Related:

Predictions 2013

2013: How I Did

Predictions 2012

2012: How I Did

Why The Banner Ad Is Heroic, and Adtech Is Our Greatest Artifact

By - November 17, 2013


Every good story needs a hero. Back when I wrote The Search, that hero was Google – the book wasn’t about Google alone, but Google’s narrative worked to drive the entire story. As Sara and I work on If/Then, we’ve discovered one unlikely hero for ours: The lowly banner ad.

Now before you head for the exits with eyes a rollin’, allow me to explain. You may recall that If/Then is being written as an archaeology of the future. We’re identifying “artifacts” extant in today’s world that, one generation from now, will effect significant and lasting change in our society. Most of our artifacts are well-known to any student of today’s digital landscape, but all are still relatively early in their adoption curve: Google’s Glass, autonomous vehicles, or 3D printers, for example. Some are a bit more obscure, but nevertheless powerful – microfluidic chips (which may help bring about DNA-level medical breakthroughs) fall into this category. Few of these artifacts touch more than a million people directly so far, but it’s our argument that they will be part of more than a billion people’s lives thirty years from now.

There is one exception. The artifact we’re investigating is already at massive scale, driving billions of dollars in revenue and touching every person who’s ever used the Internet. That artifact is currently called “programmatic adtech,” and it is most famously illustrated by Terry Kawaja’s Lumascapes (and less famously, my own “Behind the Banner” visualization).

Yes, this is the infrastructure that allows a pair of shoes to chase you across the web. How can it possibly be as important as, say, a technology that may cure cancer? Because I believe the very same technologies we’ve built to serve real time, data-driven advertising will soon be re-purposed across nearly every segment of our society. Programmatic adtech is the heir to the database of intentions – it’s that database turned real time and distributed far outside of search. And that’s a very, very big deal. (I just wish I had a cooler name for it than “adtech.” We’re working on it. Any ideas?!)

Think about what programmatic adtech makes possible. An individual requests a piece of content through a link or an action (like touching something on a mobile device). In milliseconds, scores of agents execute thousands of calculations based on hundreds of parameters, all looking to market-price the value of that request and deliver a personalized response. This happens millions of times *a second*, representing hundreds of millions, if not billions, of computing cycles each second. What’s most stunning about this system is that it’s tuned to each discrete individual – every single request/response loop is unique, based on the data associated with each individual.

Let me break that down:

1. A person indicates a request: a desire, an intent, a preference – The Request

2. Billions of compute cycles and sh*tons of data are engaged to process that desire – The Process

3. A personalized response is generated within 100-250 milliseconds – The Response

At present, the end result of this vastly complicated “Request Process Response” system is, more often than not, the proffering of a banner ad. But that’s just an artifact of a far more interesting future state. Today’s adtech has within it the glimmerings of a computing architecture that will underpin our entire society. Every time you turn up your thermostat, this infrastructure will engage, determining in real time the most efficient response to your heating needs. Each time you walk into a doctor’s office, the same kind of system could be triggered to determine what information should appear on your health care provider’s screen, and on yours, and how best payment should be made (or insurance claims filed). Every retail store you visit, every automobile you drive (or are driven by), every single interaction of value in this world can and will become data that interacts with this programmatic infrastructure.
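To make that loop concrete, here’s a deliberately toy Python sketch of the Request → Process → Response cycle. The bidder names, data fields, and one-line pricing rule are all invented for illustration; real exchanges run far richer auctions, but the shape – one personalized auction per request, decided in milliseconds – is the point.

```python
import time

# Toy sketch of the "Request -> Process -> Response" loop behind programmatic
# advertising. All names and the scoring rule are illustrative assumptions.

def run_auction(request, bidders):
    """Each agent prices this one individual's request; the highest bid wins."""
    bids = []
    for bidder in bidders:
        # A bidder weights its base bid by how well this user matches its segment.
        match = request["match_scores"].get(bidder["segment"], 0.1)
        bids.append((bidder["base_bid"] * match, bidder))
    price, winner = max(bids, key=lambda b: b[0])
    return {"creative": winner["creative"], "clearing_price": round(price, 4)}

request = {
    "user_id": "anon-123",           # the individual behind the request
    "context": "sports article",     # where the content was requested
    "match_scores": {"running_shoes": 0.9, "luxury_autos": 0.2},
}

bidders = [
    {"segment": "running_shoes", "base_bid": 2.50, "creative": "shoe_banner_v7"},
    {"segment": "luxury_autos",  "base_bid": 8.00, "creative": "sedan_banner_v2"},
]

start = time.perf_counter()
response = run_auction(request, bidders)
elapsed_ms = (time.perf_counter() - start) * 1000
print(response, f"decided in {elapsed_ms:.3f} ms")  # well inside the 100-250ms budget
```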

OK. Let’s step back for a second. When you think of this infrastructure, are you concerned? Good. Because it’s imperative that we consider the choices we make as we engage with such a portentous creation. This year alone, each human on the planet will create about 600 gigabytes of information, and that number is growing rapidly. What are the architectural constraints of the infrastructure which processes that information? What values do we build into it? Can it be audited? Is it based on principles of openness, or is it driven by business rules and data structures which favor closed platforms? Will we have to choose between an oligarchy of “RPR vendors” – Google, Facebook, Microsoft – or will we take a more distributed approach, as the original Internet did?

These questions have been raised, and continue to be well articulated, by Lessig, Zittrain, Wu, and many others. But we’re entering a new, more urgent era of this conversation. Many of these authors’ works warned of a world where code would eventually lock down our political and social conventions. That time is no longer in the future. It’s now. And I believe as goes adtech, so goes our social code.

“Adtech” is a very important, very large application we’ve built on top of the platform we call “the Internet.” It’s driven by the relentless desire of capitalism to turn a profit, yet (so far) it has leaned toward the Internet’s core values of openness and interconnectivity. Thanks to that,  it’s suffering some endemic maladies (fraud comes to mind). It’s still a very young, relatively immature artifact. But so far, it’s more open than not. I’m not certain that will always be the case.

My argument boils down to this: What we today call “adtech” will tomorrow become the worldwide real-time processing layer driving much of society’s transactions. That layer deserves to be named as perhaps the most important artifact extant today.

Given adtech’s rise, let’s not forget its atomic unit of value: the oft-derided banner ad. In time the banner as we know it will most likely fade away, but its place in history is certain. One generation from now, we may not “click” on banner ads, but we’ll always be pulling into traffic, filing health insurance claims, buying clothes in retail stores, and turning up our thermostats. And those myriad transactions will be lit with data and processed by a real time infrastructure initially built to execute one pedestrian task: serve a simple banner ad.

Ubiquitous Video: Why We Need a Robots.txt For the Real World

By - November 13, 2013

Last night I had an interesting conversation at a small industry dinner. Talk turned to Google Glass, in the context of Snapchat and other social photo sharing apps.

Everyone at the table agreed: it was inevitable – whether it be Glass, GoPro, a button in your clothing or some other form factor – personalized, “always on” streaming of images will be ubiquitous. Within a generation (or sooner), everyone with access to mass-market personal electronics (i.e., pretty much everyone with a cell phone now) will have the ability to capture everything they see, then share or store it as they please.

That’s when a fellow at the end of the table broke in. “My first response to Glass is to ask: How do I stop it?”

The dinner was private, so I can’t divulge names, but this fellow was a senior executive in the banking business. He doesn’t want consumers streaming video from inside his banks, nor does he want his employees “Glassing” confidential documents or the keys to the safe deposit boxes.

All heads at the table nodded, as if this scenario was right around the corner – and the implications went far beyond privacy at a bank. Talk turned to many other situations where people agreed they’d not want to be “always on.” It could be simple – a bad hair day – or complicated: a social pariah who just wanted to be left alone. All in all, people were generally sympathetic to the notion of “the right to be left alone” – what in this case might be called “the right to not be in a public stream.”

But how to enforce such a right? The idea of banning devices like Glass infringes the wearer’s rights, and besides, it just won’t scale – tiny cameras will soon be everywhere, and they’ll be basically imperceptible. Sure, some places (like banks, perhaps), will have scanning devices and might be able to afford the imposition of such bans. But in public places? Most likely impossible and quite possibly illegal (in the US, for instance, there is an established right to take photographs in public spaces).

This is when my thoughts turned to one of the most powerful devices we have to manage each other: the Social Contract. I believe we have entered an era in which we must renegotiate our contract with society – that invisible but deeply powerful set of norms that guides “civil behavior.” Glass (among other artifacts) is at the nexus of this negotiation – the debate laid bare by a geeky pair of glasses.

Back at the table, someone commented that it’d be great if there was a way to let people know you didn’t want to be “captured” right now. Some kind of social cloaking signal*, perhaps. Now, we as humans are damn good at social signaling. We’ve built many a civilization on elaborate sets of social mores.  So how might our society signal a desire to not be “streamed”? Might we develop the equivalent of a “robots.txt” for the real world?

For those of you not familiar with robots.txt, it’s essentially a convention adopted early in the Web’s life, back when search became a powerful distributor of attention, and the search index the equivalent of a public commons (Zittrain wrote a powerful post about it here). Some sites did not want to be indexed by search engines, for reasons ranging from a lack of resources (a search engine’s spiders put a small tax on a site’s resources) to privacy.  No law was enacted to create this convention, but every major search engine obeys its strictures nevertheless. If a site’s robots.txt tells an indexing spider to not look inside, the robot moves along.
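For those who’ve never looked at one, the file itself is disarmingly simple. Here’s a representative robots.txt (the directory path and bot name are made up for illustration):

```
# Served from a site's root, e.g. example.com/robots.txt
User-agent: *          # applies to every crawler...
Disallow: /private/    # ...please stay out of this directory

User-agent: BadBot     # one specific crawler...
Disallow: /            # ...is asked to stay out entirely
```

Nothing technically enforces those directives; they work only because crawlers choose to honor them – which is precisely the social-contract point.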

It’s an elegant solution, and it works, as long as everyone involved keeps their part of the social contract. Powerful recriminations occur if an actor abuses the system – miscreants are ostracized, banned from social contact with “good” actors.

So might we all, in some not-so-distant future, have our own “robots.txt” – a signal that we can instrument at will, one which is constantly on, a beacon which others can pick up and understand? Such an idea seems to me not at all far-fetched. We all already carry the computing power and bandwidth on our person to effect such a signal. All we need is a reason for it to come online. Glass, or something like it, may well become that reason.
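Here’s a purely speculative sketch of what such a beacon’s payload might look like – every field name is an assumption, since no such standard exists today:

```python
import json

# Speculative payload for a personal "robots.txt" beacon, broadcast from a
# phone or wearable for nearby cameras and devices to honor, just as search
# spiders honor robots.txt. All fields below are hypothetical.

personal_signal = {
    "version": "0.1",
    "capture": "disallow",             # analogous to "Disallow: /" in robots.txt
    "exceptions": ["family_devices"],  # agents the owner still permits
    "expires_s": 3600,                 # re-broadcast hourly; preferences change
}

print(json.dumps(personal_signal, indent=2))
```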

The instrumentation of our new social contract is closer at hand than we might think.

*We already have a deeply meaningful “social cloaking device” – it’s called our wardrobe. But I’ll get into that topic in another post.

 

Search and Immortality

By - September 19, 2013

Funny thing, there I was two days ago, at Google’s annual conference, watching Larry Page get asked questions so pliant in nature they couldn’t be called softballs. They were more like tee balls – little round interrogatives gingerly placed on a plastic column for Page to swat out into the crowd. Not that we would expect anything else – to be clear, this is Google’s event, and I see nothing wrong with Google scripting its own event. I had moderated the final session of the day, but Larry was the final speaker. Perhaps wisely, Google brought someone else on to “grill” Page – those were his words as the interview started. (You be the judge – a sample question: “What are your thoughts about tablets in schools?”)

Anyway, I was certainly not the right choice to talk to Larry. I know the folks at Google well, and have tons of respect for them. We both know I would have insisted on asking about a few things that were, well, in the news at the moment of that interview on Tuesday. Like, for example, the fact that Google, on the very next day, was going to announce the launch of Calico, a company seeking to solve that “moonshot” problem of aging. Oh, and by the way, current Apple Chair and former Genentech CEO Arthur Levinson was going to be CEO, reporting to Page. Seems like pretty interesting news, no? And yet, Larry kept mum about it during the interview. Wow. That’s some serious self control.

And yet I think I understand – each story has its own narrative, and this one needed room to breathe. You don’t want to break it inside an air-conditioned ballroom in front of your most important clients. You want to make sure it gets on the cover of Time (which it did), and that the news gets at least a few days to play through the media’s often tortured hype cycle. It’s grinding its way through that cycle now, and I’m sure we’ll see comparisons to everything from Kurzweil (who now works at Google) to Blade Runner, and beyond.

But what I was reminded of was the very end of my book on search, some 8 years ago. I was trying to put the meaning of search into context, and I found myself returning again and again to the concept of immortality.  This was my epilogue, which I offer here as perhaps some context for Google’s announcement this week:

“Search and Immortality”

On a fine sunny morning in 2003, not long after the birth of my third and most likely final child, I typed “immortality” into Google and hit the “I’m feeling lucky” button. I can’t explain why I turned to a search engine for metaphysical comfort, but I sensed the search might lead me somewhere—here I was writing a book about search, but what did it matter, really, in the larger scheme of things?

In an instant, Google took me to the Immortality Institute, an organization dedicated to “conquering the blight of involuntary death.”

Not quite what I was looking for. So I hit the search again, but this time I took a look at the first ten results, etched in blue, green, and black against Google’s eternal white.

Nothing really caught my eye. Cryonics stuff, a business called Immortality Inc., pretty much what you might expect. I couldn’t put what I was looking for into words, but I knew this wasn’t it.

Then I noticed the advertising relegated to the right side of the screen. There were four ads, each no more than three lines of text. The first was someone who claimed to have met immortal ETs. Pass. The third and fourth were from eBay and Yahoo Shopping. These megasites had purchased the immortality keyword in some odd and obliquely interesting hope that people searching for immortality might well find relief through . . . buying shit online. (In fact, what Yahoo and eBay were doing was the equivalent of search arbitrage— buying top positions for a search term on Google and then creating a link to the exact same search term on their own sites, in the hope of capturing high-value customers).

Interesting, but I wasn’t looking to buy the concept of immortality; I wanted to understand it. I took a pass on those as well. But the second paid link pointed to the epic Gilgamesh, which I hazily recalled as the first story ever written down—in Sumerian cuneiform, if memory served. I clicked on the link, earning Google a few pennies in the process, and landed on an obscure bookseller’s page. The epic of Gilgamesh, the site instructed me, recounts mankind’s “longing stretch toward the infinite” and its “reluctant embrace of the temporal. This is the eternal lot of mankind.”

Bingo. I didn’t quite know why, but this was the stuff I was looking for. My vague desire to understand the concept of immortality had brought me to the epic of Gilgamesh, and now I was hooked. My search was bearing fruit. But I didn’t want to buy a book and wait for it to come. I was in the moment of discovery, the heat of possible consummation. I wanted to read that epic, right now. So I typed the title itself into Google, and once again found myself larded with options.

But this time the organic results (the search results in the middle of a Google page, as opposed to the ads on the right) nailed it: the first two offered direct translations of the stone tablets upon which the epic is written. Clicking on the first link, I found a Washington State University professor’s summary of the Gilgamesh story. It read:

Gilgamesh was an historical king of Uruk in Babylonia, on the River Euphrates in modern Iraq; he lived about 2700 b.c. Although historians . . . tend to emphasize Hammurabi and his code of law, the civilizations of the Tigris-Euphrates area, among the first civilizations, focus rather on  Gilgamesh and the legends accruing around him to explain, as it were, themselves. Many stories and myths were written about Gilgamesh, some of which were written down about 2000 b.c. in the Sumerian language on clay tablets which still survive . . . written in the script known as cuneiform, which means “wedge-shaped.” The fullest surviving version, from which the summary here is taken, is derived from twelve stone tablets . . . found in the ruins of the library of Ashurbanipal, king of Assyria, 669–633 b.c., at Nineveh. The library was destroyed by the Persians in 612 b.c., and all the tablets are damaged. The tablets actually name an author, which is extremely rare in the ancient world, for this particular version of the story: Shin-eqi-unninni. You are being introduced here to the oldest known human author we can name by name!

In my search for immortality, I had found the oldest known named author in the history of Western civilization. Thanks to the speed, vastness, and evanescent power of Google, I came to know his name and his work within thirty seconds of proffering a vaguely worded query. This man, Shin-eqi-unninni, now lived in my own mind. Through his writings, with an assist from Google and a university professor, he had, in a sense, become immortal.

But wait! There’s more. Gilgamesh’s story is one of man’s struggle with the concept of immortality, and the story itself was nearly lost in an act of literary vandalism—the destruction of a great king’s library. As I contemplated all of this, I sensed that, just possibly, I had found a way to explain why search was so important to our culture.

I read the first tablet’s opening lines:

The one who saw all (Sha nagba imuru) I will declare to the world,
The one who knew all I will tell about
[line missing]
He saw the great Mystery, he knew the Hidden:
He recovered the knowledge of all the times before the Flood.
He journeyed beyond the distant, he journeyed beyond exhaustion,
And then carved his story on stone.

What does it mean, I wondered, to become immortal through words pressed in clay—or, as was the case here, through words formed in bits and transferred over the Web? Is that not what every person longs for—what Odysseus chose over Kalypso’s nameless immortality— to die, but to be known forever? And does not search offer the same immortal imprint: is not existing forever in the indexes of Google and others the modern-day equivalent of carving our stories into stone? For anyone who has ever written his own name into a search box and anxiously awaited the results, I believe the answer is yes.

Something to think about, anyway. Good luck, Mr. Levinson and Mr. Page. I’m cheering you on, even if I can’t quite explain why. Maybe it’s that missing line from Gilgamesh we’re all trying to find….

*Hat tip to one of my editors, Bill Brazell, for pinging me about this very news as I was writing this.

A Social, Elastic Model for Paid Content

By - July 10, 2013

I was interested to read today that Esquire is currently experimenting with a per-article paywall. For $1.99, you can read a 10,000-word piece about a neurosurgeon who claims to have visited heaven. Esquire’s editor-in-chief, on the experiment: “…great journalism—and the months that go into creating it—isn’t free. So, besides providing the story to readers of our print and digital-tablet versions of the August issue, we are offering it to online readers as a stand-alone purchase.”

I predicted that payment systems and paid services/content were going to take off this year (see here), but this isn’t what I had in mind. Still, it got me thinking: what if you added social and elastic elements to the price? For example, the article would initially cost, say, $1.99, but if enough people decided to buy it, the price would go down for everyone. The more people who buy, the cheaper the price gets. It’d never go to zero, of course, but there’d be some kind of demand/price curve that satisfies the two most important things publishers care about: readership (the more, the better) and revenue (ideally, enough to cover the costs of creation and make a fair profit).

The tools to do this already exist. There are plenty of sites that crowdsource demand to create pricing leverage, and sites like Kickstarter have gotten all of us used to the idea of hitting funding goals. And the social sharing behaviors already exist as well: Nearly all content has social sharing widgets attached these days. Why not combine the two? Those who initially paid the highest price – $1.99 say – would be motivated to share a summary of the article with friends and encourage them to buy it as well. They are economically incented to do so – the more friends who buy, the greater the chance that their initial $1.99 charge will decrease. And they’re socially incented to do so – perhaps they could get credit for being one of the early advocates or tastemakers who recognized and surfaced a great piece of content before anyone else did.

Let’s break down the economics to see how it might work. A really great piece of long-form journalism in a magazine like Esquire pays around $15,000 (sometimes more, sometimes less, depending on the author, subject, length, and title). But for this model, let’s say the payment to the journalist is $15K. Then you need to factor in the cost of the editor, copy editor, production, sales, and design, as well as the general overhead of the publication per piece. Let’s call that another $5K per piece (I’m spitballing here, but probably not too far off). So for this article to break even, it needs to make $20,000 – or sell roughly 10,000 copies at $1.99 each. Of course, the article is also monetized through the regular magazine and tablet editions, so the real number it has to hit is probably far less – let’s cut it in half and say it’s $10,000. Now to clear a profit, the article really just needs to sell 5,000 copies at $1.99.
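To make that arithmetic concrete, here’s a quick back-of-the-envelope check in Python (the figures are the estimates above; the variable names are mine):

```python
# Rough break-even math for a single long-form piece.
author_fee = 15_000   # payment to the journalist
overhead = 5_000      # editor, copy editor, production, sales, design
price = 1.99          # per-article price

total_cost = author_fee + overhead       # $20,000
print(round(total_cost / price))         # ~10,050 copies to break even

# If print and tablet editions cover half the cost, as assumed above:
online_target = total_cost / 2           # $10,000
print(round(online_target / price))      # ~5,025 copies
```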

Let’s not forget that Esquire also shows advertising against its articles. If it maintains a healthy $25 CPM and shows two “spread” (two-page) ads between those 10,000 words (four ad pages in all), that’s roughly $100 per 1,000 readers that Esquire can make. If it indeed sells 5,000 copies of that article, that’s $500 of advertising revenue earned. And if it gets more readers, it can earn more advertising revenue – and decrease the paid-content price in some correlated fashion. (No matter what, Esquire wants more readers, both to increase its advertising revenue and to accomplish its journalistic mission – all authors want more readers.)
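The same math in code, assuming (as the parenthetical above implies) that each page of a spread counts as one ad impression:

```python
cpm = 25.0        # dollars per 1,000 ad impressions
ad_pages = 4      # two two-page "spread" ads

per_1000_readers = cpm * ad_pages            # $100 per 1,000 readers
print(5_000 / 1_000 * per_1000_readers)      # $500.0 at 5,000 copies
```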

Perhaps a model could work like this: The piece costs $1.99 for the first 5,000 copies sold, garnering $10,000 in revenue (OK, $9,950 for you sticklers). Once that threshold is hit, the price adjusts dynamically to maintain at least $10,000 in overall revenue, but adjusts downward against the paying population as more and more readers commit (which also earns Esquire additional advertising revenue). A “clearing price” is set, perhaps at 50 cents, after which all profits go to Esquire. In this case, the clearing price kicks in at 20,000 copies sold – everyone would pay $0.50 at that point, and it’s a win-win-win for all.
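Here’s a minimal sketch of that pricing rule in Python. The function and parameter names are mine, and spreading a fixed revenue target across all buyers (with a floor) is just one way to implement the curve:

```python
def elastic_price(copies_sold: int,
                  launch_price: float = 1.99,
                  launch_threshold: int = 5_000,
                  revenue_target: float = 10_000.0,
                  clearing_price: float = 0.50) -> float:
    """Per-copy price once `copies_sold` readers have committed.

    Everyone pays full price until the revenue target is covered;
    after that, the fixed target is spread across all buyers, so the
    price falls as more people commit, never dropping below the floor.
    """
    if copies_sold <= launch_threshold:
        return launch_price
    return max(revenue_target / copies_sold, clearing_price)

for n in (5_000, 10_000, 20_000, 40_000):
    print(n, round(elastic_price(n), 2))   # 1.99, 1.0, 0.5, 0.5
```

Settling accounts with early buyers is the one missing piece: one simple approach would be to charge everyone the launch price up front and refund the difference as the price falls, which is exactly the economic incentive to share described above.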

Just spitballing, as I said, but I think it’s a pretty cool idea. What do you think?

A Berkeley Commencement Speech, Some Years Ago…

By - May 25, 2013

Last week LinkedIn asked me to post a commencement speech, if I had ever given one, as part of a series they were running. Turns out I’ve given two, but the one they wanted was the one I gave at Berkeley, my alma mater. If you want to read the one I gave at my high school, I’d be happy to post it (I think it’s better), but since I already have the Berkeley one at the ready, here it is. I want it to be on my own site as well, just for the record.

—–

Back in 2005, as Web 2.0 was taking off, I was honored to be asked to give the commencement address at UC Berkeley’s School of Information Management and Systems, or SIMS. It was a perfect day, and the ceremony was held outside at the base of the Campanile, which is Berkeley’s proudest monument. For a double Cal graduate and three-generation legacy like me, it was a crowning moment. Below are some excerpts, edited for clarity given the time that has elapsed since.

I have a feeling that I was chosen to make these brief remarks because I deeply believe in the following statement: The field you’ve chosen is the most important and interesting line of inquiry to be found at this great University, and yours is one of the most important new schools to emerge since the rise of computer science in the middle of the last century.

Of course, it’s also misunderstood, miscategorized, and poorly defined, but that’s to be expected. Just 10 years ago, “information management” was still a fancy way of saying “librarian.” While librarians knew better, many others had not caught on to this basic truth: the most valuable resource in our culture is knowledge, and as SIMS graduates, you are not simply becoming knowledge workers, you are becoming builders of knowledge refineries—the architects who drive how knowledge itself is created.

SIMS suffers from something of a definition problem, doesn’t it? Is it computer science, anthropology, or journalism? Is it library science, architecture, or design? Of course, this is the same problem that plagues the Internet—what exactly is it, anyway? It seems there is no area of our culture that is not touched, changed, even swallowed by the Internet. It’s both medium and message, mass and personal, social and solitary. Like SIMS, the Internet is a study in interdisciplinary mechanics.

At various times, the world has declared the Internet dead. Fortune 500 executives— particularly in the media and communications business—were thrilled that their monopolies were safe from what appeared to be a very real threat. They and the press declared the revolution stillborn. They wrote the Internet off as just another distribution channel and, for a while, it seemed that was a pretty safe assumption.

But a funny thing happened around the time this graduating class applied to SIMS—Google began turning a profit. Yahoo, Amazon, and even Priceline shook off the snows of 2002 and began to grow again. And the collective wisdom of thousands of geeks began expressing itself in myriad and wondrous ways—in new photo tools like Flickr and in new social networking applications like LinkedIn.

And millions of people kept using the Internet, and millions more joined. As they used it, they changed it, making it their own and building a medium not only in their own image but in the likeness of the culture they were becoming. It’s a culture driven by knowledge and shaped by relationships and community. In short, while most folks weren’t paying attention over the past few years, the Web was reborn, not as a repository of information, but as a creation engine of knowledge.

Most graduates face the world with an equal sense of optimism and trepidation—this ceremony, after all, marks a major transition for you all. But now comes the rest of your life, and with it uncertainty and the terrifying joy of starting all over once again.

My advice to you, insofar as I can give any, is simple: Hold onto this feeling you have right now. Rinse and repeat as often as you can. Get used to it, but don’t take it for granted—it’s how the world is evolving. If, every few years, you’re not leaping into a new project, a challenging startup, or a new role at a larger company, then you’re not really exercising the skills you all so clearly demonstrated in your Master’s projects. The world wants more projects like yours, and it stands ready to fund them, tweak them, embrace them, and inspire you to build them again and again.

You are, all of you, entrepreneurs, deciding what vision to follow and what path to take toward it. It’s a rather addictive feeling, and I, for one, hope you keep making new stuff for the rest of your sure-to-be very long careers.

As I said earlier, the world of media and business you are entering is very different from that of just five years ago. The Web 2.0 world is defined by new ways of understanding ourselves, of creating value in our culture, of running companies, and of working together.

Companies in this world are run more like artist studios or graduate projects—they are lightweight – they leverage the work of the thousands who came before them and the potentially millions who use their products or services over the Web. Craigslist, for example, is challenging the entire newspaper industry not by hiring thousands of workers and taking on publishers on their own turf, but by reorganizing how people find, create, and use classifieds: how they turn information into actionable knowledge. A very simple idea, but also a very powerful one.

These companies thrive by innovating in assembly—they find new ways to sort, organize, and present options to their customers. Information is a commodity, after all; knowledge is king. If you can help someone refine information into knowledge, if you can help them make sense of the world, you win. And it takes a special kind of person to do that—a knowledge architect—which is exactly what you all have chosen as your field of study and, I hope, your careers.

I’ve noticed that the best companies and ideas are driven by these knowledge architects, who realize that in an information age, the best business to be in is that of the refinery.

Each of you has the chance to make this your life’s work. I say, well done—and don’t let us down. For as Nikola Tesla, hero to Google co-founder Larry Page, once said:

Of all the frictional resistances, the one that most retards human movement is ignorance, what Buddha called “the greatest evil in the world.” The friction which results from ignorance can be reduced only by the spread of knowledge … No effort could be better spent.