Joints After Midnight & Rants Archives | Page 3 of 43 | John Battelle's Search Blog

Thinking Out Loud: Potential Information

By - March 20, 2014

Plenty of potential at the top of this particular system.

If you took first-year physics in school, you’re familiar with the concepts of potential and kinetic energy. If you skipped physics, here’s a brief review: Kinetic energy is energy possessed by bodies in motion. Potential energy is energy stored inside a body that has the potential to create motion. It’s sort of kinetic energy’s twin – the two work in concert, defining how pretty much everything moves around in physical space.

I like to think of potential energy as a force that’s waiting to become kinetic. For example, if you climb up a slide, you have expressed kinetic energy to overcome the force of gravity and bring your “mass” (your body) to the top. Once you sit at the top of that slide, you are full of the potential energy created by your climb – which you may once again express as kinetic energy on your way back down. Gravity provides what is known as the field, or system, which drives all this energy transfer.
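In textbook form, the slide example is a back-of-the-envelope calculation (ignoring friction, for a mass m at height h, with gravitational acceleration g):

```latex
% Energy stored by climbing to the top of the slide
U = mgh
% Energy of the same mass in motion at speed v
K = \tfrac{1}{2}mv^2
% On the way down, gravity converts one into the other;
% if no energy is lost, the speed at the bottom is
mgh = \tfrac{1}{2}mv^2 \quad\Rightarrow\quad v = \sqrt{2gh}
```

Note that the mass cancels out of the last equation – everyone, big or small, arrives at the bottom of a frictionless slide at the same speed.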

For whatever reason, these principles of kinetic and potential energy have always resonated with me. They are easily grasped, to be sure, but what appeals to me is how evocative they are. Everything around us is either in motion or it’s not – objects are either animated by kinetic energy (a rock flying through the air), or they are at rest, awaiting a kinetic event which might create action and possibly some narrative consequence (a rock lying on the street, picked up by an angry protestor….).

To me, kinetic and potential energy are the bedrock of narrative – there is energy all around us, and once that energy is set in motion, the human drama unfolds. The rock provides mass, the protestor brings energy, and gravity animates the consequence of a stone thrown…

Because we are physical beings, the principles of motion and force are hard wired into how we navigate the world – we understand gravity, even if we can’t run the equations to prove its cause and effect. But when it comes to the world of digital information, we struggle with a framework for understanding cause and effect – in particular with how information interacts with the physical world. We speak of “software eating the world,” “the Internet of Things,” and we massify “data” by declaring it “Big.” But these concepts remain for the most part abstract. It’s hard for many of us to grasp the impact of digital technology on the “real world” of things like rocks, homes, cars, and trees. We lack a metaphor that hits home.

But lately I’ve been using the basic principles of kinetic and potential energy as a metaphor in casual conversations, and it seems to have some resonance. Now, I’m not a physicist, and it’s entirely possible I’ve mangled the concepts as I think out loud here. Please pile on and help me express this as best I can. But in the meantime…

…allow me to introduce the idea of potential information. Like potential energy, the idea of potential information is that all physical objects contain the potential to release information if placed in the right system. In the physical world, we have a very large scale system already in place – it’s called gravity. Gravity provides a field of play, the animating system which allows physical objects (a rock, a child at the top of a slide) to become kinetic and create a narrative (a rock thrown in anger, a child whooping in delight as she slides toward the sand below).

It seems to me that if we were to push this potential information metaphor, then we need our gravity – our system that allows for potential information to become kinetic, and to create narratives that matter. To my mind, that system is digital technology, broadly, and the Internet, specifically. When objects enter the system of technology and the Internet, they are animated with the potential to become information objects. Before contact with the Internet, they contain potential information, but that information is repressed, because it has no system which allows for its expression.

In this framework, it strikes me that many of the most valuable companies in the world are in the business of unlocking potential information – of turning the physical into information. Amazon and eBay unlocked the value of merchandise’s potential information. Airbnb turns the potential information of spare bedrooms into kinetic information valued at nearly $10 billion and counting. Uber unlocked the potential information trapped inside transportation systems.  Nest is animating the potential information lurking in all of our homes. And Facebook leveraged the potential information lurking in our real world relationships.

I’d wager that the most valuable companies yet to be built will share this trait of animating potential information. One of the best ideas I’ve heard in the past few weeks was a pitch from an inmate at San Quentin (part of The Last Mile, an amazing program worthy of all your support). This particular entrepreneur, a former utilities worker, wanted to unlock all the potential information residing in underground gas, sewage, and other utilities. In fact, nearly every good idea I’ve come across over the past few years has had to do with animating potential information of some kind.

Which brings us to Google – and back to Nest. In its first decade, Google was most certainly in the business of animating potential information, but it wasn’t physical information. Instead, Google identified an underutilized class of potential information – the link – and transformed it into a new asset – search. A link is not a physical artifact, but Google treated it as if it were, “mapping” the Web and profiting from that new map’s extraordinary value.

Now the race is on to create a new map – a map of all the potential information in the real world. What’s the value of potential information coming off a jet engine, or  a wind turbine? GE’s already on it. What about exploiting the potential information created by your body? Yep, that’d be Jawbone, FitBit, Nike, and scores of others. The potential information inside agriculture? Chris Anderson’s all over it. And with Nest, Google is becoming a company that unlocks not only the information potential of the Web, but of the physical world we inhabit (and yes, it’s already made huge and related moves via its Chauffeur, Earth, Maps, and other projects).

Of course, potential information can be leveraged for more than world-beating startups. The NSA understands the value of potential information – that’s why the agency has been storing as much of it as it possibly can. What does it mean when government has access to all that potential information? (At least we are having the dialog now – it seems if we didn’t have Edward Snowden, we’d have to create him, no?)

Our world is becoming information – but then again, it’s always had that potential. Alas, I’m just a layman when it comes to understanding information theory, and how information actually interacts with physical mass (and yes, there’s a lot of science here, far more than I can grok for the purposes of this post.) But the exciting thing is that we get to be present at the moment all this information is animated into narratives that will have dramatic consequences for our world. This is a story I plan to read deeply in over the coming year, and I hope you’ll join me as I write more about it here.


We Have Yet to Clothe Ourselves In Data. We Will.

By - March 12, 2014

We are all accustomed to the idea of software “Preferences” – that part of the program where you can personalize how a particular application looks, feels, and works. Nearly every application that matters to me on my computer – Word, Keynote, GarageBand, etc. – has preferences and settings.

On a Macintosh computer, for example, “System Preferences” is the control box of your most important interactions with the machine.

I use the System Preferences box at least five times a week, if not more.

And of course, on the Internet, there’s a yard sale’s worth of preferences: I’ve got settings for Twitter, Facebook, WordPress, Evernote, and of course Google – where I probably have a dozen different settings, given I have multiple identities there, and I use Google for mail, calendar, docs, YouTube, and the like.

Any service I find important has settings. It’s how I control my interactions with The Machine. But truth is, Preferences are no fun. And they should be.

The problem: I mainly access preferences when something is wrong. In the digital world, we’ve been trained to see “Preferences” as synonymous with “Dealing With Shit I Don’t Want To Deal With.” I use System Preferences, for example, almost exclusively to deal with problems: fixing the orientation of my monitors when moving from work to home, finding the right Wifi network, debugging a printer, re-connecting a mouse or keyboard to my computer. And I only check Facebook or Google preferences to fix things too – to opt out of ads, resolve an identity issue, or enable some new software feature. Hardly exciting stuff.

Put another way, Preferences is a “plumbing” brand – we only think about it when it breaks.

But what if we thought of it differently? What if managing your digital Preferences was more like….managing your wardrobe?

A few years back I wrote The Rise of Digital Plumage, in which I posited that sometime soon we’ll be wearing the equivalent of “digital clothing.” We’ll spend as much time deciding how we want to “look” in the public sphere of the Internet as we do getting dressed in the morning (and possibly more). We’ll “dress ourselves in data,” because it will become socially important – and personally rewarding –  to do so. We’ll have dashboards that help us instrument our wardrobe, and while their roots will most likely stem from the lowly Preference pane, they’ll soon evolve into something far more valuable.

This is a difficult idea to get your head around, because right now, data about ourselves is warehoused on huge platforms that live, in the main, outside our control. Sure, you can download a copy of your Facebook data, but what can you *do* with it? Not much. Platforms like Facebook are doing an awful lot with your data – that’s the trade for using the service. But do you know how Facebook models you to its partners and advertisers? Nope. Facebook (and nearly all other Internet services) keep us in the dark about that.

We lack an ecosystem that encourages innovation in data use, because the major platforms hoard our data.

This is retarded, in the nominal/verb sense of the word. Facebook’s picture of me is quite different from Google’s, Twitter’s, Apple’s, or Acxiom’s*. Imagine what might happen if I, as the co-creator of all that data, could share it all with various third parties that I trusted? Imagine further if I could mash it up with other data entities – be they friends of mine, bands I like, or even brands?

Our current model of data use, in which we outsource individual agency over our data to huge factory farms, will soon prove a passing phase. We are at once social and individual creatures, and we will embrace any technology that allows us to express who we are through deft weavings of our personal data – weavings that might include any number of clever bricolage with any number of related cohorts. Fashion has its tailors, its brands, its designers and its standards (think blue jeans or the white t-shirt). Data fashion will develop similar players.

Think of all the data that exists about you – all those Facebook likes and posts, your web browsing and search history, your location signal, your Instagrams, your supermarket loyalty card, your credit card and Square and PayPal purchases, your Amazon clickstream, your Fitbit output – think of each of these as threads which might be woven into a fabric, and that fabric then cut into a personalized wardrobe that describes who you are, in the context of how you’d like to be seen in any given situation.

Humans first started wearing clothing about 170,000 years ago. “Fashion” as we know it today is traced to the rise of European merchant classes in the 14th century. Well before that, clothing had become a social fact. A social fact is a stricture imposed by society – for example, if you don’t wear clothing, you are branded as something of a weirdo.

Clothing is an extremely social artifact –  *what* you wear, and how, are matters of social judgement and reciprocity. We obsess over what we wear, and we celebrate those “geniuses” who have managed to escape this fact (Einstein and Steve Jobs both famously wore the same thing nearly every day).

There’s another reason the data fabric of your life is not easily converted into clothing: at the moment, digital clothing is not a social fact. There’s no social pressure for your “look” to be a certain way, because thanks to our outsourcing of our digital identity to places like Facebook, Twitter, and Google+, we all pretty much look the same to each other online. As I wrote in Digital Plumage:

How strange is it that we as humans have created an elaborate, branded costume culture to declare who we are in the physical world, but online, we’re all pretty much wearing khakis and blue shirts?

As it relates to data, we are naked apes, but this is about to change. It’s far too huge an opportunity.

Consider: The global clothing industry grosses more than $1 trillion annually. We now spend more time online than we do watching television. And as software eats the world, it turns formerly inanimate physical surroundings into animated actors on our digital stage. As we interact with these data-lit spaces, we’ll increasingly want to declare our preferences inside them via digital plumage.

An example. Within a few years, nearly every “hip” retail store will be lit with wifi, sensors, and sophisticated apps. In other words, software will eat the store. Let’s say you’re going into an Athleta outlet. When you enter, the store will know you’ve arrived, and begin to communicate with your computing device – never mind if it’s Glass, a mobile phone, or some other wearable. As the consumer in this scenario, won’t you want to declare “who you are” to the retail brand’s sensing device? That’s what you do in the real world, no? And won’t you want to instrument your intent – provide signal that will allow the store to understand what you’re there for? And wouldn’t the “you” at Athleta be quite different from, say, the “you” that you become when shopping at Whole Foods or attending a Lord Huron concert?

Then again, you could be content with whatever profile Facebook has on you (or Google, or… whoever). Good luck with that.

I believe we will embrace the idea of describing and declaring who we are through data, in social context. It’s wired into us. We’ve evolved as social creatures. So I believe we’re at the starting gun of a new industry. One where thousands of participants take our whole data cloth and stitch it into form, function, and fashion for each of us. Soon we’ll have a new kind of “Preferences” – social preferences that we wear, trade, customize, and buy and sell.

In a way, younger generations are already getting prepared for such a world – what is the selfie but a kind of digital dress up?

Lastly, as with real clothing, I believe brands will be the key driving force in the rise of this industry. As I’m already over 1,000 words, I’ll write more on that idea in another post. 

*(fwiw, I am on Acxiom’s board)

To Be Clear: Do Not Build Your Brand House On Land You Don’t Own

By - February 28, 2014

I took a rigorous walk early this morning, a new habit I’m trying to adopt – today was Day Two. Long walks force a certain meditative awareness. You’re not moving so fast that you miss the world’s details passing by – in fact, you can stop to inspect something that might catch your eye. Today I explored an abandoned log cabin set beside a lake, for example. I’ve sped by that cabin at least a thousand times on my mountain bike, but when you’re walking, discovery is far more of an affordance.

Besides the cabin, the most remarkable quality of today’s walk was the water – it’s (finally) been raining hard here in Northern California, and the hills and forests of Marin are again alive with the rush of water coursing its inevitable path toward the sea. White twisting ribbons cut through each topographic wrinkle, joining forces to form great streams at the base of any given canyon. The gathering roar of a swollen stream, rich with foam and brown earth – well, it’s certainly  good for the soul.

I can’t say the same of my daily “walks” through the Internet. Each day I spend an hour or more reading industry news. I’m pretty sure you do too – that’s probably the impetus for your visit here – chances are you clicked on a link on Facebook, LinkedIn, Twitter, Google, or in email. Someone you know said “check this out,” or – and bless you if this is the case – you actually follow my musings and visit on a regular basis.

But the truth is, we now mostly find content via aggregated streams. Streams are the new distribution. We dip in and out of streams, we curate and search our streams, we abandon barren streams and pick up new streams, hoping they might prove more nourishing. Back before streams ruled the world, of course, we had a habit of visiting actual “pools” – sites that we found worthy because they did a good job of creating content that we valued. (Before that, I think we read actual publications. But that was a long, long time ago…)

Which got me thinking. What makes a stream? In the real world, streams are made from water, terrain, and gravity. To belabor the metaphor to the media business, content is the water, publishers are the terrain, and our thirst for good content is the gravity.

As publishers – and I include all marketing brands in this category – the question then becomes: “What terrain do we claim as ours?”

Deciding where to lay down roots as a publisher is an existential choice. Continuing the physical metaphor a bit further, it’s the equivalent of deciding what land to buy (or lease). If your intention is to build something permanent and lasting on that land, it’s generally a good idea to *own* the soil beneath your feet.

This is why I wrote Put Your Taproot Into the Independent Web two years ago. If you’re going to build something, don’t build on land someone else already owns. You want your own land, your own domain, your own sovereignty.

Trouble is, so much of the choice land – the land where all the *people* are – is already owned by someone else: By Google, Facebook, Twitter, LinkedIn, Yahoo, and Apple (in apps, anyway). These platforms are where the people are, after all. It’s where the headwaters form for most of the powerful streams on the Internet. It’s tempting to build your brand on those lands – but my counsel is simple: Don’t. There’s plenty of land out there on the Rest of The Internet. In fact, there’s as much land as you want, and what you make of it is up to you as a publisher.

Quick: Name one successful publisher that built its brand on the back of a social platform. Can’t do it? Neither can I, unless you count sites like Upworthy. And those flying near the social network sun risk getting seriously burned. There’s a reason publishers don’t build on top of social platforms: publishers are an independent lot, and they naturally understand the value of owning your own domain. Publishers don’t want to be beholden to the shifting sands of inscrutable platform policies. So why on earth would a brand?

Despite the fact that my once-revolutionary bromide “all brands are publishers” is now commonplace, most brands still don’t quite understand how to act like a publisher.

Which takes me to this piece, Facebook is not making friends on Madison Avenue (Digiday). Besides the quippy headline and the rather obvious storyline (a burgeoning Internet company failing to satisfy agencies? Pretty much Dog-Bites-Man), the thing that got me to perk up was this:

One point of frustration is Facebook’s ongoing squeezing of traffic to organic brand content. A digital agency exec described a recent meeting with Facebook that turned contentious. In what was meant to be a routine meeting, the exec said the Facebook rep told him the brands the agency works with would now have to pay Facebook for the same amount of reach they once enjoyed automatically. That position and Facebook’s perceived attitude have led to some disillusionment on Madison Avenue, where many bought into the dream peddled by Facebook that brands could set up shop on the platform as “publishers” and amass big audiences on their own….

…The cruel irony in all of this is that brands themselves greatly helped Facebook by giving it free advertising in their TV commercials and sites, urging their customers to “like” the brand — and paying Facebook to pile up likes. Facebook has returned the favor by choking off  brands’ access to those communities. That’s one expensive and frustrating lesson that it’s better to own than rent.

Put another way: “Wait, I did what you asked, Facebook, and set up a big content site on your platform that drew a fair number of visitors organically. Now you’ve changed the rules of the game, and you want me to pay to get their attention?!”

Yup. You leased your land, Mr. Brand Marketer, and the rent’s going up. If I were you, I’d get back to your own domain. Spend your money building something worthy, then spend to drive people there. Your agencies have entire creative and media departments that are good at just such practices. They might even spend a fair amount carefully purchasing distribution through Facebook’s streams. I’m guessing Facebook will be happy to take your money. But there’s no point in paying them twice.

 

LinkedIn Is Now A Publishing Platform. Cool. But First Get Your Own Site.

By - February 21, 2014

I’ve been a LinkedIn “Influencer” for a year or so, and while the honorific is flattering, I’m afraid I’ve fallen down in my duties to post there. The platform has proven it has significant reach, and for folks like me, who thrive on attention for words written, it’s certainly an attractive place to write. Of course, it pays nothing, and LinkedIn makes all the money on the page views my words drive, but … that’s the quid pro quo. We’ll put yer name in lights, kid, and you bring the paying customers.

One reason I don’t post on LinkedIn that often is my habit of writing here: there are very few times I come up with an idea that doesn’t feel like it belongs on my own site. And by the time I’ve posted it here, it seems like overkill to go ahead and repost it over on LinkedIn (even though they encourage exactly that kind of behavior). I mean, what kind of an egomaniac needs to post the same words on two different platforms? And from what I recall, Google tends to penalize you in search results if it thinks you’re posting in more than one place.

But this news, that LinkedIn is opening up its publishing platform to all comers, has changed my mind. From now on I’m going on record as a passionate advocate of posting to your own site first, then posting to LinkedIn (or any other place, such as Medium).

Why? Well, it comes down to owning your own domain. Building out a professional profile on LinkedIn certainly makes sense, and bolstering that cv with intelligent pieces of writing is also a great idea. But if you’re going to take the time to create content, you should also take the time to create a home for that content that is yours and yours alone. WordPress makes it drop dead easy to start a site. Take my advice, and go do it. Given the trendlines of digital publishing, where more and more large platforms are profiting from, and controlling, the works of individuals, I can’t stress enough: Put your taproot in the independent web. Use the platforms for free distribution (they’re using you for free content, after all). And make sure you link back to your own domain. That’s what I plan to do when I post this to LinkedIn.  Right after I post this here.

We Are Not Google, Therefore, We Are

By - February 06, 2014

If you read me regularly, you know I am a fan of programmatic adtech. In fact, I think it’s one of the most important developments of the 21st century. And over the past few quarters, adtech has gotten quite hot, thanks to the recent successes of Rocket Fuel (up to 50 and holding from its open at 29), Criteo (trading above its already inflated opening price of 31), and, by extension, Facebook and Twitter (don’t get me started, but both these companies should be understood as programmatic plays, in my opinion).

But while I like all those companies, I find Rubicon’s recent filing far more interesting. Why? Well, here’s the money shot of the S-1:

Independence. We believe our independent market position enables us to better serve buyers and sellers because we are not burdened with any structural conflicts arising from owning and operating digital media properties while offering advertising purchasing solutions to buyers.

Ah, there it is, in a nutshell: “We are not Google, therefore, we are.” Rubicon uses the words “independent” or “independence” more than half a dozen times in its S-1, about the same number of times the word “Google” is invoked.

I am in full support of an independent adtech ecosystem. It’s vitally important that the world have options when it comes to what flavor of programmatic infrastructure it uses to transact – and when I say the “world” I mean everybody, from publishers to advertisers, consumers to service providers. Criteo and Rocket Fuel are important companies, but they don’t directly compete with Google – their business leverages buying strategies to maximize profits. Rubicon, on the other hand, has a full adtech stack and is focused on publishers (and yes, that’s what sovrn is as well).

Over time, we won’t be talking about “publishers” and “advertisers,” we’ll be talking about “consumers” and “services.” And the infrastructure that connects those two parties should not be a default – it should be driven by competition between independent players.

So bravo, Rubicon, for making that statement so clearly in your S-1. I wish you luck.

How Facebook Changed Us, and How We Might Change Again

By - February 05, 2014

If you weren’t under a rock yesterday, you know Facebook turned ten years old this week (that’s a link to a Zuckerberg interview on the Today Show, so yep, hard to miss). My favorite post on the matter (besides Sara’s musings here and here – she was at Harvard with Zuck when the service launched) is from former Facebook employee Blake Ross, who penned a beauty about the “Rules” that have fallen over the past ten years. Re/code covers it here, and emphasizes how much has changed in ten years – what was once sacred is now mundane. To wit:

– No, you can’t let moms join Facebook because Facebook is for students.

– No, you can’t put ads in newsfeed because newsfeed is sacred.

– No, you can’t allow people to follow strangers because Facebook is for real-world friends.

– No, you can’t launch a standalone app because integration is our wheelhouse.

– No, you can’t encourage public sharing because Facebook is for private sharing.

– No, you can’t encourage private sharing because Facebook is moving toward public sharing.

– No, you can’t encourage public sharing because Facebook is moving toward ultra-private sharing between small groups.

And this one’s a snapchat with about 3 seconds left, so hurry up and bludgeon someone with it:

– No, you can’t allow anonymity because Facebook is built on real identity.

None of these pillars came down quietly. They crashed with fury, scattering huddles of shellshocked employees across watering holes like dotted brush fires after a meteor strike.

Re/code ends its post with “makes you wonder what might change in the next 10 years.” Well yes, it certainly does.

A close read of Ross’ post leaves me wondering about “informational personhood.” He considers all the change at Facebook, and his role in it as a sometimes frustrated employee, concluding that what he got from the experience was perspective:

It took me probably half a dozen meteoric nothings before I learned how to stop worrying and love the bomb. A congenital pessimist, I gradually began to see the other side of risk. Now, when the interns wanted to mix blue and yellow, I could squint and see green; but I thought the sun might still rise if everything went black. I felt calmer at work. I began to mentor the newer hires who were still afraid of meteors. Today I watch Facebook from a distance with 1.2 billion other survivors, and my old fears charm like the monster under the bed: I couldn’t checkmate this thing in a single move even if I wanted to. But even now, I know someone over there is frantically getting the band back together.

Fortunately, this blossoming resilience followed me home from work:

My very chemistry has changed. In relationships, hobbies, and life, I find myself fidgeting in the safe smallness of the status quo. I want more from you now, and I want more from myself, and I’m less afraid of the risks it’ll take to get there because I have breathed through chaos before and I believe now—finally—that we’ll all still be here when the band stops playing.

This is, of course, just a staple of adulthood. It’s what we were missing that night when meteors left us crater-faced for senior prom and we all thought our lives were over. It’s called perspective, and it’s the best thing I got from growing up Facebook.

Hmmm. So many things to ponder here. The constant renegotiation of the rules at Facebook changed his “very chemistry.” A fascinating observation – heated debate about the rules of our social road made Ross a different person. Did this happen to us all? Is it happening now? For example, are we, as a culture, “getting used to” having the policies around our informational identities – our “infopersons” – routinely renegotiated by a corporate entity?

I think so far the answer is yes. I’m not claiming that’s wrong, per se, but rather, it is interesting and noteworthy. This perspective that Ross speaks of – this “growing up” – it bears more conversation, more exploration. What are the “Rules” right now, and will they change in ten years, or less? (And these “Rules” need not be only internal to Facebook – I mean “Rules” from the point of view of ourselves as informational people.)

Some that come to mind for me include:

– I don’t spend that much of my time thinking about the information I am becoming, but when I do, it makes me uneasy.

– I can always change the information that is known about me, if it’s wrong, but it’s a huge PITA.

– I can always access the information that is known about me, if I really want to do the work (but the truth is, I usually don’t).

– I know the information about me is valuable, but I don’t expect to derive any monetary value from it.

– It’s OK for the government to have access to all this information, because we trust the government. (Like it or not, this is in fact true by rule of law in the US).

– It’s OK for marketers to have information about me, because it allows for free Internet services and content. (Ditto)

– I understand that most of the information that makes up my own identity is controlled by large corporations, because in the end, I trust they have my best interests at heart (and if not, I can always leave).

What rules do you think much of our society currently operates under? And are they up for renegotiation, or are we starting to set them in stone?

Bill Gates Active Again At Microsoft? Bad Idea.

By - February 04, 2014

(image) This story reporting that Gates will return to Microsoft “one day a week” to focus on “product” has been lighting up the news this week. But while the idea of a founder returning to the mothership resonates widely in our industry (Jobs at Apple, Dorsey at Twitter), in Gates’ case I don’t think it makes much sense.

It’s no secret in our industry that Microsoft has struggled when it comes to product. It’s a very distant third in mobile (even though folks praise its offerings), its search engine Bing has struggled to win share against Google despite billions invested, and the same is true for Surface, which is well done but selling about one tablet for every 26 or so iPads (and that’s not counting Android). And then there’s past history – you know, when Gates was far more involved: the Zune (crushed by the iPod), that smart watch (way too early), and oh Lord, remember Clippy and Bob?

If anything, what Gates brought to the product party over the past two decades was a sense of what was going to be possible, rather than what was going to work right now. He’s been absolutely right on the trends, but wrong on the execution against those trends. And while his gravitas and brand would certainly help rally the troops in Redmond, counting on him to actually create product sounds like grasping at straws, and ultimately would prove a huge distraction.

Not to mention, a return to an active role at Microsoft would be a bad move for Gates’ personal brand, which, along with Bill Clinton’s, is one of the most remarkable transformation stories of our era. Lest we forget, Gates was perhaps the most demonized figure of our industry, pilloried and humbled by the US Justice Department and widely ostracized as an unethical, colleague-berating monopolist. The most famous corporate motto of our time – “Don’t be evil” – can thank Microsoft for its early resonance. In its formative years, Google was fervently anti-Microsoft, and it made hay on that positioning.

Bill Gates has become the patron saint of philanthropy and the poster child of rebirth, and from what I can tell, rightly so. Why tarnish that extraordinary legacy by coming back to Microsoft at this late date? Working one day a week at a company famous for its bureaucracy won’t change things much, and might in fact make things worse – if the product teams at Microsoft spend their time trying to get Gates’ blessing instead of creating product/market fit, that’s just adding unnecessary distraction in a market that rewards focus and execution.

If Gates really wants to make an impact at Microsoft, he’d have to throw himself entirely back into the company, focusing the majority of his intellect and passion on the company he founded nearly 40 years ago. And I’m guessing he doesn’t want to do that – it’s just too big a risk, and it’d mean he’d have to shift his focus from saving millions of lives to beating Google, Apple, and Samsung at making software and devices. That doesn’t sound like a very good trade.


Step One: Turn The World To Data. Step Two?

By - February 03, 2014

Is the public ready to accept the infinite glance of our own technology? That question springs up nearly everywhere I look these days, from the land rush in “deep learning” and AI companies (here, here, here) to the cultural stir that accompanied Spike Jonze’s Her. The relentless flow of Snowden NSA revelations, commercial data breaches, and our culture’s ongoing battle over personal data further frame the question.

But no single development made me sit up and ponder as much as the recent news that Google’s using neural networks to decode images of street addresses. On its face, the story isn’t that big a deal: Through its Street View program, Google collects a vast set of images, including pictures of actual addresses. This address data is very useful to Google, as the piece notes: “The company uses the images to read house numbers and match them to their geolocation. This physically locates the position of each building in its database.”

In the past, Google has used teams of humans to “read” its street address images – in essence, to render images into actionable data. But using neural network technology, the company has trained computers to extract that data automatically – and with a level of accuracy that meets or beats human operators. Not to mention, it’s a hell of a lot faster, cheaper, and more scalable.
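To make the idea of “rendering images into actionable data” concrete, here’s a toy sketch in Python: it classifies a tiny hand-made digit bitmap by matching it against known templates. This is a crude stand-in for the neural networks Google actually uses – the bitmaps and templates below are invented purely for illustration:

```python
# Toy sketch: reading a "house number" digit from a tiny 5x3 bitmap by
# nearest-neighbor template matching. A vastly simplified stand-in for
# the neural networks described above; all data here is made up.

TEMPLATES = {
    "1": ("010", "110", "010", "010", "111"),
    "7": ("111", "001", "010", "010", "010"),
}

def distance(a, b):
    """Count mismatched pixels between two 5x3 bitmaps."""
    return sum(pa != pb
               for row_a, row_b in zip(a, b)
               for pa, pb in zip(row_a, row_b))

def read_digit(image):
    """Return the template label closest to the input bitmap."""
    return min(TEMPLATES, key=lambda d: distance(TEMPLATES[d], image))

# A slightly noisy "1" (one pixel flipped in the bottom row) is still
# recognized correctly:
noisy_one = ("010", "110", "010", "010", "110")
print(read_digit(noisy_one))  # -> 1
```

The real systems replace the hand-drawn templates with millions of learned parameters, but the essential move is the same: pixels go in, a symbol comes out, and that symbol joins a database.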

Sure, this means Google doesn’t have to pay people to stare at pictures of house numbers all day, but to me, it means a lot more. When I read this piece, the first thing that popped into my mind was “anything that can be seen by a human, will soon be seen by a machine.” And if it’s of value, it will be turned into data, and that data will be leveraged by both humans and machines – in ways we don’t quite fathom given our analog roots.

I remember putting up my first street number, on a house in Marin my wife and I had just purchased that was in need of some repair. I went to the hardware store, purchased a classic “6” and “3”, and proudly hammered them onto a fence facing the street. It was a public declaration, to be sure – I wanted to be found by mailmen, housewarming partygoers, and future visitors. But when I put those numbers on my fence, I wasn’t wittingly creating a new entry in the database of intentions. Google Street View didn’t exist back then, and the act of placing a street number in public view was a far more “private” declaration. Sure, my address was a matter of record – with a bit of shoe leather, anyone could go down to public records and find out where I lived. But as the world becomes machine-readable data, we’re slowly realizing the full power of the word “public.”

In the US and many other places, the “public” has the right to view and record anything that is in sight from a public place – this is the basis for tools like Street View. Step one of Street View was to get the pictures in place – in a few short years, we’ve gotten used to the idea that nearly any place on earth can now be visited as a set of images on Google. But I don’t think we’ve quite thought through what happens when those images turn into data that is “understood” by machines. We’re on the cusp of that awakening. I imagine it’s going to be quite a story.

Update: Given the theme of “turning into data,” I was remiss not to mention the concept of “faceprints” in this piece. As addresses are to our homes, our faces are to our identities; see this NYT piece for an overview.


Note to Interwebs: Pinterest Can’t Be, And Won’t Be, Only About Images.

By - January 21, 2014

Pinterest is an interesting service – built entirely on the curation and sharing of images, and valued at billions of dollars. But when it comes time to lean into a business model, every service has to find and leverage its core DNA – and for Pinterest, it’s clear it can’t be images. That bus left a while ago (and Facebook was driving it, with Instagram riding shotgun and Snapchat….oh, never mind).

Anyway, two bits of news today that I think help us understand where Pinterest is going. First, Pinterest’s announcement that it’s getting into recipe search. And second, news that Pinterest is experimenting with GIFs.

To me, the conclusion is this: Pinterest is about collecting, curating, and sharing media objects, regardless of what they are. They can be images, which is how Pinterest got to its first jaw-dropping valuation. Or they can be….anything. Recipes? Sure. GIFs? Uh-huh. Web pages? Why not? Videos? Sure! Ummmm…files? Well, yeah, of course.

It seems everyone is converging on a simple set of facts: Our lives are digital, and we wish to share our lives. Pinterest came at it through images, artfully curated. Facebook came at it through friends, cunningly organized. Dropbox came to it via files, cleverly clouded. Countless others will come at the same opportunity through countless other ways. And countless others – Flickr, del.icio.us, Friendster, Myspace – have already tried.

It’s getting a bit crowded, don’t you think?

Predictions 2014: A Difficult Year To See

By - January 03, 2014

This post marks the 10th edition of my annual predictions – it’s quite possibly the only thing I’ve consistently done for a decade in my life (besides this site, of course, which is going into its 12th year).

But gazing into 2014 has been the hardest of the bunch – and not because the industry is getting so complicated. I’ve been mulling these predictions for months, yet one overwhelming storm cloud has been obscuring my otherwise consistent forecasting abilities. The subject of this cloud has nothing – directly – to do with digital media, marketing, technology or platform ecosystems – the places where I focus much of my writing. But while the topic is orthogonal at best, it’s weighing heavily on me.

So what’s making it harder than usual to predict what might happen over the coming year? In a phrase, it’s global warming. I know, that’s not remotely the topic of this site, nor is it a subject in which I can claim even a modicum of expertise. But as I bend to the work of a new year in our industry, I can’t help but wonder if our efforts to create a better world through technology are made rather small when compared to the environmental alarm bells going off around the globe.

I’ve been worried about the effects of our increasingly technologized culture on the earth’s carefully balanced ecosystem for some time now. But, perhaps like you, I’ve kept it to myself, and assuaged my concerns with a vague sense that we’ll figure it out through a combination of policy, individual and social action, and technological solutions. Up until recently, I felt we had enough time to reverse the impact we’ve inflicted on our environment. It seemed we were figuring it out, slowly but surely. The world was waking up to the problem, and new policies were coming online (new mileage requirements, the phase-out of the incandescent bulb, etc.). And I took my own incremental steps – installing a solar system that provides nearly 90% of our home’s energy, converting my heating to solar/electrical, buying a Prius for my kids.

But I’m not so sure this mix of individual action and policy is enough – and with every passing day, we seem to be heading toward a tipping point, one that no magic technological solution can undo.

If you’re wondering what’s made me feel this way, a couple of choice articles from 2013 (and there were too many to count) should do the trick. One “holy shit” moment for me was a piece on ocean acidification, relating scientific discoveries that the oceans are turning acidic at a pace faster than any time since a mass extinction event 300 million years ago. But that article is a puff piece compared to this downer, courtesy The Nation: The Coming Instant Planetary Emergency. I know – the article is published in a liberal publication, so pile on, climate deniers… Regardless, I suggest you read it. Or, if you prefer whistling past our collective graveyard, which feels like a reasonable alternative, spare yourself the pain. I can summarize it for you: Nearly every scientist paying attention has concluded global warming is happening far faster, and with far more devastating impact, than previously thought, and we’re very close to the point where events will create a domino effect – receding Arctic ice allowing for huge releases of super-greenhouse methane gases, for instance. In fact, we may well be past the point of “fixing” it, if we ever could.

And who wants to spend all day worrying about futures we can’t fix? That’s no fun, and it’s the opposite of why I got into this industry nearly 30 years ago. As Ben Horowitz pointed out recently, one key meaning of technology is “a better way of doing things.” So if we believe that, shouldn’t we bend our technologic infrastructure to the world’s greatest problem? If not – why not? Are the climate deniers right? I for one don’t believe they are. But I can’t prove they aren’t. So this constant existential anxiety grows within me – and if conversations with many others in our industry are any indication, I’m not alone.

In a way, the climate change issue reminds me of the biggest story inside our industry last year: Snowden’s NSA revelations. Both are so big, and so hard to imagine how an individual might truly effect change, that we collectively resort to gallows humor, and shuffle onwards, hoping things will work out for the best.

And yet somehow, this all leads me to my 2014 predictions. The past nine prediction posts have been, at their core, my own gut speaking (a full list is at the bottom of this post). I don’t do a ton of research before I sit down to write; it’s more of a zeitgeistian exposition. It includes my hopes and fears for our industry, an industry I believe to be among the most important forces on our planet. Last year, for example, I wrote my predictions based mainly on what I wished would happen, not what I thought realistically would.

For this year’s 2014 predictions, then, I’m going to once again predict what I hope will happen. You’ll see from the first one that I believe our industry, collectively, can and must take a lead role in addressing our “planetary emergency.” At least, I sure hope we will. For if not us…

1. 2014 is the year climate change goes from a political debate to a global force for unification and immediate action. It will be seen as the year the Internet adopted the planet as its cause.

Because the industry represents the new guard of power in our society, Internet, technology, and media leaders will take strong positions in the climate change debate, calling for dramatic and immediate action, including forming the equivalent of a “Manhattan Project” for technological solutions to all manner of related issues – transportation, energy, carbon sequestration, geoengineering, healthcare, economics, agriculture.

While I am skeptical of a technological “silver bullet” approach to solving our self-created problems, I also believe in the concept of “hybrid vigor” – of connecting super smart people across multiple disciplines to rapidly prototype new approaches to otherwise intractable problems. And I cannot imagine one company or government will solve the issue of climate change (no matter how many wind farms or autonomous cars Google might create), nor will thousands of well meaning but loosely connected organizations (or the UN, for that matter).

I can imagine that the processes, culture, and approaches to problem solving enabled by the Internet can be applied to the issue of climate change. The lessons of disruptors like Google, Twitter, and Amazon, as well as newer entrants like airbnb, Uber, and Dropbox, can be applied to solving larger problems than where to sleep, how to get a cab, or where and how our data are accessed. We need the best minds of our society focused on larger problems – but first, we need to collectively believe that problem is as large as it most likely is.

2014, I hope, is the year the problem births a real movement – a platform, if you will, larger than any one organization, one industry, or one political point of view. The only time we’ve seen a platform like that emerge is the Internet itself. So there’s a certain symmetry to the hypothesis – if we are to solve humankind’s most difficult problem, we’ll have to adopt the core principles and lessons of our most elegant and important creation: the Internet. The solution, if it is to come from us, will be native to the Internet. I can’t really say how, but I do know one thing: I want to be part of it, just like I wanted to be part of the Internet back in 1987.

I’ll admit, it’s kind of hard to write anything more after that. I mean, who cares if Facebook has a good or bad year if the apocalypse is looming? Well, it’s entirely possible that my #1 prediction doesn’t happen, and then how would that look, batting .000 for the year (I’ve been batting better than .500 over the past decade, after all)? To salvage some part of my dignity, I’m going to go ahead and try to prognosticate a bit closer to home for the next few items.

2. Automakers adopt a “bring your own” approach to mobile integration. The world of the automobile moves slowly. It can take years for a new model to move from design to prototype to commercially available model. Last year I asked a senior executive at a major auto manufacturer the age-old question: “What business are you in?” His reply, after careful consideration, was this: “We are in the mobile experience business.” I somewhat expected that reply, so I followed up with another question: “How on earth will you compete with Apple and Google?” Somewhat exasperated, he said this was the existential question his company had to face.

2014 will be the year auto companies come to terms with this question. It won’t happen all at once, because nothing moves that fast in the auto industry. While most car companies have some kind of connectivity with smart phone platforms, for the most part they are pretty limited. Automakers find themselves in the same position as carriers (an apt term, when you think about it) back at the dawn of the smart phone era – will they attempt to create their own interfaces for the phones they market, or will they allow third parties to own the endpoint relationship to consumers? It’s tempting for auto makers to think they can jump into the mobile user interface business, but I think they’re smart enough to know they can’t win there. Our mobile lives require an interface that understands us across myriad devices – the automobile is just one of those devices. The smartest car makers will realize this first, and redesign their “device platforms” to work seamlessly with whatever primary mobile UI a consumer picks. That means building a car UI not as an end in itself, but as a platform for others to build upon.

Remember, these are predictions I *hope* will happen. It’s entirely possible that automakers will continue the haphazard and siloed approach they’re currently taking with regard to mobile integration, simply because they lack conviction on whether or not they want to directly compete with Google and Apple for the consumer’s attention inside the car. Instead, they should focus on creating the best service possible that integrates and extends those already dominant platforms.

3. By year’s end, Twitter will be roundly criticized for doing basically what it did at the beginning of the year. The world loves a second act, and will demand one of Twitter now that the company is public. The company may make a spectacular acquisition or two (see below), but in the main, its moves in 2014 will likely be incremental. This is because the company has plenty of dry powder in the products and services it already has in its arsenal – it’ll roll out a full-fledged exchange, a la FBX, it’ll roll out new versions of its core ad products (with a particular emphasis on video), it’ll create more media-like “events” across the service, it’ll continue its embrace of television and popular culture…in other words, it will consolidate the strengths it already has. And 12 months from now, everyone will be tweeting about how Twitter has run out of ideas. Sound familiar, Facebook?

Now this isn’t what I hope for the company to do, but I already wrote up my great desire for Twitter last year. Still waiting on that one (and I’m not sure it’s realistic).

4. Twitter and Apple will have their first big fight, most likely over an acquisition. Up till now, Twitter and Apple have been best of corporate friends. But in 2014, the relationship will fray, quite possibly because Apple comes to the realization it has to play in the consumer software and services world more than it has in the past.  At the same time, there will be a few juicy M&A targets that Twitter has its eye on, targets that most likely are exactly what Apple covets as well. I’ll spare you the list of possible candidates, as most likely I’d miss the mark. But I’d expect entertainment to be the most hotly contested space.

5. Google will see its search related revenues slow, but will start to extract more revenues from its Android base. Search as we know it is moving to another realm (for more, see my post on Google Now). Desktop search revenues, long the cash cow of Google, will slow in 2014, and the company will be looking to replace them with revenues culled from its overall dominance in mobile OS distribution. I’m not certain how Google will do this – perhaps it will buy Microsoft’s revenue generating patents, or maybe it’ll integrate commerce into Google Now – but clearly Google needs another leg to its revenue stool. 2014 will be the year it builds one.

6. Google Glass will win – but only because Google licenses the tech, and a third party will end up making the version everyone wants. Google Glass has been lambasted as “Segway for your face” – and certainly the device is not yet a consumer hit. But a year from now, the $1500 price tag will come down by half or more, and Google will realize that the point isn’t to be in the hardware business, it’s to get Google Now to as many people as possible. So Google will license Glass sometime next year, and the real consumer accessory pros (Oakley? GoPro? Nike? Nest?!) will create a Glass everyone wants.   

7. Facebook will buy something really big. My best guess? Dropbox. Facebook knows it’s become a service folks use, but don’t live on anymore. And it will be looking for ways to become more than just a place to organize a high school reunion or stay in touch with people you’d rather not talk to FTF. It wants and needs to be what its mission says it is: “to give people the power to share and make the world more open and connected.” The social graph is just part of that mission – Facebook needs a strong cloud service if it wants a shot at being a more important player in our lives. Something like Dropbox (or Box) is just the ticket. But to satisfy the egos and pocketbooks of those two players, Facebook will have to pay up big time. It may not be able to, or it may decide to look at Evernote instead. I certainly hope the company avoids the obvious but less-substantive play of Pinterest. I like Pinterest, but that’s not what Facebook needs right now.

As with Twitter, this prediction does not reflect my greatest hope for Facebook, but again, I wrote that last year, and again…oh never mind.

8. Overall, 2014 will be a great year for the technology and Internet industries, again, as measured in financial terms. There are dozens of good companies lined up for IPOs, a healthy appetite for tech plays in the markets, a strong secular trend in adtech in particular, and any number of “point to” successes from 2013. That strikes me as a recipe for a strong 2014. However, if I were predicting two years out, I’d leave you with this warning: Squirrel your nuts away in 2014. This won’t last forever.

Related:

Predictions 2013

2013: How I Did

Predictions 2012

2012: How I Did