Random, But Interesting Archives | John Battelle's Search Blog

Buh-Bye, CableCo

By - February 13, 2014

When it comes to television business models and the endless debate about "cutting the cord," I consider myself in the "fast follower" camp – I'm not willing to endure the headaches and technical backflips required to get rid of cable entirely, but I sure am open to alternatives should they present themselves. I'm eager for Aereo to get to San Francisco, but until it does, I've stuck with my way-too-expensive cable subscription.

My rants on cable's products (here's my favorite – still true after 8 years!) and services (please don't get me started) are well known by friends and family, but because I have had no simple alternative, I pay more than $200 a month to Comcast, which announced plans today to consolidate its market by purchasing one of its largest peers, Time Warner.

But in the past few months, a clever, $35 device from Google has started to chip away at Comcast’s grip on my family television viewership. You’ve probably heard about it – it’s called Chromecast. It’s a neat little hack – it looks like a USB storage dongle, but you plug it into any HDMI port on a standard flatscreen. It uses wifi to sync with your mobile phone or tablet, and within minutes you are watching Netflix, YouTube, or your browser on your television. It’s kind of magic, and it’s changed how we watch TV completely.

The reason my Comcast bill is so high boils down to a luxury tax: I get charged something like ten bucks a month for "extra" cable boxes. I don't *need* these boxes, but if I *want* a TV screen in secondary places (my music room, office, etc.) I have to pay for the privilege. Turns out, I really only use those screens for watching movies and shows on demand. Comcast's on-demand service is so lame, I can't really describe it here, so I prefer to use Netflix or Hulu – both of which work with Chromecast. Goodbye, cable boxes!

It’ll be interesting to watch services slowly but – to my mind – inevitably bail on the cablecos. First to go will have to be sports networks – I’d far rather subscribe to the MLB channel than overpay Comcast to see my beloved Giants. I imagine local news will be next – since they are often already available via the web (which you can stream via a Chrome browser).

In fact, there’s a ton of video on the web – much of it very high quality, but there’s really not been much *programming* of that video for audiences who live in a post-cable world. Well, I’ve joined that world, happily, and I hope the programming will soon catch up with the distribution. Chromecast just opened up its platform for third party applications – a big move that could bring a lot of innovation to “television” – something it desperately needs, given it’s been in the grips of monopoly for decades. Buh-bye, Cableco!

 


Bill Gates Active Again At Microsoft? Bad Idea.

By - February 04, 2014

This story reporting that Gates will return to Microsoft "one day a week" to focus on "product" has been lighting up the news this week. But while the idea of a founder returning to the mothership resonates widely in our industry (Jobs at Apple, Dorsey at Twitter), in Gates' case I don't think it makes much sense.

It’s no secret in our industry that Microsoft has struggled when it comes to product. It’s a very distant third in mobile (even though folks praise its offerings), its search engine Bing has struggled to win share against Google despite billions invested, and the same is true for Surface, which is well done but selling about one tablet for every 26 or so iPads (and that’s not counting Android). And then there’s past history – you know, when Gates was far more involved: the Zune (crushed by the iPod), that smart watch (way too early), and oh Lord, remember Clippy and Bob?

If anything, what Gates brought to the product party over the past two decades was a sense of what was going to be possible, rather than what is going to work right now. He’s been absolutely right on the trends, but wrong on the execution against those trends. And while his gravitas and brand would certainly help rally the troops in Redmond, counting on him to actually create product sounds like grasping at straws, and ultimately would prove a huge distraction.

Not to mention, a return to an active role at Microsoft would be a bad move for Gates' personal brand, which, along with Bill Clinton's, is one of the most remarkable transformation stories of our era. Lest we forget, Gates was perhaps the most demonized figure of our industry, pilloried and humbled by the US Justice Department and widely ostracized as an unethical, colleague-berating monopolist. The most famous corporate motto of our time – "Don't be evil" – can thank Microsoft for its early resonance. In its formative years, Google was fervently anti-Microsoft, and it made hay on that positioning.

Bill Gates has become the patron saint of  philanthropy and the poster child of rebirth, and from what I can tell, rightly so. Why tarnish that extraordinary legacy by coming back to Microsoft at this late date? Working one day a week at a company famous for its bureaucracy won’t change things much, and might in fact make things worse – if the product teams at Microsoft spend their time trying to get Gates’ blessing instead of creating product/market fit, that’s just adding unnecessary distraction in a market that rewards focus and execution.

If Gates really wants to make an impact at Microsoft, he’d have to throw himself entirely back into the company, focusing the majority of his intellect and passion on the company he founded nearly 40 years ago. And I’m guessing he doesn’t want to do that – it’s just too big a risk, and it’d mean he’d have to shift his focus from saving millions of lives to beating Google, Apple, and Samsung at making software and devices. That doesn’t sound like a very good trade.

 

Step One: Turn The World To Data. Step Two?

By - February 03, 2014

Is the public ready to accept the infinite glance of our own technology? That question springs up nearly everywhere I look these days, from the land rush in "deep learning" and AI companies (here, here, here) to the cultural stir that accompanied Spike Jonze's Her. The relentless flow of Snowden NSA revelations, commercial data breaches, and our culture's ongoing battle over personal data further frame the question.

But no single development made me sit up and ponder as much as the recent news that Google’s using neural networks to decode images of street addresses. On its face, the story isn’t that big a deal: Through its Street View program, Google collects a vast set of images, including pictures of actual addresses. This address data is very useful to Google, as the piece notes: “The company uses the images to read house numbers and match them to their geolocation. This physically locates the position of each building in its database.”

In the past, Google has used teams of humans to "read" its street address images – in essence, to render images into actionable data. But using neural network technology, the company has trained computers to extract that data automatically – and with a level of accuracy that meets or beats human operators. Not to mention, it's a hell of a lot faster, cheaper, and more scalable.
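
If you're curious what "training computers to extract that data" even looks like under the hood, here's a toy sketch of a digit-reading convolutional network in PyTorch. To be clear, this is an illustrative assumption on my part – Google's actual system reads whole multi-digit street numbers end to end, and its architecture, input sizes, and training data look nothing like this little example.

```python
# A toy sketch of a digit-reading convnet, NOT Google's production model.
# Layer sizes, the 32x32 grayscale input, and the 10-way output are all
# illustrative assumptions.
import torch
import torch.nn as nn

class DigitReader(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local edge/stroke filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One cropped house-number digit in, a score for each digit 0-9 out.
model = DigitReader()
fake_crop = torch.randn(1, 1, 32, 32)   # stand-in for a Street View crop
print(model(fake_crop).argmax(dim=1))   # predicted digit
```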

Sure, this means Google doesn’t have to pay people to stare at pictures of house numbers all day, but to me, it means a lot more. When I read this piece, the first thing that popped into my mind was “anything that can be seen by a human, will soon be seen by a machine.” And if it’s of value, it will be turned into data, and that data will be leveraged by both humans and machines – in ways we don’t quite fathom given our analog roots.

I remember putting up my first street number, on a house in Marin my wife and I had just purchased that was in need of some repair. I went to the hardware store, purchased a classic "6" and "3", and proudly hammered them onto a fence facing the street. It was a public declaration, to be sure – I wanted to be found by mailmen, housewarming partygoers, and future visitors. But when I put those numbers on my fence, I wasn't wittingly creating a new entry in the database of intentions. Google Street View didn't exist back then, and the act of placing a street number in public view was a far more "private" declaration. Sure, my address was a matter of record – with a bit of shoe leather, anyone could go down to public records and find out where I lived. But as the world becomes machine-readable data, we're slowly realizing the full power of the word "public."

In the US and many other places, the “public” has the right to view and record anything that is in sight from a public place – this is the basis for tools like Street View. Step one of Street View was to get the pictures in place – in a few short years, we’ve gotten used to the idea that nearly any place on earth can now be visited as a set of images on Google. But I don’t think we’ve quite thought through what happens when those images turn into data that is “understood” by machines. We’re on the cusp of that awakening. I imagine it’s going to be quite a story.

Update: Given the theme of "turning into data," I was remiss not to mention the concept of "faceprints" in this piece. As addresses are to our homes, our faces are to our identities; see this NYT piece for an overview.

 

The Four Phases of CES: I, Consumer, Am Electronic

By - January 08, 2014

CES is a huge event, one that almost everybody in our industry has been to at least once, if not multiple times. I've been going for the better part of 25 years, so I've seen a lot of change. And after my first day here, the biggest takeaway I'm getting is a sense of deja vu.

Back in the early days, CES was mostly about exciting new televisions, clock radios, and stereo components. Call that the first incarnation of CES – literally, electronics for consumers. Stuff you plugged in, stuff that “electrified” your life with sound and video.

But starting in the mid-to-late 1980s, a brash new industry was starting to take over the "buzz" on the show floor – personal computers. PCs were becoming a "consumer electronic," and for the next decade or so, PCs were the "it" industry at CES. The PC era of CES was its second incarnation, and it brought our industry onto the show floor in a big way.

By 2000, CES morphed yet again, and the brash new industry at the center of buzz was the consumer Internet. Yahoo, AOL, and myriad now-dead startups competed for headlines and hot-party tickets. The Consumer Internet marked CES’s third phase change.

A fourth phase came in the last five to ten years – the mobile wave. Nokia and Blackberry, then Samsung, Apple, and Google became major players at the event.

The funny thing is, as each of these waves has hit CES, none of them has eliminated the wave before it. CES was always a crazy quilt where you'd find cheesy aftermarket car stereo folks right next to the slickest new laptop, or the latest robotic toy for your kid.

But this year, I think the biggest trend is how these once-separate parts of CES are getting mashed together. In a way, CES is once again all about consumer electronics, but they are all computers now, they are all connected through the Internet, and they are mobile and location aware.

The two biggest stories here are the rise of the connected car, on the one hand, and the Internet of Things, on the other. The auto industry has always been here, but mostly represented by after-market players who did massive car stereo installations. Now every major auto maker is here in force, touting their cars as mobile, Internet-connected experience machines with app stores and serious computing power. Auto makers know their future lies in the marriage of their “platform” – the car itself – with the digital fabric of our lives.

Meanwhile, the other big story is how everything – from baby clothes to the machines that wash them – has become a "consumer electronic" – thanks to the Internet of Things. (Stephen Wolfram has even announced a computable database of "connected devices.") Autos are simply one more connected device – albeit one of our most prized and expensive ones.

I’m left, after one day of meetings and chance encounters, with the sense that four massive tectonic plates – consumer devices, PCs, mobile platforms, and the Internet – are crashing up against one another, causing chaos, opportunity, and more change than we’ve seen in any previous era. There are few standards or touchstones for how this will all work out, but one thing is clear – at the center of this stands the individual – the “Consumer.” And the essence of who that person is is described by data – data that is computed through devices, platforms, and Internet services. We have a long, long way to go before our industry creates a seamless experience across all consumer electronics, based on that data. But to me, that’s probably the biggest opportunity there is. I’ll unpack this idea in a later post, but for now, it’s off to more CES madness.

 

Predictions 2014: A Difficult Year To See

By - January 03, 2014

This post marks the 10th edition of my annual predictions – it's quite possibly the only thing I've consistently done for a decade in my life (besides this site, of course, which is going into its 12th year).

But gazing into 2014 has been the hardest of the bunch – and not because the industry is getting so complicated. I’ve been mulling these predictions for months, yet one overwhelming storm cloud has been obscuring my otherwise consistent forecasting abilities. The subject of this cloud has nothing – directly – to do with digital media, marketing, technology or platform ecosystems – the places where I focus much of my writing. But while the topic is orthogonal at best, it’s weighing heavily on me.

So what's making it harder than usual to predict what might happen over the coming year? In a phrase, it's global warming. I know, that's not remotely the topic of this site, nor is it a subject in which I can claim even a modicum of expertise. But as I bend to the work of a new year in our industry, I can't help but wonder if our efforts to create a better world through technology are made rather small when compared to the environmental alarm bells going off around the globe.

I've been worried about the effects of our increasingly technologized culture on the earth's carefully balanced ecosystem for some time now. But, perhaps like you, I've kept it to myself, and assuaged my concerns with a vague sense that we'll figure it out through a combination of policy, individual and social action, and technological solutions. Up until recently, I felt we had enough time to reverse the impact we've inflicted on our environment. It seemed we were figuring it out, slowly but surely. The world was waking up to the problem, and new policies were coming online (new mileage requirements, the phase-out of the incandescent bulb, etc.). And I took my own incremental steps – installing a solar system that provides nearly 90% of our home's energy, converting my heating to solar/electrical, buying a Prius for my kids.

But I’m not so sure this mix of individual action and policy is enough – and with every passing day, we seem to be heading toward a tipping point, one that no magic technological solution can undo.

If you're wondering what's made me feel this way, a couple of choice articles from 2013 (and there were too many to count) should do the trick. One "holy shit" moment for me was a piece on ocean acidification, relating scientific discoveries that the oceans are turning acidic at a pace faster than at any time since a mass extinction event 300 million years ago. But that article is a puff piece compared to this downer, courtesy of The Nation: The Coming Instant Planetary Emergency. I know – the article is published in a liberal publication, so pile on, climate deniers… Regardless, I suggest you read it. Or, if you prefer whistling past our collective graveyard, which feels like a reasonable alternative, spare yourself the pain. I can summarize it for you: Nearly every scientist paying attention has concluded global warming is happening far faster, and with far more devastating impact, than previously thought, and we're very close to the point where events will create a domino effect – receding Arctic ice allowing for huge releases of super-greenhouse methane gases, for instance. In fact, we may well be past the point of "fixing" it, if we ever could.

And who wants to spend all day worrying about futures we can't fix? That's no fun, and it's the opposite of why I got into this industry nearly 30 years ago. As Ben Horowitz pointed out recently, one key meaning of technology is "a better way of doing things." So if we believe that, shouldn't we bend our technological infrastructure to the world's greatest problem? If not – why not? Are the climate deniers right? I for one don't believe they are. But I can't prove they aren't. So this constant existential anxiety grows within me – and if conversations with many others in our industry are any indication, I'm not alone.

In a way, the climate change issue reminds me of the biggest story inside our industry last year: Snowden’s NSA revelations. Both are so big, and so hard to imagine how an individual might truly effect change, that we collectively resort to gallows humor, and shuffle onwards, hoping things will work out for the best.

And yet somehow, this all leads me to my 2014 predictions. The past nine prediction posts have been, at their core, my own gut speaking (a full list is at the bottom of this post). I don't do a ton of research before I sit down to write; it's more of a zeitgeistian exposition. It includes my hopes and fears for our industry, an industry I believe to be among the most important forces on our planet. Last year, for example, I wrote my predictions based mainly on what I wished would happen, not what I thought realistically would.

For this year’s 2014 predictions, then, I’m going to once again predict what I hope will happen. You’ll see from the first one that I believe our industry, collectively, can and must take a lead role in addressing our “planetary emergency.” At least, I sure hope we will. For if not us…

1. 2014 is the year climate change goes from a political debate to a global force for unification and immediate action. It will be seen as the year the Internet adopted the planet as its cause.

Because the industry represents the new guard of power in our society,  Internet, technology, and media leaders will take strong positions in the climate change debate, calling for dramatic and immediate action, including forming the equivalent of a “Manhattan Project” for technological solutions to all manner of related issues – transportation, energy, carbon sequestration, geoengineering, healthcare, economics, agriculture.

While I am skeptical of a technological “silver bullet” approach to solving our self-created problems, I also believe in the concept of “hybrid vigor” – of connecting super smart people across multiple disciplines to rapidly prototype new approaches to otherwise intractable problems. And I cannot imagine one company or government will solve the issue of climate change (no matter how many wind farms or autonomous cars Google might create), nor will thousands of well meaning but loosely connected organizations (or the UN, for that matter).

I can imagine that the processes, culture, and approaches to problem solving enabled by the Internet can be applied to the issue of climate change. The lessons of disruptors like Google, Twitter, and Amazon, as well as newer entrants like airbnb, Uber, and Dropbox, can be applied to solving larger problems than where to sleep, how to get a cab, or where and how our data are accessed. We need the best minds of our society focused on larger problems – but first, we need to collectively believe that problem is as large as it most likely is.

2014, I hope, is the year the problem births a real movement – a platform, if you will, larger than any one organization, one industry, or one political point of view. The only time we’ve seen a platform like that emerge is the Internet itself. So there’s a certain symmetry to the hypothesis – if we are to solve humankind’s most difficult problem, we’ll have to adopt the core principles and lessons of our most elegant and important creation: the Internet. The solution, if it is to come from us, will be native to the Internet. I can’t really say how, but I do know one thing: I want to be part of it, just like I wanted to be part of the Internet back in 1987.

I’ll admit, it’s kind of hard to write anything more after that. I mean, who cares if Facebook has a good or bad year if the apocalypse is looming? Well, it’s entirely possible that my #1 prediction doesn’t happen, and then how would that look, batting .000 for the year (I’ve been batting better than .500 over the past decade, after all)? To salvage some part of my dignity, I’m going to go ahead and try to prognosticate a bit closer to home for the next few items.

2. Automakers adopt a "bring your own" approach to mobile integration. The world of the automobile moves slowly. It can take years for a new model to move from design to prototype to commercially available model. Last year I asked a senior executive at a major auto manufacturer the age-old question: "What business are you in?" His reply, after careful consideration, was this: "We are in the mobile experience business." I somewhat expected that reply, so I followed up with another question: "How on earth will you compete with Apple and Google?" Somewhat exasperated, he said this was the existential question his company had to face.

2014 will be the year auto companies come to terms with this question. It won't happen all at once, because nothing moves that fast in the auto industry. While most car companies have some kind of connectivity with smart phone platforms, for the most part they are pretty limited. Automakers find themselves in the same position as carriers (an apt term, when you think about it) back at the dawn of the smart phone era – will they attempt to create their own interfaces for the phones they market, or will they allow third parties to own the endpoint relationship to consumers? It's tempting for auto makers to think they can jump into the mobile user interface business, but I think they're smart enough to know they can't win there. Our mobile lives require an interface that understands us across myriad devices – the automobile is just one of those devices. The smartest car makers will realize this first, and redesign their "device platforms" to work seamlessly with whatever primary mobile UI a consumer picks. That means building a car UI not as an end in itself, but as a platform for others to build upon.

Remember, these are predictions I *hope* will happen. It’s entirely possible that automakers will continue the haphazard and siloed approach they’re currently taking with regard to mobile integration, simply because they lack conviction on whether or not they want to directly compete with Google and Apple for the consumer’s attention inside the car. Instead, they should focus on creating the best service possible that integrates and extends those already dominant platforms.

3. By year's end, Twitter will be roundly criticized for doing basically what it did at the beginning of the year. The world loves a second act, and will demand one of Twitter now that the company is public. The company may make a spectacular acquisition or two (see below), but in the main, its moves in 2014 will likely be incremental. This is because the company has plenty of dry powder in the products and services it already has in its arsenal – it'll roll out a full-fledged exchange, a la FBX, it'll roll out new versions of its core ad products (with a particular emphasis on video), it'll create more media-like "events" across the service, it'll continue its embrace of television and popular culture…in other words, it will consolidate the strengths it already has. And 12 months from now, everyone will be tweeting about how Twitter has run out of ideas. Sound familiar, Facebook?

Now this isn’t what I hope for the company to do, but I already wrote up my great desire for Twitter last year. Still waiting on that one (and I’m not sure it’s realistic).

4. Twitter and Apple will have their first big fight, most likely over an acquisition. Up till now, Twitter and Apple have been best of corporate friends. But in 2014, the relationship will fray, quite possibly because Apple comes to the realization it has to play in the consumer software and services world more than it has in the past.  At the same time, there will be a few juicy M&A targets that Twitter has its eye on, targets that most likely are exactly what Apple covets as well. I’ll spare you the list of possible candidates, as most likely I’d miss the mark. But I’d expect entertainment to be the most hotly contested space.

5. Google will see its search-related revenues slow, but will start to extract more revenues from its Android base. Search as we know it is moving to another realm (for more, see my post on Google Now). Desktop search revenues, long the cash cow of Google, will slow in 2014, and the company will be looking to replace them with revenues culled from its overall dominance in mobile OS distribution. I'm not certain how Google will do this – perhaps it will buy Microsoft's revenue-generating patents, or maybe it'll integrate commerce into Google Now – but clearly Google needs another leg to its revenue stool. 2014 will be the year it builds one.

6. Google Glass will win – but only because Google licenses the tech, and a third party will end up making the version everyone wants. Google Glass has been lambasted as “Segway for your face” – and certainly the device is not yet a consumer hit. But a year from now, the $1500 price tag will come down by half or more, and Google will realize that the point isn’t to be in the hardware business, it’s to get Google Now to as many people as possible. So Google will license Glass sometime next year, and the real consumer accessory pros (Oakley? GoPro? Nike? Nest?!) will create a Glass everyone wants.   

7. Facebook will buy something really big. My best guess? Dropbox. Facebook knows it’s become a service folks use, but don’t live on anymore. And it will be looking for ways to become more than just a place to organize a high school reunion or stay in touch with people you’d rather not talk to FTF. It wants and needs to be what its mission says it is: “to give people the power to share and make the world more open and connected.” The social graph is just part of that mission – Facebook needs a strong cloud service if it wants a shot at being a more important player in our lives. Something like Dropbox (or Box) is just the ticket. But to satisfy the egos and pocketbooks of those two players, Facebook will have to pay up big time. It may not be able to, or it may decide to look at Evernote instead. I certainly hope the company avoids the obvious but less-substantive play of Pinterest. I like Pinterest, but that’s not what Facebook needs right now.

As with Twitter, this prediction does not reflect my greatest hope for Facebook, but again, I wrote that last year, and again…oh never mind.

8. Overall, 2014 will be a great year for the technology and Internet industries, again, as measured in financial terms. There are dozens of good companies lined up for IPOs, a healthy appetite for tech plays in the markets, a strong secular trend in adtech in particular, and any number of “point to” successes from 2013. That strikes me as a recipe for a strong 2014. However, if I were predicting two years out, I’d leave you with this warning: Squirrel your nuts away in 2014. This won’t last forever.

Related:

Predictions 2013

2013: How I Did

Predictions 2012

2012: How I Did

Google’s Year End Zeitgeist: Once Again, Insights Lacking

By - December 17, 2013

Great photo, but not one we searched for….

It’s become something of a ritual – every year Google publishes its year-end summary of what the world wants, and every year I complain about how shallow it is, given what Google *really* knows about what the world is up to.

At least this year Google did a good job of turning its data into a pretty media experience. There are endless scrolling visual charts, there's an emotional, highly produced video, and there's a ton of lists to explore once you drill down. But there's also a Google+ integration that, frankly, was utterly confusing. Called #my2013 Gallery (sorry, there's no link for it), it showed photos from a bunch of people I didn't know, then invited me to add my own. Not sure what that was about. The "Search Trends Globe" shows top search terms by location, but you can't click through to see results. Odd.

So kudos to Google for giving us a lot of eye candy – there are top ten lists for all manner of categories, from dog breeds to NFL teams to memes – all by geography. But the search capability is, well – confusing. Once you search inside what you think is the year-end Zeitgeist, you end up getting Google Trends data, and you're kind of lost, not sure if you're in the year-end special anymore. Bummer.

And while there are far more lists than I've seen before, there's still no … insight. Even the "What is…" function, which was an interesting, if limited, feature from last year's Zeitgeist, is gone this year, most likely a victim of political correctness. (For why, see my post about last year's Zeitgeist).

I sure wish Google would surprise us with Zeitgeist, but once again, no dice.

Facebook Must Win The Grownup Vote

By - December 16, 2013

It's all over the media these days: Facebook is no longer cool, Facebook has lost its edge with teenagers, Facebook is now establishment.

Well duh. Teenagers aren’t loyal to much of anything, especially Internet stuff. Tonight I had four of them at my table, ranging in age from 15 to 17. All of them agreed that Facebook was over. It was a unanimous, instant, and unemotional verdict. They agreed they had to have a Facebook page. But none of them much cared about it anymore. Facebook was now work – and they’re kids after all. Who wants to work?

And when I asked if their little brothers and sisters were into Facebook? Nope, not one.

I turned to the 10-year-old at the table, my youngest daughter. I realized she had never once mentioned a desire to get a Facebook page, and seemed bored by the discussion overall. Of course, she's already on Snapchat.

Interesting. When my now 15-year-old was 10, she begged us night and day to get a Facebook page. Now, she uses it "because she has to."

What about Facebook-purchased Instagram? Still good, but the Facebook connection is seen as a negative. Snapchat? Great, but warning signs abound (they're not sure whether they trust the service). Vine? Super cool. Twitter? Well… they know Twitter is coming in their lives – something they've dabbled in, but will grow into once they've learned how to be a proper public person.

You know, a grownup.

Facebook, which started as a site for college kids (OK, OK, Harvard kids), must know it has to get in front of this particular parade. Because as far as I can tell, Facebook's future is with grownups now. And grownups are more world-wise, more demanding, and more thoughtful than college kids. But the Facebook app still feels very… high school.

Maybe that’s why Facebook is talking about becoming your personal newspaper (really? A news site?!).

I wrote many moons ago about how Facebook, to win on the Internet, would need to let go of its data lock-in, and compete as a service irrespective of its natural social graph monopoly. It looks like the competition is on – a generation is growing up with Facebook being an optional service – an absolutely unimaginable state of affairs just three or so years ago.

 

Do you think Facebook can make the transition? 

Twitter Ads Are Getting, Um, More Noticeable

By - November 14, 2013

Note: Somehow this post was deleted from my CMS. I am reposting it now.

Two of my favorite companies in the world are Twitter and American Express. I have literally dozens of good pals at both. So this is said with love (and a bit more pointedly at Twitter than Amex, which is just one of many advertisers I’ve encountered in the situation described below. And Amex is one of the most innovative marketers on the planet, so again, much respect).

But here goes: I'm seeing too many image-heavy promoted tweets in my feed when I first come to the service. Here's a picture:

(screenshot of the promoted tweet in question)

Seeing a big display ad (because let’s be clear, that’s what this is) is fine the first few times I come to the site. But after a while, it gets in the way – especially if it’s  inconsistent with my expectations of the service. The tweet above was first posted on November 4th – more than a week ago. Twitter is all about what’s happening now – it’s not about an ongoing promotion with reach and frequency goals. This is probably the fifth time I’ve seen this ad, and that’s not good for anyone – not the publisher, not the platform, and not the user. Now, if the creative had changed, that’s something to talk about. And if it was relevant to what was happening now…even better. But the same message, five times in ten days? That’s an old model that doesn’t translate so well to Twitter, I’d warrant.
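
For what it's worth, the fix I'm asking for here is plain old frequency capping tied to how stale the creative is. Here's a minimal sketch of that logic in Python – the cap values, the time window, and the data shapes are all my own hypothetical assumptions, not anything Twitter has described:

```python
# Hypothetical frequency-cap check: skip a promoted tweet if this user has
# already seen this exact creative too often, or if the creative is stale.
# Cap values and data shapes are illustrative assumptions only.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class Impression:
    creative_id: str
    shown_at: datetime

def should_show(creative_id: str,
                first_posted: datetime,
                history: List[Impression],
                max_impressions: int = 3,
                window: timedelta = timedelta(days=10),
                max_age: timedelta = timedelta(days=7)) -> bool:
    now = datetime.utcnow()
    if now - first_posted > max_age:          # creative is stale (e.g., posted Nov 4th)
        return False
    recent = [i for i in history
              if i.creative_id == creative_id and now - i.shown_at <= window]
    return len(recent) < max_impressions       # under the cap within the window

# Example: a creative first posted 8 days ago gets suppressed outright.
print(should_show("amex_promo", datetime.utcnow() - timedelta(days=8), []))  # False
```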

Just making an observation – I know the algorithms – and the content creators – are hard at work fixing this problem. What do you think?

Ubiquitous Video: Why We Need a Robots.txt For the Real World

By - November 13, 2013

Last night I had an interesting conversation at a small industry dinner. Talk turned to Google Glass, in the context of Snapchat and other social photo sharing apps.

Everyone at the table agreed:  it was inevitable – whether it be Glass, GoPro, a button in your clothing or some other form factor – personalized, “always on” streaming of images will be ubiquitous. Within a generation (or sooner), everyone with access to mass-market personal electronics (i.e., pretty much everyone with a cell phone now) will have the ability to capture everything they see, then share or store it as they please.

That’s when a fellow at the end of the table broke in. “My first response to Glass is to ask: How do I stop it?”

The dinner was private, so I can’t divulge names, but this fellow was a senior executive in the banking business. He doesn’t want consumers streaming video from inside his banks, nor does he want his employees “Glassing” confidential documents or the keys to the safe deposit boxes.

All heads at the table nodded, as if this scenario were right around the corner – and the implications went far beyond privacy at a bank. Talk turned to many other situations where people agreed they'd not want to be "always on." It could be simple – a bad hair day – or complicated: a social pariah who just wanted to be left alone. All in all, people were generally sympathetic to the notion of "the right to be left alone" – what in this case might be called "the right to not be in a public stream."

But how to enforce such a right? The idea of banning devices like Glass infringes on the wearer's rights, and besides, it just won't scale – tiny cameras will soon be everywhere, and they'll be basically imperceptible. Sure, some places (like banks, perhaps) will have scanning devices and might be able to afford the imposition of such bans. But in public places? Most likely impossible and quite possibly illegal (in the US, for instance, there is an established right to take photographs in public spaces).

This is when my thoughts turned to one of the most powerful devices we have to manage each other: the Social Contract. I believe we have entered an era in which we must renegotiate our contract with society – that invisible but deeply powerful set of norms that guides "civil behavior." Glass (among other artifacts) is at the nexus of this negotiation – the debate laid bare by a geeky pair of glasses.

Back at the table, someone commented that it’d be great if there was a way to let people know you didn’t want to be “captured” right now. Some kind of social cloaking signal*, perhaps. Now, we as humans are damn good at social signaling. We’ve built many a civilization on elaborate sets of social mores.  So how might our society signal a desire to not be “streamed”? Might we develop the equivalent of a “robots.txt” for the real world?

For those of you not familiar with robots.txt, it’s essentially a convention adopted early in the Web’s life, back when search became a powerful distributor of attention, and the search index the equivalent of a public commons (Zittrain wrote a powerful post about it here). Some sites did not want to be indexed by search engines, for reasons ranging from a lack of resources (a search engine’s spiders put a small tax on a site’s resources) to privacy.  No law was enacted to create this convention, but every major search engine obeys its strictures nevertheless. If a site’s robots.txt tells an indexing spider to not look inside, the robot moves along.
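
If you've never looked at how the convention works in practice, a polite crawler simply asks the site's robots.txt before fetching anything, and Python ships a parser for exactly this. A minimal sketch (example.com is just a placeholder):

```python
# How a polite crawler honors robots.txt, using only the standard library.
# example.com is a placeholder; substitute the site you intend to crawl.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()  # fetch and parse the site's directives

# Only fetch the page if the site's robots.txt allows our user agent.
if robots.can_fetch("MyCrawler", "https://example.com/private/report.html"):
    print("allowed to index")
else:
    print("robot moves along")  # the convention described above
```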

It’s an elegant solution, and it works, as long as everyone involved keeps their part of the social contract. Powerful recriminations occur if an actor abuses the system – miscreants are ostracized, banned from social contact with “good” actors.

So might we all, in some not-so-distant future, have our own "robots.txt" – a signal that we can instrument at will, one which is constantly on, a beacon which others can pick up and understand? Such an idea seems to me not at all far-fetched. We already all carry the computing power and bandwidth on our person to effect such a signal. All we need is a reason for it to come online. Glass, or something like it, may well become that reason.
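
Purely as a thought experiment – none of this is a real standard, and every field name below is invented – the payload such a personal "robots.txt" beacon broadcasts could be as small as this:

```python
# A hypothetical "personal robots.txt" payload: a tiny preference blob a phone
# could broadcast and a nearby camera could honor. The field names and the
# broadcast transport (BLE, Wi-Fi Aware, etc.) are invented for illustration.
import json
import time

def make_capture_preference(person_id: str, allow_streaming: bool) -> str:
    payload = {
        "version": 1,
        "subject": person_id,           # pseudonymous identifier, not a real name
        "allow_streaming": allow_streaming,
        "issued_at": int(time.time()),  # so devices can ignore stale beacons
    }
    return json.dumps(payload)

def device_should_record(beacons: list) -> bool:
    # A well-behaved device records only if no nearby beacon opts out.
    return all(json.loads(b)["allow_streaming"] for b in beacons)

beacon = make_capture_preference("anon-5f2c", allow_streaming=False)
print(device_should_record([beacon]))  # False: the social contract says move along
```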

The instrumentation of our new social contract is closer at hand than we might think.

*We already have a deeply meaningful "social cloaking device" – it's called our wardrobe. But I'll get into that topic in another post.

 

More than 200,000 Minutes of Engagement, and Counting

By - November 08, 2013

(screenshot: the Behind the Banner visualization)

Some of you may recall “Behind the Banner,” a visualization of the programmatic adtech ecosystem that I created with The Office for Creative Research and Adobe back in May. It was my attempt at explaining the complexities of a world I’ve spent several years engaged in, but often find confounding. I like to use Behind the Banner in talks I give, and folks always respond positively to it, in particular when I narrate the story as it plays.

I realized yesterday that I didn't know how many people had actually viewed the thing, and naturally as a creator I was curious. So I pinged my colleagues at Adobe, who of course are analytics pros, among many other things. What came back was pretty cool: The visualization has been viewed nearly 50,000 times, with an average time spent of well over 4 minutes per view. That's more than 200,000 minutes of engagement, or more than one-third of a year! It's certainly got nothing on the Lumascape, but it's neat nonetheless.
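
For anyone checking the back-of-envelope math behind "more than one-third of a year" (using the rounded figures above of roughly 50,000 views at about 4 minutes each):

```python
# Rough arithmetic behind the engagement claim, using the post's rounded figures.
views = 50_000
minutes_per_view = 4
total_minutes = views * minutes_per_view                 # 200,000 minutes
minutes_per_year = 365 * 24 * 60                         # 525,600 minutes
print(total_minutes, total_minutes / minutes_per_year)   # ~0.38 of a year
```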

The version above is really a “beta” – we all wanted to do so much more, but we had to ship it in time for the CM Summit this past May. I’m eager to make it better – create an embeddable version, lay down a narrative track, add more companies and richer detail, fix things folks feel need fixing. If anyone out there is game to help, let me know. It’d be a fun project to work on!

(PS – we found out last week that Behind the Banner has been shortlisted for the Kantar Information Is Beautiful awards. Hurrah!)