Media/Tech Business Models Archives | John Battelle's Search Blog

Facebook As Storyteller

By - September 25, 2011


Recently I was in conversation with a senior executive at a major Internet company, discussing the role of the news cycle in our industry. We were both bemoaning the loss of consistent “second day” storytelling – where a smart journalist steps back, does some reporting, asks a few intelligent questions of the right sources, and writes a longer-form piece about what a particular piece of news really means.

Instead, we have a scrum of sites that seem relentlessly engaged in an instant news cycle, pouncing on every tidbit of news in a race to be first with the story. And sure, each of these sites also publishes smart second-day analysis, but it gets lost in the thirty to fifty new stories posted each day. I bet if someone created a Venn diagram of the major industry news sites by topic, the overlap would far outweigh the unique coverage on any given day (or even hour).

This is all throat clearing to say that with the Facebook story last week, I am sensing a bit more of a “pause and consider” cycle developing. Sure, everyone jumped on the new Timeline and Open Graph news, but by day two, I noticed a lot more thought pieces, and most of them were either negative in tone, or sarcastic (or both). Examples include:

Can Facebook Become the Web? (Fortune)

The Facebook Timeline is the nearest thing I’ve seen to a digital identity (and it’s creepy as hell) (benwerd)

Dazed and Confused? Welcome to the Club (PC)

Facebook Just Shifted From Scale to Engagement (AdAge)

Facebook’s terrible plan to get us to share everything we do on the Web. (Slate)

@ F8: Zuckerberg Wants Users’ Whole Lives, But To What End? (PC)

Analysis of F8, Timeline, Ticker and Open Graph (Chris Saad)

All of life has been utterly… (Dan Lyons)

Now, I am not endorsing all these pieces as perfect second day posts, but collectively, they do give us a fairly good sense of the issues raised by Facebook’s big news.

I’d like to add one more thought. Perhaps this might be called a “second week” post, given it’s been four or five days since the big news. In any case, the thing I find most interesting about the new approach to sharing and publishing on Facebook lies in what Mark Zuckerberg said his new product would deliver: “The story of your life.”

Now, longtime readers know where I stand when it comes to telling the “story of your life.” I’m firmly in the camp that believes that story belongs to you, and should be told on your own domain, on your own terms, and with a very, very clear understanding of who owns that story (that’d be you). And this applies to brands as well: Your brand story should not be located on, or dependent on, any third-party platform. That’s the point of the web – anyone can publish, and no one has rights over what you publish (unless, of course, you break established law).

It was our inherent desire to tell “stories of our lives” that led to the explosion of blogging ten or so years ago. And crafting a rich narrative is just that, a craft (some elevate it to art). Yet Facebook’s new timeline, combined with the promiscuous sharing features of the Open Graph and some clever algorithms, promises to build a rich narrative timeline of your life, one that is rife with personal pictures, shared media objects (music, movies, publications), and lord knows what else (meals, trips, hookups – anything that might be recorded and shared digitally).

Now, I don’t find much wrong with this – most folks won’t spend their days obsessing over their timelines so as to present a perfectly crafted media experience. I’m guessing Facebook is counting on the vast majority of its users continuing to do what they’ve always done with Facebook’s curation of their data – ignore it, for the most part, and let the company’s internal algorithms manage the flow.

But our culture has always had a small percentage of folks who are native storytellers, people who do, in fact, obsess over each narrative they find worthy of relating. And to those people (which include media companies and brands falling over themselves to integrate with Open Graph), I once again make this recommendation: Don’t invest your time, or your narrative exertions, building your stories on top of the Facebook platform. Make them elsewhere, and then, sure, import them in if that’s what works for you. But individual stories, and brand stories, should be born and nurtured out in the Independent Web.

I’ve got plenty of philosophical reasons for saying this, which I won’t get into in this post (some are here). But allow me to relate a more economic argument: At present, there’s no way for our storytellers to make money directly from Facebook for the favor of crafting engaging narratives on top of the company’s platform. And from what I can divine, Facebook plans to make a fair amount of money selling advertising next to these new timeline profiles. As the profiles get richer and more multimedia, so will the advertisements. Do you think Facebook intends to cut its 800 million narrative agents into those advertising dollars? I didn’t think so.

Which is just fine, for most folks – for people who don’t see the “stories of their lives” as a way to make a living. But if crafting narrative is your business, or even just a hobby that brings in grocery money, I’d counsel staying on the open web. (BTW, crafting narratives is *every* brand’s business.) For you, Facebook is a wonderful distribution and community building platform. But it shouldn’t be where you build your house.


Just In Case Ya Missed It, Google Is Pushing Google+, Hard

By - September 21, 2011


This is what users who are not logged in see on the home page of Google. Clearly, folks at Google would very much like you to sign up for Google+.

There’s a lot more to say on this subject, but I’m on the road. Just wanted to capture this for posterity. Google+ is a major play by the company to put digital mortar between all of its offerings, and create a new sense of what the brand *means* – far more than search. It’s Google’s clear declaration that it will be a platform player alongside Microsoft and Apple. More on this over the weekend.

Now, given the antitrust fever that has hit Washington and other international capitals, this move might be viewed dimly by some competitors, depending on how things play out over time. It’s clearly “tying” dominance in one market – search – to another – social media. Then again, no one is forcing you to use Google….

The Future of Twitter Ads

By - September 14, 2011


As I posted earlier, last week I had a chance to sit down with Twitter CEO Dick Costolo. We had a pretty focused chat on Twitter’s news of the week, but I also got a number of questions in about Twitter’s next generation of ad products.

As usual, Dick was frank where he could be, and demurred when I pushed too hard. (I’ll be talking to him at length at Web 2 Summit next month.) However, a clear-enough picture emerged such that I might do some “thinking out loud” about where Twitter’s ad platform is going. That, combined with some very well-placed sources who are in a position to know about Twitter’s ad plans, gives me a chance to outline what, to the best of my knowledge, will be the next generation of Twitter’s ad offerings.

I have to say, if Twitter pulls it off, the company is sitting on a Very Big Play. But if you read my post Twitter and the Ultimate Algorithm, you already knew that.

In that post, I laid out what I thought to be Twitter’s biggest problem/opportunity: surfacing the right content, in the right context, to the right person at the right time. It’s one of the largest computer science and social engineering problems on the web today, a fascinating opportunity to leverage what is becoming a real-time database of folks’ implicitly and explicitly declared interests.

I also noted that should Twitter crack this code, its ad products would follow. As I wrote: “If Twitter can assign a rank, a bit of context, a “place in the world” for every Tweet as it relates to every other Tweet and to every account on Twitter, well, it can do the same job for every possible advertiser on the planet, as they relate to those Tweets, those accounts, and whatever messaging the advertiser might have to offer. In short, if Twitter can solve its signal to noise problem, it will also solve its revenue scale problem.”

Well, I’ve got some insights on how Twitter plans to make its first moves toward these ends.

First, Dick made it clear last week that Twitter will be widening the rollout of its “Promoted Tweets” product, which pushes Tweets from advertisers up to the top of a logged-in user’s timeline (coverage). Previously, brands could promote tweets only to people who followed those brands. (This of course drove advertisers to use Twitter’s “Promoted Accounts” product, which encouraged users to follow a brand’s Twitter handle. After all, if Promoted Tweets are only seen by your followers, you better have a lot of them).

Just recently, Twitter began to allow brands to push their Promoted Tweets to non-followers. This adds a ton of scale to a product that previously had limited reach. Remember, Twitter announced some pretty big numbers last week: more than 100 million “logged in” users, and nearly 400 million users a month on its website alone. Not to mention around 230 million tweets generated a day. All of these metrics are growing at a very strong clip, Twitter tells me.

All this demands we step back and ask an important question. Now that advertisers can push their Tweets to non-followers, how might they be able to target these ads?

Twitter’s answer, in short, is this: We’ll handle that, at least for now. The first iteration of the product does not allow the advertiser to determine who sees the promoted tweet. Instead, Twitter will find “lookalikes” – people who are similar in interests to folks who follow the brand. Characteristically, Twitter is going slow with this launch – as I understand it, initially just ten percent of its users will see this product.

(The implication of Twitter finding “lookalikes” should not be ignored – it means Twitter is confident in its ability to relate the interest graphs of its users one to another, at scale. This is part of the issue I wrote about in the “Ultimate Algorithm” post, a major and important development that is worth noting).
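To make “lookalikes” concrete: one naive way to relate interest graphs at small scale is to average a brand’s followers into a centroid interest vector, then rank non-followers by cosine similarity against that centroid. This is purely an illustrative sketch – Twitter has said nothing about its actual method, and the vectors here are hypothetical:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse interest vectors (dicts of interest -> weight)."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

def lookalikes(brand_followers, candidates, top_n=3):
    """Rank non-followers by similarity to the brand's average follower profile."""
    # Build the brand's centroid: average interest weights across its followers.
    centroid = {}
    for profile in brand_followers:
        for interest, w in profile.items():
            centroid[interest] = centroid.get(interest, 0.0) + w / len(brand_followers)
    scored = [(user, cosine(centroid, profile)) for user, profile in candidates.items()]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:top_n]
```

The hard part, of course, is doing something like this across hundreds of millions of accounts, which is why Twitter’s apparent confidence here is notable.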

Now, I’ve spent many years working with marketers, and even if Twitter’s lookalike approach has scale, I know brands won’t be satisfied with a pure “black box” answer from the service. They’ll want some control over how they target, whom they target, and when their ads show up, among other things. Google, for example, gives advertisers an almost overwhelming number of data points as input to its AdWords and AdSense products. Facebook, of course, has extremely rich demographic and interest-based targeting.

So how will Twitter execute targeting? Here are my thoughts:

- Interest targeting. Twitter will expose a dashboard that allows advertisers to target users based on a set of interests. I’d expect, for example, that a movie studio launching a summer action film might want to target Twitter users who have shown interest in celebrities, Hollywood, and, of course, action movies.

How might that interest be known? There are plenty of clear signals: What a user posts, of course. But also what he or she retweets, replies to, clicks on in someone else’s tweet, or who they follow (and who that followed person follows, and, and….).
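Those signals could be folded into a profile with something as simple as a weighted tally. A toy sketch – the signal types and weights here are entirely hypothetical, chosen only to show that a retweet or reply plausibly implies stronger interest than a click:

```python
# Hypothetical signal weights: authoring or retweeting implies stronger
# interest than merely clicking a link in someone else's tweet.
SIGNAL_WEIGHTS = {"post": 3.0, "retweet": 2.0, "reply": 2.0, "follow": 1.5, "click": 1.0}

def interest_profile(events):
    """Aggregate a user's raw activity into a weighted interest vector.

    events: iterable of (signal_type, topic) pairs, e.g. ("retweet", "action movies").
    Unknown signal types get a small default weight rather than being dropped.
    """
    profile = {}
    for signal, topic in events:
        profile[topic] = profile.get(topic, 0.0) + SIGNAL_WEIGHTS.get(signal, 0.5)
    return profile
```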

- Geotargeting. Say that movie is premiering in just ten cities across the country. Clearly, that movie studio will want to target its ads just in those regions. Nearly every major advertiser demands this capability – consumer packaged goods companies like P&G, for example, will want to compare their geotargeted ads to “shelf lift” in a particular region.

Twitter has told me it will have geotargeting capabilities shortly.

- Audience targeting. I’d expect that at some point, Twitter will expose various audience “buckets” to the marketer for targeting based on unique signals that Twitter alone has views into. These might include “active retweeters,” “influencers,” or “tastemakers” – folks who tend to find things first.

- Demographic targeting. This one I’m less certain of – Twitter doesn’t have a clear demographic dataset, the way Facebook does. However, neither does Google, and it figured out a way to include demos in its product line.

- Device/location targeting. Do you want your Promoted Tweets only on the web, or only on Windows? Maybe just iPads, or iOS more broadly? Perhaps just mobile, or only Android? And would you like location with that? You get the picture….

Given all this targeting and scale, the next question is: How will advertisers actually buy from Twitter? I think it’s clear that Twitter will adopt a model based on two familiar features: a cost-per-engagement model (the company already uses engagement as a signal to rank an ad’s efficacy) and a real-time, second-price bidded auction. The company already exposes dashboards to its marketing partners on no less than five metrics, allowing them to manage their marketing presence on Twitter in real time. And its recently announced analytics product only adds on to that suite. Twitter has also said a self-serve platform will be open for business shortly, one that will allow smaller businesses to play on the service.
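For a sense of how a cost-per-engagement, second-price auction might work mechanically – and to be clear, this is a generic sketch of the auction pattern Google popularized, not Twitter’s disclosed design – rank bids by bid times predicted engagement rate, with the winner paying just enough to out-rank the runner-up, and only when a user actually engages:

```python
def run_cpe_auction(bids, min_bid=0.01):
    """Second-price auction on effective bids.

    bids: list of (advertiser, max_cpe_bid, predicted_engagement_rate).
    Effective bid = bid * predicted engagement rate, so a resonant ad can
    win with a lower bid. Returns (winner, price_per_engagement) or None.
    """
    ranked = sorted(bids, key=lambda b: b[1] * b[2], reverse=True)
    if not ranked:
        return None
    winner = ranked[0]
    if len(ranked) == 1:
        return winner[0], min_bid
    # Price: the smallest CPE that would still beat the runner-up's effective bid.
    runner_up_score = ranked[1][1] * ranked[1][2]
    price = min(winner[1], runner_up_score / winner[2] + min_bid)
    return winner[0], round(price, 4)
```

Note how the engagement-rate multiplier enforces the “falls off the page” behavior discussed below: an ad nobody engages with loses auctions no matter how high its bid.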

Next up? APIs that allow third parties to run Promoted Tweets, as well as help marketers manage their Twitter presence. Just as with Facebook and Google, expect a robust “SEO/SEM” ecosystem to develop around these APIs.

The cost per engagement model is worth a few more lines. If an ad does not resonate – is not engaged with in some way by users – it will fall off the page, an approach that has clearly worked well for Google. The company is very pleased with its early tests on engagement, which one source tells me is one to two orders of magnitude above traditional banner ads.


Finally, recall that Twitter also announced, and couched as very good news, that a large percentage of its users are “not logged in,” but rather consume Twitter content just as you or I might read a blog post. Fred writes about this in his post The Logged Out User. In that post, he estimates that nearly three in four folks on Twitter.com are “logged out.” That’s a huge audience. Expect ad products for those folks shortly, including – yes – display ads driven by cookies and/or other modeling parameters.

In short, after staring at this beast for many years, I think Twitter is well on its way to cracking the code for revenue. But let’s not forget the key part of this equation: The product itself. Ad product development is nearly always in lockstep with user product development.

Twitter recently surfaced a new tab for some of its users called “Activity”, and I was lucky enough to get it in my stream. It makes my timeline far better than it was. The “Mentions” tab (which we see as our own handle) is also far richer, showing follows, retweets, and favorites as well as replies and mentions. But there’s much, much more to do. My sense of the company now, however, is that it’s going to deliver on the opportunity we’ve all known it has ahead. It’s mostly addressed its infrastructure issues, Costolo told me, and is now focused on delivering product improvements through rapid iteration, testing, and deployment. I look forward to seeing how it all plays out.

On Location, Brand, and Enterprise

By - September 11, 2011

From time to time I have the honor of contributing to a content series underwritten by one of FM’s marketing partners. It’s been a while since I’ve done it, but I was pleased to be asked by HP to contribute to their Input Output site. I wrote on the impact of location – you know I’ve been on about this topic for nearly two years now. Here’s my piece. From it:

Given the public face of location services as seemingly lightweight consumer applications, it’s easy to dismiss their usefulness to business, in particular large enterprises. Don’t make that mistake. …

Location isn’t just about offering a deal when a customer is near a retail outlet. It’s about understanding the tapestry of data that customers create over time, as they move through space, ask questions of their environment, and engage in any number of ways with your stores, your channel, and your competitors. Thanks to those smartphones in their pockets, your customers are telling you what they want – explicitly and implicitly – and what they expect from you as a brand. Fail to listen (and respond) at your own peril.

More on the Input Output site.

Twitter Makes a Statement

By - September 08, 2011


I could not make Twitter’s press event today, but I did get a chance to sit with CEO Dick Costolo (the Web 2 Summit dinner speaker this year) yesterday afternoon and do a deep dive on today’s news. I’ll write up more on that as soon as I can, but the recap:

100 million active users around the globe turn to Twitter to share their thoughts and find out what’s happening in the world right now. More than half of these people log in to Twitter each day to follow their interests. For many, getting the most out of Twitter isn’t only about tweeting: 40 percent of our active users simply sign in to listen to what’s happening in their world.

This is from their blog post, but there are a lot more stats to share (400 million people visit Twitter.com each month, for example), as well as insights and thoughts from our conversation yesterday. Stay tuned.

Google As Content Company – A Trend Worth Watching

By -


It’s been a while since I’ve said this, but I’ll say it again: Google is a media company, and at some point, most media companies get pretty deep into the Making Original Content business. With the acquisition of Zagat,* Google has clearly indicated it’s going to play in a space it once left to the millions of partners who drove value in its search and advertising business. Google is walking a thin line here – media partners are critical to its success, but if it’s seen as favoring its “owned and operated” content over those who operate on the open or independent web, well, lines may be redrawn in the media business.

Now, it’s easy to argue that this was a small, strategic buy to support Google’s local offering. Then again…Blogger, YouTube, and GoogleTV are not small efforts at Google. And if I were an independent publisher who focused on the travel and entertainment category, I’d be more than a bit concerned about how my content might rank in Google compared to Zagat. Just ask Yelp.

So…what other content-driven categories might Google find the need to get into? Well, ask yourself this question: What other content-driven business categories are important to Google?

Answering that question falls into the category of “things that make you go….huh…”

I’ll have more thinking on this soon, I hope. But I wanted to note the sale as indicative of a larger trend worth watching.

*Zagat has had a commercial relationship with FM, a company I founded, but not a material one on either side as I understand it.

We Need An Identity Re-Aggregator (That We Control)

By - August 29, 2011


The subject of “owning your own domain” has been covered to death in our industry, with excellent posts from Anil Dash and others (Fred) explaining the importance of having your own place on the web. I’ve also weighed in on the importance of “The Independent Web,” where creators have control, as opposed to the Dependent Web, where platforms ultimately control how your words, data, and expression are leveraged.

But not everyone gravitates toward having their own, independent site – at least not initially. Even those who do have sites don’t necessarily see those sites as the best place to express themselves. I was reminded of this reading a Quora thread over the weekend entitled “What’s it like to have your film flop at the box office?” (The subtitle of the thread is hilarious: “Don’t they know how bad it is before it comes out?”)

The question elicited a well written, funny, and informative post by one Sean Hood, a professional “fixer” of scripts who had worked on the recent “Conan the Barbarian” movie – apparently a big-time summer flop.

It’s clear that Hood was inspired to write a wonderful post not because he wanted to muse out loud on his own blog (he does, it turns out, have one), but because of something particularly social in nature about Quora.

The same, I’d wager, can be said of Google+, where a lot of folks, including well-known “traditional bloggers” like Robert Scoble, are content to post at length, regardless of the fact that Google+, unlike blogging software like WordPress, is not a platform that they “control.” Ditto places like the Huffington Post, Facebook, ePinions, Amazon Reviews – you get the picture. The web is full of places where the value is created by authors, but control and monetization accrue, in the main, to the company, not the individual.

Scoble, who is paid by the hosting company Rackspace to be nearly omnipresent, is clearly an edge case. He’s a professional blogger, but he doesn’t really care where his words live, as long as they get a lot of attention. Traditional authors, like, for example, the folks behind Dooce or The Awl, are far less likely to leave their core value – their words – all over the web, and in particular, they don’t see the point of giving that value away for free, when their own sites provide their economic lifeblood (both sites are FM partners, but there are tens of thousands of others as well).

The downsides of not owning your own words, on your own platform, are not limited simply to money. Over time, the words and opinions one leaves all over the web form a web of identity – your identity – and controlling that identity feels, to me, like a human right. But unless you are a sophisticated netizen, you’re never going to spend the time and effort required to gather all your utterances in one place, in a fashion that best reflects who you are in the world.

Every site has different terms of service – rules that guide what rights you have when you post on the site. I haven’t read them all (most of us don’t), but I’d imagine most of them would allow you to take your own words and cut and paste them onto your own site, should you be so inspired. On his personal blog, Sean Hood, the film writer, has linked to many of his past answers on Quora. But he hasn’t “re-posted” them – which I think is a shame. Because while Quora is a great service, should it go dark, Sean’s words will be lost.

Earlier in the year I wrote a piece called “The Rise of Meta Services,” in which I posited that we need a new class of services that help us make sense of the fractured nature of all the sites, apps, and platforms we’re using. I’d wager there’s a great opportunity to create such a service that follows individuals around the web, noting, indexing, and reposting everything he or she writes back to his or her own domain.
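One plausible building block for such a meta-service already exists: most publishing platforms expose an RSS feed, and a re-aggregator could poll those feeds and repost each item back to your own domain. Here is a minimal, standard-library-only sketch of the feed-parsing half; fetching, scheduling, and reposting are left out, and the feed shown in usage is invented:

```python
import xml.etree.ElementTree as ET

def extract_items(rss_xml):
    """Pull (title, link, body) records out of a standard RSS 2.0 feed string.

    A re-aggregator would run this over each service's feed (a blog, Quora,
    Google+, etc.) and archive the results under your own domain.
    """
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "body": item.findtext("description", default=""),
        })
    return items
```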

Or maybe there’s already a WordPress plugin for that?!

More on Twitter's Great Opportunity/Problem

By - August 10, 2011

In the comments on this previous post, I promised I’d respond with another post, as my commenting system is archaic (something I’m fixing soon). The comments were varied and interesting, and fell into a few buckets. I also have a few more of my own thoughts to toss out there, given what I’ve heard from you all, as well as some thinking I’ve done in the past day or so.

First, a few of my own thoughts. I wrote the post quickly, but have been thinking about the signal to noise problem, and how solving it addresses Twitter’s advertising scale issues, for a long, long time. More than a year, in fact. I’m not sure why I finally got around to writing that piece on Friday, but I’m glad I did.

What I didn’t get into is some detail about how massive the solving of this problem really is. Twitter is more than the sum of its 200 million tweets; it’s also a massive consumer of the web itself. Many of those tweets have within them URLs pointing to the “rest of the web” (an old figure put the percentage at 25; I’d wager it’s higher now). Even if it were just 25%, that’s 50 million URLs a day to process, and growing. It’s a very important signal, but it means that Twitter is, in essence, also a web search engine, a directory, and a massive discovery engine. It’s not trivial to unpack, dedupe, analyze, contextualize, crawl, and digest 50 million URLs a day. But if Twitter is going to really exploit its potential, that’s exactly what it has to do.
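To give a flavor of just one small step in that pipeline – deduping – here is a sketch of URL canonicalization, which collapses the many cosmetic variants of the same link shared across tweets (tracking parameters, www prefixes, trailing slashes). The normalization rules and parameter list are illustrative, not a statement of what Twitter actually does:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Analytics parameters that make otherwise-identical URLs look distinct.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content"}

def canonicalize(url):
    """Reduce a URL to a canonical form so duplicates collapse to one key."""
    scheme, netloc, path, query, _fragment = urlsplit(url.strip())
    netloc = netloc.lower()
    if netloc.startswith("www."):
        netloc = netloc[4:]
    path = path.rstrip("/") or "/"
    # Drop tracking params; sort the rest so parameter order doesn't matter.
    kept = sorted((k, v) for k, v in parse_qsl(query) if k not in TRACKING_PARAMS)
    return urlunsplit((scheme.lower(), netloc, path, urlencode(kept), ""))

def dedupe(urls):
    """Yield one representative URL per canonical form, in first-seen order."""
    seen = set()
    for url in urls:
        key = canonicalize(url)
        if key not in seen:
            seen.add(key)
            yield url
```

Even this toy version hints at the real problem: before any of it runs, each shortened t.co link has to be expanded, fetched, and classified – times 50 million a day.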

The same is true of Twitter’s semantic challenge/opportunity. As I said in my last post, tweets express meaning. It’s not enough to “crawl” tweets for keywords and associate them with other related tweets. The point is to associate them based on meaning, intent, semantics, and – this is important – narrative continuity over time. No one that I know of does this at scale, yet. Twitter can and should.

Which gets me to all of your comments. I heard in the written comments, on Twitter, and in extensive emails offline from developers who are working on parts of the problems/opportunities I outlined in my initial post. And it’s true, there’s really quite a robust ecosystem out there. Trendspottr, OneRiot, Roundtable, Percolate, Evri, InfiniGraph, The Shared Web, Seesmic, Scoopit, Kosmix, Summify, and many others were mentioned to me. I am sure there are many more. But while I am certain Twitter not only benefits from its ecosystem of developers but actually *needs* them, I am not so sure any of them can or should solve this core issue for the company.

Several commentators noted, as did Suamil, “Twitter’s firehose is licensed out to at least publicly disclosed 10 companies (my former employer Kosmix being one of them and Google/Bing being the others) and presumably now more people have their hands on it. Of course, those cos don’t see user passwords but have access to just about every other piece of data and can build, from a systems standpoint, just about everything Twitter can/could. No?”

Well, in fact, I don’t know about that. For one, I’m pretty sure Twitter isn’t going to export the growing database around how its advertising system interacts with the rest of Twitter, right? On “everything else,” I’d like to know for certain, but it strikes me that there’s got to be more data that Twitter holds back from the firehose. Data about the data, for example. I’m not sure, and I’d love a clear answer. Anyone have one? I suppose at this point I could ask the company….I’ll let you know if I find out anything. Let me know the same. And thanks for reading.

The Future of The Internet (And How to Stop It) – A Dialog with Jonathan Zittrain Updating His 2008 Book

By - August 06, 2011


(image: Charlie Rose) As I prepare to write my next book (#WWHW), I’ve been reading a lot. You’ve seen my reviews of The Information, In the Plex, and The Next 100 Years. I’ve been reading more than that, but only those have made it into posts so far.

I’m almost done with Sherry Turkle’s Alone Together, with which I have an itch to quibble, not to mention some fiction that I think informs the work I’m doing. I expect the pace of my reading to pick up considerably through the Fall, so expect more posts like this one.

Last week I finished The Future of The Internet (And How to Stop It), by Harvard scholar Jonathan Zittrain. Though written in 2008, this is an ever-more important book: it makes a central argument about what we’ve built so far, and where we might be going if we ignore the lessons we’ve learned as we’ve all enjoyed this E-ticket ride we call the Internet industry.

The book’s core argument has to do with a concept Zittrain calls “generativity” – the ability of a product or service to generate innovation, new ideas, and new services independent of centralized, authoritative control. It is, of course, very difficult to create generative technologies on a grand scale – it’s a statement of faith and shared values to do such a thing, and it really rubs governments and powerful interests the wrong way over time. Jonathan goes on to point out that truly open, generative systems are inherently subject to a tragedy of the commons – practices such as malware, bad marketing tactics, hacking, and so on. These threats are only growing, and they provide a ready rationale to shut down generativity in the name of safety and order.

The Internet, as it turned out for the first ten or fifteen years, is one of the greatest generative technologies we’ve ever produced. And yes, I mean ever – as in, since we all figured out fire, or the wheel, or … well, forgive me for getting all Wired Manifesto on you, but it’s a very big deal.

But like Lessig before him, Zittrain is very worried that the essence of what has made the Internet special is changing, in particular, as the mainstream public falls deeper in love with services like Facebook and Apple’s iPhone.

His book is a meditation and a lecture, of sorts, on the history, meaning, and implications of this idea. After I read it, I was inspired to email Jonathan. I sent him this note:

“Hi Jonathan -

Wondering if, to start off an interview process (for my book), you might want to do a back and forth email interview that I’d publish on my site. It’d be mostly related to your book and some questions about how you view things have progressed since it came out. That would be both a good way for me to “review” the book on my site as well as to delve into some of the issues it raises in a fresh light. You game?”

To which he responded:

“Sure!”

And my questions, and his response, in lightly edited form, are below. I think you’ll enjoy his thoughts updating his thesis over the past three years. Really good stuff. I have bolded what I, as a magazine editor, might turn into a “pullquote” were I laying this out on a printed page.

JBAT:

- You wrote The Future of the Internet three years ago. It warned of a lack of awareness with regard to what we’re building, and the consequences of that lack of attention. It also warned of data silos and early lockdown. Three years later, how are we doing? Are things better, worse, the same?

And a follow up. On a scale of one to ten, where one is “actively helping” and ten is “pretty much evil,” how do the following companies rate in terms of the debate you frame in the book?

- Google (you can break this down into Android, Search, Apps, etc)

- Facebook (which was really not at full scale when you published)

- Apple

- Twitter

- Microsoft (again break it down if you wish)

Thanks!

JONATHAN ZITTRAIN:

Sorry this took me so long! I got a little carried away in answering –

- You wrote The Future of the Internet three years ago. It warned of a lack of awareness with regard to what we’re building, and the consequences of that lack of attention. It also warned of data silos and early lockdown. Three years later, how are we doing? Are things better, worse, the same?

It’s the best of times and the worst of times: the digital world offers us more every day, while we continue to set ourselves up for levels of surveillance and control that will be hard to escape as they gel.

That’s because the plus is also the minus: more and more of our activities are mediated by gatekeepers who make life easier, but who also can watch what we do and set boundaries on it — either for their own purposes, or under pressure from government authorities.

On the book’s specific predictions, Apple’s ethos remains a terrific bellwether. The iPhone — released in ’07 — has proved not only a runaway success, but the principles of its iOS have infused themselves across the spectrum. There’s less reason than ever to need a traditional PC, and by that I mean one that lets you run whatever code you want. OS X Lion points the way to a much more controlled PC zone anyway, as it increasingly funnels software through a single company’s app store rather than letting it come from anywhere. I’d be surprised if Microsoft weren’t thinking along similar lines for Windows.

Google has offered a counterpoint, since the Android platform, while including an app store, allows outside code to be run. In part that’s because Google’s play is through the cloud. Google seeks to make our key apps based somewhere within the google.com archipelago, and to offer infrastructure that outside apps can’t resist, such as easy APIs for geographic mapping or user location. It’s important to realize that a cloud-based setup like Google Docs or hosted APIs, or Facebook’s platform, offers control similar to that of a managed device like an iPhone or a Kindle. All represent the movement of technology from product to service. Providers of a product have little to say about it after it changes hands. Providers of services are different: they don’t go away, and a choice of one over another can have lingering implications for months and even years.

At the time of the book’s drafting, the alternatives seemed stark: the “sterile” iPhone that ran only Apple’s software on the one hand, and the chaotic PC that ran anything ending in .exe on the other. The iPhone’s openness to outside code beginning in ’08 changed all that. It became what I call “contingently generative” — it runs outside code after approval (and then until it doesn’t). The upside is that the vast creativity of outside coders has led to a software renaissance on mobile devices, including iPhones, from the sublime to the ridiculous. And Apple’s gatekeeping has seemed a light touch; apps not allowed in the store pale in comparison to the torrents of stuff let through. But that masks entire categories of applications that aren’t allowed — namely anything disruptive to Apple’s business model or that of its partners or regulators. No p2p, no alternate email clients, and only browsers with limited functionality.

More important, the ability to limit code is what makes for the ability to control content. More and more we see content, whether a book, or a magazine subscription, represented in and through an app. It’s sheer genius for a platform maker to demand a cut of in-app purchases. Can you imagine if, back in the day, the only browser allowed on Windows was IE, and further, all commerce conducted through that browser — say, buying a book through Amazon — constituted an “in-app purchase” for which Microsoft was due 30%?

A natural question is why competition isn’t the answer here — or at least reason to not worry about the question. If people thought the iPhone made for a bad deal, why would they want one? The reason they want one is the same thing that made the Mac so appealing when it first came on the scene: it was elegant and intuitive and it just worked. No blue screen of death. Consistency across apps. And, as viruses and worms naturally were designed for the most common platform, Windows, those 5% with Macs weren’t worth the trouble of corrupting.

We’ve seen a new generation of Mac malware as its numbers grow, and in the meantime a first defense is that of curation: the app store provides a rough filter for bad code, and accountability against its makers if something goes wrong even after it’s been approved. So that’s why the market likes these architectures. I’ll bet few Android users actually go “off-roading” with apps not obtained through the official Android app channels. But the fact that they can provides a key safety valve: if Google were to try the same deal as Apple with content providers for in-app content, the content providers could always offer their wares directly to Android users. I’m worried that a piece of malware could emerge on Android that would cause the safety valve of outside code to be changed, either formally by Google, or in practice as people become unwilling to drive outside the lanes.

So how about competition between platforms? Doesn’t that keep each competitor honest, even if all the platforms are curated? I suppose: the way that Prodigy and CompuServe and AOL competed with one another to offer different services as each chased subscribers. (Remember the day when AOL members couldn’t email CompuServe users and vice versa?) That was competition of a sort, but the Internet and the Web put them all to shame — even as the Internet arose from no business plan at all.

Here’s another way to think about it. Suppose you were going to buy a new house. There are lots of choices. It’s just that each house is “curated” by its seller. Once you move in, that seller will get to say what furnishings can go in, and collects 30% of the purchase price of whatever you buy for the house. That seller has every reason to want to have a reputation for being generous about what goes in — but it still doesn’t feel very free when, two years after you’re living in the house, a particular coffee table or paint color is denied. There is competition in this situation — just not the full freedom that we rightly associate with inhabiting our dwellings. A small percentage of people might elect to join gated communities with strict rules about what can go inside and outside each house — but most people don’t want to have to consult their condo association by-laws before making choices that affect only themselves.

[I guess the Qs below (about each company) are answered above!]

—-####—-

I guess now my question is, what kind of place are we going to build next?

Thanks for your thoughts, Jonathan! What do you all think?

Twitter and the Ultimate Algorithm: Signal Over Noise (With Major Business Model Implications)

By - August 05, 2011

Note: I wrote this post without contacting anyone at Twitter. I do know a lot of folks there, and as regular readers know, have a lot of respect for them and the company. But I wanted to write this as a “Thinking Out Loud” post, rather than a reported article. There’s a big difference – in this piece, I am positing an idea. It’s entirely possible my lack of reporting will make me look like an uninformed boob. In the reported piece I’d posit the idea privately, get a response, and then report what I was told. Given I’m supposedly on a break this week, and I’ve wanted to get this idea out there for some time, I figured I’d just do so. I honestly have no idea if Twitter is actually working on the ideas I posit below. If you have more knowledge than me, please post in the comments, or ping me privately. Thanks!

—-

I find Twitter to be one of the most interesting companies in our industry, and not simply because of its meteoric growth, celebrity usage, founder drama, or mind-blowing financings. To me what makes Twitter fascinating is the data the company sits atop, and the dramatic tension of whether the company can figure out how to leverage that data in a way that will ensure it a place in the pantheon of long-term winners – companies like Microsoft, Google, and Facebook. I don’t have enough knowledge to make that call, but I can say this: Twitter certainly has a good shot at it.

My goal in this post is to outline what I see as the biggest challenge/opportunity in the company’s path. And to my mind, it comes down to this: Can Twitter solve its signal to noise problem?

Many observers have commented on how noisy Twitter is: That once you follow more than fifty or so folks, your feed becomes unmanageable. If you follow hundreds, like I do, it’s simply impossible to extract value from your stream in any structured or consistent fashion (see image from my stream at left). Twitter’s answers to this issue have been anemic. One product manager even insisted that your Twitter feed should be viewed as a stream you dip into from time to time, using it as a thirsty person might use a nearby water source. I disagree entirely. I have chosen nearly 1,000 folks who I feel are interesting enough to follow. On average, my feed gets a few hundred new tweets every ten minutes. No way can I make sense of that unassisted. But I know there’s great stuff in there, if only the service could surface it in a way that made sense to me.

You know – in a way that feels magic, the way Google was the first time I used it.

I want Twitter to figure out how to present that stream in a way that adds value to my life. It’s about the visual display of information, sure, but it’s more than that. It requires some Really F*ing Hard Math, crossed with some Really Really Hard Semantic Search, mixed with more Super Ridiculous Difficult Math. Because we’re talking about some super big numbers here: 200 million tweets a day across hundreds of millions of accounts. And that’s growing bigger by the hour.
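To make the shape of that ranking problem concrete, here is a toy sketch of the kind of relevance scoring a feed filter might apply: weight each tweet’s engagement, then decay it by age. Every field name, weight, and the six-hour half-life are invented for illustration; nothing here reflects Twitter’s actual signals or internals.

```python
import time

def score(tweet, now=None):
    """Toy relevance score: engagement, discounted by recency decay.

    `tweet` is a dict with hypothetical fields; the weights and the
    six-hour half-life are arbitrary illustrative choices.
    """
    now = time.time() if now is None else now
    age_hours = (now - tweet["created_at"]) / 3600.0
    engagement = (tweet["retweets"] * 2.0
                  + tweet["replies"] * 1.5
                  + tweet["favorites"] * 1.0)
    decay = 0.5 ** (age_hours / 6.0)  # weight halves every six hours
    return (1.0 + engagement) * decay

def rank_feed(tweets):
    """Return the feed sorted most-relevant-first."""
    return sorted(tweets, key=score, reverse=True)
```

A fresh, heavily retweeted item floats to the top; a day-old item with no engagement sinks, however recently you refreshed. The real problem, of course, is doing something like this across 200 million tweets a day with far richer signals.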

A mini industry has evolved to address this issue – I use News.me, Paper.li, TweetDeck (recently purchased by Twitter), Percolate and others, but the truth is, they are not fully integrated, systemic solutions to the problem. Only Twitter has access to all of Twitter. Only Twitter can see the patterns of usage and interest and turn meaningful insights and connections into algorithms which feed the entire service. In short, it’s Twitter that has to address this problem. Because, of course, this is not just Twitter’s great problem, it is also Twitter’s great opportunity.

Why? Because if Twitter can provide me a tool that makes my feed really valuable, imagine what it can do for advertisers. As with every major player that has scaled to the land of long-term platform winners (as I said, Google, Microsoft, Facebook), product comes first, and business model follows naturally (with Microsoft, the model was software sales of its OS and apps, not advertising).

If Twitter can assign a rank, a bit of context, a “place in the world” for every Tweet as it relates to every other Tweet and to every account on Twitter, well, it can do the same job for every possible advertiser on the planet, as they relate to those Tweets, those accounts, and whatever messaging the advertiser might have to offer. In short, if Twitter can solve its signal to noise problem, it will also solve its revenue scale problem. It will have built the foundation for a real time “TweetWords” – an auction driven marketplace where advertisers can bid across those hundreds of millions of tweets for the right to position relevant messaging in real time. If this sounds familiar, it should – this is essentially what Google did when it first cracked truly relevant search, and then tied it to AdWords.
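For a sense of the mechanics behind an AdWords-style marketplace, here is a minimal sketch of a generalized second-price auction, the broad design Google popularized: advertisers are ranked by bid times a quality (relevance) score, and the winner pays only just enough to beat the runner-up. “TweetWords” is hypothetical, and all names and numbers below are illustrative.

```python
def run_auction(bids, quality):
    """Generalized second-price auction (a sketch, not any real API).

    bids: {advertiser: max bid in dollars}
    quality: {advertiser: relevance score for this tweet or query}
    Returns (winner, price): the winner pays just enough that its
    ad rank (bid * quality) still edges out the runner-up's.
    """
    ranked = sorted(bids, key=lambda a: bids[a] * quality[a], reverse=True)
    if not ranked:
        return None, 0.0
    winner = ranked[0]
    if len(ranked) == 1:
        return winner, 0.01  # nominal reserve price when unopposed
    runner_up = ranked[1]
    price = bids[runner_up] * quality[runner_up] / quality[winner] + 0.01
    return winner, round(price, 2)
```

The quality score is the hinge: it is exactly the per-tweet relevance signal discussed above, which is why solving signal-to-noise and solving monetization are arguably the same problem.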

Now, I do know that Twitter sees this issue as core to its future, and that it’s madly working on solving it. What I don’t know is how the company is attacking the problem, whether it has the right people to succeed, and, honestly, whether the problem is even soluble regardless of all those variables. After all, Google solved the problem, in part, by using the web’s database of words as commodity fodder, and its graph of links as a guide to value. Tweets are more than words: they carry sentiments and semantics, and they have a far shorter shelf life (and far less structure) than an HTML document.
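The web-graph insight referred to here is PageRank: treat each link as a vote and iterate until authority scores settle. A hypothetical analogue for Twitter would treat each follow as a vote for the followed account. The sketch below is just the textbook iteration, not anything Twitter is known to run, and it sidesteps exactly the hard parts the paragraph above names (freshness, sentiment, semantics).

```python
def follow_rank(graph, damping=0.85, iters=50):
    """PageRank-style authority over a follow graph (illustrative only).

    graph: {account: [accounts it follows]}; a follow acts like a web
    link pointing at the followed account.
    """
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1.0 - damping) / n for node in nodes}
        for follower in nodes:
            followed = graph.get(follower, [])
            if followed:
                share = damping * rank[follower] / len(followed)
                for target in followed:
                    new[target] += share
            else:
                # Accounts following no one spread their weight evenly.
                for target in nodes:
                    new[target] += damping * rank[follower] / n
        rank = new
    return rank
```

On a static link graph this converges nicely; on a stream where the valuable signal is minutes old, a fifty-iteration batch computation is already stale, which is one way to see why “reinventing search in real time” is so hard.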

In short, it’s a really, really, really hard problem. But it’s a terribly exciting one. If Twitter is going to succeed at scale, it has to totally reinvent search, in real time, with algorithms that understand (or at least replicate patterns of) human meaning. It then has to take that work and productize it in real time to its hundreds of millions of users (because while the core problem/opportunity behind Twitter is search, the product is not a search product per se. It’s a media product.)

To my mind, that’s just a very cool problem on which to work. But I sense that Twitter has the solution to the problem within its grasp. One way to help solve it is to throw open the doors to its data, and let the developer community help (a recent move seems to point in that direction). That might prove too dangerous (it’s not like Google is letting anyone know how it ranks pages). But it could help in certain ways.

Earlier in the week I was on the phone with someone who works very closely in this field (search, large scale ad monetization, media), and he said this of Twitter: “There’s definitely a $100 billion company in there.”

The question is, can it be built?

What do you think? Am I off the reservation here? And who do you know who’s working on this?