Joints After Midnight & Rants Archives - John Battelle's Search Blog

No, In Fact, We Haven't Seen This Movie Before

By - January 13, 2011


Thanks to monster private financings from Groupon and Facebook, as well as the promise of major IPOs from Demand, LinkedIn, Zynga and others, the predictable “watch out, here we go again” buzz is rising up in the press. This article from Ad Age, subtitled “With Billion-Dollar Dot-com Valuations Back in a Big Way, It’s Time for Alarm Bells to Start Ringing,” is typical of the bunch. With a “we’ve seen this movie before” tone, it points out that most of the successful companies of today had models that were tried ten years ago, and in the main they failed.

But I’d like to point out a couple of pretty obvious differences between the dot com busts of a decade ago and the companies that are now earning billion dollar valuations. To wit:

- Each of the companies earning these valuations has revenues in the hundreds of millions or more, and operating profits in the tens of millions, if not more. Most also have operating histories of many years, and/or executives and boards who have extensive histories operating in the Internet economy.

- The markets overall have changed dramatically, on many different fronts. First of all, nearly every consumer in the developed world is comfortable spending money using the web. Second, the web is firmly a mobile medium, enabling business models that were mere dreams a decade ago. And third, the markets have been mostly closed to public investment in the “Internet thesis” for most of the past ten years, so there is a very strong pent up demand to invest in what many see as the future of how business will be done.

Combine these factors and you have what I view as a pretty solid environment: a strong demand for quality companies, and quality companies to fulfill that demand. Is $50 billion too high for Facebook, or $5 billion too high for Groupon? Well, we’ll see. As the initial surge of IPO demand abates, newly public companies will prove their value in the long term by delivering growth. At least they have strong platforms of revenues and profits, as well as extraordinary market positions, from which to start. Remember, Google went public in 2004 at under $100, and nearly everyone thought the company was overvalued.

Back in the dot com era, most retail Internet investors were buying on the come, on promises that the hand waving and affirmations of Web 1.0 entrepreneurs would magically come true. Almost none of the companies that went public back then could boast the metrics today’s private winners do. Truth be told, the promises of the Internet hand wavers are coming true, but for investors in the 1990s, it’s a decade too late.

We’re in an entirely different place, as an industry, than we were ten years ago. I very much doubt we’ll see the same mistakes made again. If money losing companies with nothing but an idea and some VC backers manage to go public, I’ll be the first to write a post about our collective amnesia.

And this is not to say that marginal companies won’t attempt to go public in coming years, or that there won’t be flameouts and losers over time. There always are. But compared to the late 1990s, the companies lining up to offer themselves to the public look healthy, well positioned, and very, very real.


What Everyone Seems to Miss In Facebook's Private or Public Debate…

By - January 04, 2011


…is the core reason it makes sense for Facebook to be public: Accountability to its customers. The rest of this debate is simply financial folks arguing amongst themselves.

Facebook is the greatest repository of data about people’s intentions, relationships, and utterances that ever has been created. Period. And a company that owns that much private data should be accountable to the public. The public should be able to review its practices, its financials, and question its intentions in a manner backed by our collective and legally codified will. That’s the point of a public company – accountability, transparency, and thorough reporting.

If Facebook wants to stay private and not be part of the social mores which we’ve built that govern major corporations (flawed though they may be), well I think that would be a major strategic blunder, one that would ultimately doom the company. It’s fine for all sorts of companies to stay private, for all sorts of reasons. But a company like Facebook, with its unprecedented grasp of our social data, should be accountable to the public. If it isn’t, we’ll migrate our “social graphs” to a company that is.

If Zuckerberg doesn’t want to be a public CEO and deal with the realities of that, as this Reuters piece argues, well, he should find a CEO who is willing to do the job.

(If you’re not up on the debate, here’s a primer.)

Predictions 2011

By - January 03, 2011


In this, the eighth version of my annual predictions, I’ll try to stay focused and clear, the better to score myself a year from now. And while I used the past two weeks of relatively fallow holiday time as a sort of marination period, the truth is I pretty much just sat down and banged these predictions out in one go, just as I have the past seven years. It works for me, and I hope you agree, or at least find them worth your time. So here we go:

1. We’ll see the rise of a meme which I’ll call “The Web Reborn” – a response to the idea that mobile and apps have killed the web as we know it. In fact, we’ll come to realize that the web is the foundation of nearly everything we do, and we’ll start to expect, as consumers, that all our service providers honor and build in basic principles of “web friendliness” – data portability and user-controlled identity most important among them. Call it a return to the original principles of “Web 2.0”.

2. Voice will become a critical interface for computing (especially mobile apps). That’s not the case today, but in a year’s time, there will be a handful of very popular apps that are driven by voice, and in particular, by weaving together voice, text, and identity.

3. DSPs (Demand Side Platforms) will fade into the fabric of larger marketing platforms. In the end, DSPs are the handle by which we understand the concept of technology-driven ad networks. And those have been with us for over a decade. Exchanges, DSPs, SSPs, etc. are all important, but in the end, what matters is that advertisers have scale and efficiency, and consumers have control.

4. Related, MediaBank will emerge as a major independent player in the marketing world, playing off its cross channel reach (outside of digital) and providing an alternative to the conflicted digital platforms at Facebook, Microsoft, Google, and Yahoo. I could imagine a major tech or telco player trying to buy MediaBank as the world realizes that marketing is, in essence, a massive IT business (among many other things).

5. The Mac App Store will be a big hit, at least among Mac users, and may well propel Mac sales beyond expectations.

6. Related, Apple will attempt to get better at social networking, fail, and cut a deal with Facebook.

7. Also related, Apple will begin to show signs of the same problems that plagued Microsoft in the mid 90s, and Google in the past few years: Getting too big, too full of themselves, and too focused on their own prior success.

8. Microsoft will have a major change in leadership. I am not predicting Ballmer will leave, but I think he and the company will most likely bring in very senior new talent to open new markets or shift direction in important current markets like media/marketing/social.

9. The public markets will be surprisingly open to major new Internet deals, despite the current rise of “private IPOs” and the growing belief that the IPO process is broken. In the end, there are just too many good reasons for public companies to be, well, public. (See Gurley).

10. The tablet market will have a year of incoherence. Apple will dominate with the iPad due to a lack of an alternative touchstone. Google will focus on providing a clear, consistent experience through Android for tablets and mobile, but it will take a third party to unify the experience. I don’t see that happening this year.

11. “Social deals” will morph to become a standard marketing outlet for all business, and by year’s end be seen as a standard part of any marketer’s media mix. Groupon will lead here, but nearly every major player will have an offering, often by partnering with leaders. I’m tempted to say Facebook will abandon its own Deals offering for a deal with Groupon, but I’m not sure that will unfold in one year.

12. Related, Groupon will fend off an acquisition by a major carrier, probably AT&T or Verizon. It’s possible they’ll sell, but I doubt it.

13. Facebook will decline as a force in the Internet world, as measured by buzz. The company will continue to be seen as Big Brother in the press, and struggle with internal issues related to growth. Also, it will lose some attention/share to upstarts. However, its share of marketing dollars and reach will increase.

14. Related, we’ll see major privacy related legislation in the US brought to the floor of Congress, and then fail for lack of consensus. But that will drive a significant shift in how our culture understands its relationship to the world our industry is building, and that’s a good thing.

I’d love to keep going, but I think those are the major ones, at least from my vantage point. Thanks for reading, it was a great year. I’m not going to make predictions about my own work this year, as I’ve got too much inside knowledge on that topic! Let me know your thoughts in comments, and have a great 2011!


Predictions 2010

2010 How I Did

2009 Predictions

2009 How I Did

2008 Predictions

2008 How I Did

2007 Predictions

Thinking Out Loud: What's Driving Groupon?

By - December 17, 2010


In the current issue of the New Yorker, columnist James Surowiecki, who I generally admire, gets it exactly wrong when it comes to Groupon.

He writes:

“But it seems unlikely that it’s going to become a revolutionary company, along the lines of YouTube, Facebook, Twitter, and Google. … Groupon, by contrast, is a much more old-school business. It doesn’t have any obvious technological advantage. Its users don’t really do anything other than hit the ‘buy’ button. And its business requires lots of hands-on attention…”

Well, that’s a defensible opinion, but after visiting CEO Andrew Mason this week in Chicago, and thinking about it a bit, I must say that I wholeheartedly disagree.

Many folks think of Groupon as a relatively simple idea. A daily deal, a large sales force, and that’s about it. Too easy to copy (there are scores of “Groupon clones”), and too labor intensive (the more small businesses you want to work with, the more sales and service people you need).

All this is true. But it fails to understand the power of Groupon’s model. To sum it up: Groupon has built a new channel into the heart of the world’s economic activity: Small businesses. And it is that channel where the true power lies.

First, the economic math: Small businesses create more than 50% of US GDP and create more than 75% of net new jobs each year. But small businesses represent a fragmented, maddeningly difficult sector of our economy – 23 million small pieces loosely joined. Any platform that has connected them and added value to their bottom line has turned into a massive new business.


Over the past century, there have been two such new platforms. The most recent is Google, a proxy for the rise of the web as a platform for small business lead generation. Before that, it was the Yellow Pages, a proxy for the rise of the telephone as a platform for lead generation.

Groupon, I believe, has the potential to be a new proxy – one that subsumes the platforms of both the Internet and the telephone, and adds multiple dimensions beyond them.

I know that’s a stretch, but hear me out.

First, let’s review the Yellow Pages. What is it? Well, for the most part, it’s a paper-based publishing platform that combines a curated local business phone directory with advertising listings. Nearly every single small business with a phone number is listed in the Yellow Pages, and a large percentage of them also buy advertising to promote their wares.

In short, the Yellow Pages is a platform that connects every single consumer with a phone to every single local business with a phone.

As a business, the Yellow Pages consists of folks who manage the listings and produce the books, as well as a very large sales force which calls on local businesses. Once a year, the product turns over, and a new book is made.

That’s it. Simple (and certainly not technologically defensible), and while it’s clearly in decline, the Yellow Pages is currently a $15 billion revenue business in the US alone. Now, the Yellow Pages is also an online business, but they were late to the party, and have pretty much lost to Google when it comes to the platform play.

Google represents a second new platform which connects consumers and small businesses. Many forget that it was small business that drove early adoption of AdWords (as well as Overture, its early competition). And while not every small business is yet online – 36% of US small businesses still don’t have a web site – the clear majority of them do, and millions of them use AdWords, as well as organic search, to drive leads to their business. Google makes billions of dollars leveraging its platform, which, by the way, has subsumed the Yellow Pages business and grown well past it into any number of other markets, including most major international regions.

Google alone is on a $30 billion revenue run rate, and it’s only ten years old. That’s twice the US revenues of the Yellow Pages, which were built up over more than 50 years.

So to review, the Yellow Pages leveraged the telephone to create a massively scaled and profitable platform connecting consumers and businesses. Google did the same, but leveraged the Internet (and subsumed the telephone as well).

And Groupon is doing it again, subsuming the telephone, the Internet, and leveraging an entirely new platform: the mobile web.

Now, before you yell at me and claim that Groupon is anything BUT a mobile-driven company (the company sends email to 40mm US subscribers, for example), recall my definition of mobile is a bit more complicated than most.

Remember MOLRS? As I said in that post: “if you are going to think about mobile, you have to think about social, local, and real time.” In short, mobile is meaningless without context: Where someone is (or is about to go), who someone is with (or about to go meet), and why someone is where they are (or with who they are with). And, of course, when someone is where they are (and with whomever they are there with…).

Whew. Sorry, but you get the picture.

Now, let’s think about MOLRS in relation to small business. First, small business owners (SBOs) care deeply about location. Are they in a good location? Will customers be able to find them? Is there parking? A good neighborhood? Strong foot traffic?

Second, SBOs care deeply about relationships and word of mouth (or what we will call social). Do people refer their friends and family to the business? Are people happy with the service? Will they say nice things?

Third, SBOs care very much about timing (what I call “real time” in my MOLRS breakdown). What are the best hours for foot traffic? What are the best times to run promotions? How can I bring in more business during slow times? How does seasonality affect my business? When should I have a sale?

In short, SBOs are driven by local, social, and real time.

Turns out, so is Groupon.

Now, ask any small business owner what they wish for more of, and they’ll give you a resounding answer: More customers. It’s why they pay for the Yellow Pages ad, and it’s why they buy AdWords from Google.

And it’s why they are starting to buy Groupon’s product, at a breakneck pace. Sure, some of them buy too much of it, or fail to do the math and lose money on the come. They’ll adjust, and if they don’t, smarter SBOs will eat their lunch, and the world will move on.

To my mind, the proof is in Groupon’s growth rate. I’ve never seen anything like it – well, since Google. And just as Google lapped the Yellow Pages in a fraction of the time, Groupon seems to be on track to do the same to Google.

Good sources have told me that Groupon is growing at 50 percent a month, with a revenue run rate of nearly $2 billion a year (based on last month’s revenues). By next month, that run rate may well hit $2.7 billion. The month after that, should the growth continue, the run rate would clear $4 billion.
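That compounding is easy to verify. Here’s a quick sketch using the article’s approximate figures (assuming “nearly $2 billion” means a run rate of about $1.8B, i.e. roughly $150M in last-month revenue – my assumption, not a reported number):

```python
# Rough sanity check of the run-rate math above.
# Assumption: "nearly $2 billion" run rate ~= $150M/month x 12 = $1.8B.
monthly_revenue = 150_000_000  # assumed last month's revenue, USD
growth = 1.5                   # 50% month-over-month growth

run_rates = []
for month in range(3):
    run_rates.append(monthly_revenue * 12)  # annualize this month's revenue
    monthly_revenue *= growth               # grow into next month

# Run rates in billions for this month, next month, and the month after
print([round(r / 1e9, 2) for r in run_rates])  # [1.8, 2.7, 4.05]
```

Which matches the article’s trajectory: roughly $2.7 billion next month, and clearing $4 billion the month after.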

Google’s run rate, when revealed in its IPO filing six years ago, was staggering – it grew from under $200 million to $1.6 billion in less than three years. Groupon is on track to do the same – but in less than one year.

That’s pretty extraordinary. But remember, Groupon has figured out a way to deliver what SBOs want most: more customers in their stores. And unlike Google or the Yellow Pages, Groupon doesn’t sell advertising. Instead, it takes 50% of the actual revenue driven by its platform. Trust me, that’s potentially a much bigger number.

Actually, it’s pretty interesting to see how the business model of driving leads to business has shifted as each platform has risen to dominance. The Yellow Pages charged a set price for a display ad, with no guarantee that the ad would drive any leads. Google turned the model upside down, and charged only when people clicked on the ad. Groupon doesn’t charge anything at all: It simply takes half the revenue generated when a deal is fulfilled by its platform.
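The difference between those three pricing structures is easier to see with numbers. Here’s a toy comparison (every figure below is invented for illustration; only the three pricing models come from the text):

```python
# Hypothetical: one local promotion brings 200 customers spending $50 each.
# What does each platform earn from it? All numbers are made up; only the
# pricing structures (flat fee, pay-per-click, revenue share) are real.
customers, avg_spend = 200, 50.0
merchant_revenue = customers * avg_spend  # $10,000 driven by the promotion

display_ad = 500.0                       # Yellow Pages: flat fee, leads or not
cpc = 2.00 * 400                         # Google-style: e.g. 400 clicks at $2
revenue_share = 0.5 * merchant_revenue   # Groupon-style: half the revenue

print(display_ad, cpc, revenue_share)  # 500.0 800.0 5000.0
```

Under these (invented) assumptions, the revenue-share model captures several times what a flat ad or per-click pricing would – which is the point of “that’s potentially a much bigger number.”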

So to summarize, I think those who claim Groupon’s business is too simple are focused on the wrong things. Sure, there are other deal sites. But none have Groupon’s scale. Sure, Groupon’s model of one deal in one city on one day is limited, but it’s easy to see how the product scales against category, zip code, time of day, and many other variables. And sure, Groupon has a lot of people who have to touch a lot of businesses and a lot of customers every day. But to me, that’s the company’s strength: SBOs are in the people business, and therefore, so must Groupon be.

And this, to my mind, is why Facebook or Google can’t compete with Groupon. Imagine Facebook or Google with 1,000 people who do nothing but talk to customers all day long? Yep, I can hear the laughter from here….

While I was visiting earlier this week, CEO Mason told me that a significant percentage of Groupon’s customer service reps are members of Chicago’s vibrant improv scene. That makes sense to me – if you are going to deal with possibly upset people all day, it helps to have a culture of humor and thinking on your feet.

That culture will serve Groupon well as it attempts to deal with world record-breaking growth. While there is no certainty the company won’t blow its lead, it’s already a major international player. And while Mason would not comment on the rampant speculation over a $6 billion offer from Google that reportedly fell apart last week, in the end, it may be that the idea of Groupon being purchased by Google is as silly as the idea that the Regional Bell Operating Companies, who originally had the monopoly on the Yellow Pages market, could or should have bought Google.

In the end, it wouldn’t have been a fit.

We've (Still) Lost the Backlink, and I For One Want It Back.

By - December 16, 2010

Remember back in the halcyon days of the web, when bloggers shared a sense of community with each other, linking back and forth to each other as a matter of social grace and conversation, as opposed to calculated consideration?

Well, if not, that’s how it was back in 2003 or so, when I started blogging. Now, that signal (who linked to you recently) is gone, and honestly, not just for blogging. It’s also gone for most of the web. Of course, you can find it, if you want to geek out in your referrer logs. But honestly, why have we buried it there?

The funny thing is, this is the very signal Larry Page was looking for when he came upon the idea for Google with Sergey. Backrub, remember?

I sense there’s about to be some serious reconsideration of the value of declarative and transparent backlinks. I don’t know why, but call it an itch I’m scratchin’, rather like that of RSS….

All of this brought on by my continued and early explorations of Tumblr….

Signal, Curation, Discovery

By - December 11, 2010


This past week I spent a fair amount of time in New York, meeting with smart folks who collectively have been responsible for funding and/or starting companies as varied as DoubleClick, Twitter, Foursquare, Tumblr, Federated Media (my team), and scores of others. I also met with some very smart execs at American Express, a company that has a history of innovation, in particular as it relates to working with startups in the Internet space.

I love talking with these folks, because while we might have business to discuss, we usually spend most of our time riffing about themes and ideas in our shared industry. By the time I reached Tumblr, a notion around “discovery” was crystallizing. It’s been rattling around my head for some time, so indulge me an effort to Think It Out Loud, if you would.

Since its inception, the web has presented us with a discovery problem. How do we find something we wish to pay attention to (or connect with)? In the beginning this problem applied to just web sites – “How do I find a site worth my time?” But as the web has evolved, the problem keeps emerging again – first with discrete pieces of content – “How do I find the answer to a question about….” – and then with people: “How do I find a particular person on the web?” And now we’ve started to combine all of these categories of discovery: “How do I find someone to follow who has smart things to say about my industry?” In short, over time, the problem has not gotten better, it’s gotten far more complicated. If all search had to do was categorize web content, I’d wager it’d be close to solved by now.

But I’m getting ahead of myself.

Our first solution to the web’s initial discovery problem was to curate websites into directories, with Yahoo being the most successful of the bunch. Yahoo became a crucial driver of the web’s first economic model: banner ads. It not only owned the largest share of banner sales, but it drove traffic to the lion’s share of second-party sites who also sold banner ads.

But directories have clumsy interfaces, and they didn’t scale to the overwhelming growth in the number of websites. There were too many sites to catalog, and it was hard to determine the relative rank of one site to another, in particular in the context of what any one individual might find relevant (this is notable – because where directories broke down was essentially around their inflexibility to deal with individuals’ specific discovery needs. Directories failed at personalization, and because they were human-created, they failed to scale. Ironically, the first human-created discovery product failed to feel…human).

Thus, while Yahoo remains to this day a major Internet company, its failure to keep up with the Internet’s discovery problem left an opening for a new startup, one that solved discovery for the web in a new way. That company, of course, was Google. By the end of the 1990s, five years into the commercial web, discovery was a mess. One major reason was that what we wanted to discover was shifting – from sites we might check out to content that addressed our specific needs.

Google exploited the human-created link as its cat-herding signal. While one might argue around the edges, what Google did was bring the web’s content to heel. Instead of using the site as the discrete unit of discovery, it used the page – a specific unit of content. (Its core algorithm, after all, was called PageRank – yes, named after co-founder Larry Page, but the entendre stuck because it was apt).
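The idea behind PageRank can be sketched in a few lines of power iteration over a toy link graph (the graph, damping factor, and iteration count below are illustrative assumptions on my part, not Google’s production algorithm):

```python
# Minimal PageRank-style power iteration over a toy link graph.
# links[page] = pages that `page` links out to. The graph is invented.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
damping = 0.85  # standard damping factor from the original formulation
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):
    # Every page keeps a small baseline, then receives a share of the
    # rank of each page that links to it.
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

# Pages with more inbound links from well-ranked pages score higher:
# "c" is linked by a, b, and d, so it wins here.
print(max(rank, key=rank.get))  # c
```

The signal is exactly what the post describes: a human placing a link is treated as a vote, and votes from highly-voted pages count for more.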

Google search not only revolutionized discovery, it created an entire ecosystem of economic value, one that continues to be the web’s most powerful (at least for now). As with the Yahoo era, Google became not only the web’s largest seller of advertising, it also became the largest referrer of traffic to other sites that sold advertising. Google proved the thesis that if you find a strong signal (the link), and curate it at scale (the search engine), you can become the most important company in the Internet economy. With both, of course, the true currency was human attention.

But once again, what we want to pay attention to is changing. Sure, we still want to find good sites (Yahoo’s original differentiation), and we want to find just the right content (Google’s original differentiation). But now we also want to find out “What’s Happening” and “Who’s Doing What”, as well as “Who Might I Connect With” in any number of ways.*

All of these questions are essentially human in nature, and that means the web has pivoted, as many have pointed out, from a site- and content-specific axis to a people-specific axis. Google’s great question is whether it can pivot with the web – hence all the industry speculation about Google’s social strategy, its sharing of data with Facebook (or not), and its ability to integrate social signal into its essentially HTML-driven search engine.

While this drama plays out, the web once again is becoming a mess when it comes to discovery, and once again new startups have sprung up, each providing new approaches to curate signal from the ever-increasing noise. They are, in order of founding, Facebook, Twitter, and Tumblr, and oddly enough, while each initially addressed an important discovery problem, they also in turn created a new one, in the process opening up yet another opportunity – one that subsequent (or previous) companies may well take advantage of.

Let me try to explain, starting with Facebook. When Facebook started, it was a revelation for most – a new way to discover not only what mattered on the web, but a way to connect with your friends and family, as well as discover new people you might find interesting or worthy of “friending.” Much as Google helped the web pivot from sites to content, Facebook became the axis for the web’s pivot to people. The “social graph” became an important curator of our overall web experience, and once again, a company embarked on the process of dominating the web: find a strong signal (the social graph), curate it at scale (the Facebook platform), and you may become the most important company in the Internet economy (the jury is out on Facebook overtaking Google for the crown, but I’d say deliberations are certainly keeping big G up at night).

But a funny thing has started to happen to Facebook – at least for me, and a lot of other folks as well. It’s getting to be a pretty noisy place. The problem is one, again, of scale: the more friends I have, the more noise there is, and the less valuable the service becomes. Not to mention the issue of instrumentation: Facebook is a great place for me to instrument my friend graph, but what about my interests, my professional life, and my various other contextual identities? Not to mention, Facebook wasn’t a very lively place to discover what’s up, at least not until the newsfeed was forced onto the home page.

Credit Twitter for that move. Twitter’s original differentiation was its ability to deliver a signal of “what’s happening”. Facebook quickly followed suit, but Twitter remains the strongest signal, in the main because of its asymmetrical approach to following, as opposed to symmetric friending. Twitter is yet another company that has the potential to be “the next Yahoo or Google” when it comes to signal, discovery, and curation, but it’s not there yet. Far too many folks find Twitter to be mostly noise and very little signal.

In its early years, things were even worse. When I first started using Twitter, I wrote quite a bit about Twitter’s discovery problem – it was near impossible to find the right folks to follow, and once you did, it was almost as difficult to curate value from the stream of tweets those people created.

Twitter’s first answer to its discovery problem – the Suggested User List – was pretty much Yahoo 1994: A subjective, curated list of interesting tweeters. The company’s second attempt, “Who To Follow,” is a mashup of Google 2001 and Facebook 2007: an algorithm that looks at what content is consumed and who you follow, then suggests folks to follow. I find this new iteration very useful, and have begun to follow a lot more folks because of it.

But now I have a new discovery problem: There’s simply too much content for me to grok. (For more on this, see Twitter’s Great Big Problem Is Its Massive Opportunity). Add in Facebook (people) and Google search (a proxy for everything on the web), and I’m overwhelmed by choices, all of them possibly good, but none of them ranked in a way that helps me determine which I should pay attention to, when, or why.

It’s 1999 all over again, and I’m not talking about a financing bubble. The ecosystem is ripe for another new player to emerge, and that’s one of the reasons I went to see the folks at Tumblr yesterday.

As I pointed out in Social Editors and Super Nodes – An Appreciation of RSS, Tumblr is growing like, well, Google in 2002, Facebook in 2006, or Twitter in 2008. The question I’d like to know is….why?

I’m just starting to play with the service, but I’ve got a thesis: Tumblr combines the best of self expression (Facebook and blogging platforms) with the best of curation (Twitter and RSS), and seems to have stumbled into a second-order social interest graph to boot (I’m still figuring out the social mores of Tumblr, but I am quite certain they exist). People who use Tumblr a lot tell me it “makes them feel smarter” about what matters in the web, because it unpacks all sorts of valuable pieces of content into one curated stream – a stream curated by people who you find interesting. It’s sort of a rich media Twitter, but the stuff folks are curating seems far more considered, because they are in a more advanced social relationship with their audience than with folks on Twitter. In a way, it feels like the early days of blogging, crossed with the early days of Twitter. With a better CMS and a dash of social networking, and a twist. If that makes any sense at all.

Tumblr, in any case, has its drawbacks: It feels a bit like a walled garden, it doesn’t seem to play nice with the “rest of the web” yet, and – here’s the kicker – finding people to follow is utterly useless, at least in the beginning.

Just as with Twitter in the early days, it’s nearly impossible to find interesting people to follow on Tumblr, even if you know they’re there. For example, I knew that Fred Wilson, who I respect greatly, is a Tumblr user (and investor), so as soon as I joined the service, I typed his name into the search bar at the top of Tumblr’s “dashboard” home page. No results. That’s because that search bar only searches what’s on your page, not all of Tumblr itself. In short, Tumblr’s search is deeply broken, just like Twitter’s search was back in the day (and web search was before Google). I remember asking Evan Williams, in 2008, the best way to find someone on Twitter, and his response was “Google them, and add the word Twitter.” I’m pretty sure the same is true at present for Tumblr. (It’s how I found Fred, anyway).

Continuing the echoes of past approaches to the same problem, Tumblr currently provides a “suggested users” like directory on its site, highlighting folks you might find interesting. I predict this will not be around for long – because it simply doesn’t solve the problem we want it to solve. I want to find the right users for me to follow, not ones that folks at Tumblr find interesting.

If Tumblr can iron out these early kinks, well, I’d warrant it will take its place in the pantheon of companies who have found a signal, curated it at scale, and solved yet another important discovery problem. The funny thing is, all of those companies are still in the game – even Yahoo, who I’ve spent quite a bit of time with over the past few months. I’m looking forward to continuing the conversation about how they approach the opportunity of discovery, and how each might push into new territories. Twitter, for example, seems clearly headed toward a Tumblr-like approach to content curation and discovery with its right-hand pane. Google continues to try to solve for people discovery, and Facebook has yet to prove it can scale as a true content-discovery engine.

The folks at Google used to always say “search is a problem that is only five-percent solved.” I think now they might really mean “discovery is a problem that will always need to be solved.” Keep trying, folks. It gets more interesting by the day.

* I’m going to leave out the signals of commerce (What I want to buy) and location (Where I am now) for later ruminations. If you want my early framing thoughts, check out Database of Intentions Chart – Version 2, Updated for Commerce, The Gap Scenario, and My Location Is A Box of Cereal for starters.

Is This a Story?

By - December 06, 2010

Here’s one for you, folks: A few folks “in the know” told me that a company is thinking about doing something with another company, but that second company has no idea about it, and in order for the whole thing to play out, a whole lot of things need to happen first, most of which are totally dependent on other things happening over which the sources have no control!

Great story, eh?

Well, that’s the entire content of this Reuters story about AOL, one that has gotten a lot of play (including a spot on tech leaderboard TechMeme).

This piece is yet another example of the kind of “journalism” that is increasingly gaining traction in the tech world – pageview-baiting single-sourced speculation, with very little reporting, tossed online for the hell of it to see what happens.

It’s lazy, it’s unsupportable, and it’s tiresome.

To me, it’s not even about the crass commercialist drive to be “first” or to drive the most pageviews. It’s about the other side of the story – the sources. Reuters, in particular, as a bastion of “traditional” journalistic mores, should know that the “source” who gave this reporter the story has his or her own agenda. More likely than not, that agenda is to lend credibility to the idea that AOL and Yahoo should merge. It’s a huge disservice to the craft of journalism to let your obligations to your readers be so easily manipulated.

I miss the Industry Standard sometimes.

Social Editors and Super Nodes – An Appreciation of RSS

By - December 03, 2010

Yesterday I posted what was pretty much an offhand question – Is RSS Dead? I had been working on the FM Signal, a roundup of the day’s news I post over at the FM Blog. A big part of editing that daily roundup is spent staring into my RSS reader, which culls about 100 or so feeds for me.

I realized I’ve been staring into an RSS reader for the better part of a decade now, and I recalled the various posts I’d recently seen (yes, via my RSS reader) about the death of RSS. Like this one, and this one, and even this one, from way back in 2006. All claimed RSS was over, and, for the most part, that Twitter killed it.

I wondered to myself – am I a dinosaur? I looked at Searchblog’s RSS counter, which has been steadily growing month after month, and realized it was well over 200,000 (yesterday it added 4K folks, from 207K to 211K). Are those folks all zombies or spam robots? I mean, why is it growing? Is the RSS-reading audience really out there?

So I asked. And man, did my RSS readers respond. More than 100 comments in less than a day – the second most I’ve ever gotten in that period of time, I think. And that’s from RSS readers – so they had to click out of their comfy reader environment, come over to the boring HTML web version of my site, pass the captcha/spam test, put in their email, and then write a comment. In short, they had to jump through a lot of hoops to let me know they were there. Hell, Scoble – a “super node” if ever there was one – even chimed in.

I’ve concluded that each comment someone takes the time to leave serves as a proxy for 100 or so folks who probably echo that sentiment, but don’t take the time to leave a missive. It’s a rough guess, but I think it’s in the ballpark, based on years of watching traffic flows and comment levels on my posts. By that math, 100 comments in 24 hours equates to something like 10,000 engaged readers – a major response on this small little site, and it’s worth contemplating the feedback.

One comment that stood out for me came from Ged Carroll, who wrote:

Many people are happy to graze Twitter, but the ‘super nodes’ that are the ‘social editors’ need a much more robust way to get content: RSS. If you like, RSS is the weapon of choice for the content apex predator, rather than the content herbivores.

A “content apex predator”! Interesting use of metaphor – but I think Ged is onto something here. At Federated, we’ve made a business of aligning ourselves with content creators who have proved themselves capable of convening an engaged and influential audience. That’s the heart of publishing – creating a community of readers/viewers/users who like what you have to say or the service you offer.

And while more and more folks are creating content of value on the web, that doesn’t mean they are all “publishers” in the sense of being professionals who make their living that way. Ged’s comment made me think of Gladwell’s “connectors” – there certainly is a class of folks on the web who derive and create value by processing, digesting, considering and publishing content, and not all of them are professionals in media (in fact, most of them aren’t).

In my post I posited that perhaps RSS was receding into a “Betamax” phase, where only “professionals” in my industry (media) would be users of it. I think I got that wrong, at least in spirit. There is most definitely an RSS cohort of sorts, but it’s not one of “media professionals.” Instead, I think “social editors” or “super nodes” is more spot on. These are the folks who feel compelled to consume a lot of ideas (mainly through the written word), process those ideas, and then create value by responding or annotating those ideas. They derive social status and value by doing so – we reward people who provide these services with our attention and appreciation. They have more Twitter followers than the average bear. They probably have a blog (like Ged does). And they’re most likely the same folks who are driving the phenomenal growth of Tumblr.

Social editors who convene the largest audiences can actually go into business doing what they love – that’s the premise of FM’s initial business model.

But there are orders of magnitude more folks who do this well, but may not want to do it full time as a business, or who are content with the influence of an audience in the hundreds or thousands, as opposed to the hundreds of thousands or millions many FM sites reach.

I’m learning a lot about this cohort via FM’s recent acquisition of BigTent and Foodbuzz – both of these businesses have successfully created platforms where influential social editors thrive.

RSS is most certainly not dead, but as many commentators noted, it may evolve quite a bit in the coming years. It has so much going for it – it’s asynchronous, it’s flexible, it’s entirely subjective (in that you pick your feeds yourself, as opposed to how Facebook works), it allows for robust UIs to be built around it. It’s a fundamentally “Independent Web” technology.
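To make the “flexible, asynchronous” point concrete: an RSS feed is just XML that any client can pull apart and build its own interface on top of. Here’s a minimal sketch in Python using only the standard library – the feed content is made up for illustration, not from any real site:

```python
import xml.etree.ElementTree as ET

def feed_items(rss_xml):
    """Parse an RSS 2.0 document and return (title, link) pairs for its items."""
    root = ET.fromstring(rss_xml)
    # RSS 2.0 nests <item> elements under <channel>; iter() walks to them all
    return [(i.findtext("title"), i.findtext("link")) for i in root.iter("item")]

# A toy feed -- in practice a reader fetches this from a publisher's feed URL
SAMPLE = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>Is RSS Dead?</title><link>http://example.com/rss-dead</link></item>
  <item><title>Social Editors</title><link>http://example.com/editors</link></item>
</channel></rss>"""

for title, link in feed_items(SAMPLE):
    print(title, "->", link)
```

The point of the sketch: the subscriber, not the publisher, decides how items get rendered, sorted, and filtered – which is exactly why such robust UIs can be built around the format.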

But RSS also has a major problem: there’s no native monetization signal that travels with it. Yesterday’s post proves I have a lot of active RSS readers, but I don’t have a way to engage them with intelligent marketing (save running Pheedo or Adsense, which isn’t exactly what I’d call a high-end solution). I, like many others, pretty much gave up on RSS as a brand marketing vehicle a couple of years back. There was no way to “prove” folks were actually paying attention, and without that proof, marketers will only buy on the come – that’s why you see so many direct response ads running in RSS feeds.

It does seem that no one is really developing “for” RSS anymore.

Except, I am. At FM we use RSS in various robust fashions to pipe content from our network of hundreds (now thousands) of talented “social editors” into multitudes of marketing and content programs. (Check out FoodPress for just one example). We’ve even developed a product we call “superfeeds” that allows us to extend what is possible with RSS. In short, we’d be lost without RSS, and from the comments on my post, it seems a lot of other folks would be too, and in particular, folks who perform the critical role of “super node” or “social editor.”

So long live the social editor, and long live RSS. Perhaps it’s time to take another look at how we might find an appropriate monetization signal for the medium. I’m pretty sure that marketers would find conversing with a couple hundred thousand “super nodes” valuable – if only we could figure out a way to make that value work for all involved.


Google's "Opinion" Sparks Interesting Dialog On Tying of Services to Search

By - December 02, 2010

Yesterday’s post on Google having an algorithmic “opinion” about which reviews were negative or positive sparked a thoughtful response from Matt Cutts, Google’s point person on search quality, and for me raised a larger question about Google’s past, present, and future.

In his initial comment (which is *his* opinion, not Google’s, I am sure), Cutts remarked:

“…the “opinion” in that sentence refers to the fact our web search results are protected speech in the First Amendment sense. Court cases in the U.S. (search for SearchKing or Kinderstart) have ruled that Google’s search results are opinion. This particular situation serves to demonstrate that fact: Google decided to write an algorithm to tackle the issue reported in the New York Times. We chose which signals to incorporate and how to blend them. Ultimately, although the results that emerge from that process are algorithmic, I would absolutely defend that they’re also our opinion as well, not some mathematically objective truth.”

While Matt is simply conversing on a blog post, the point he makes is not just a legal nit, it’s a core defense of Google’s entire business model. In two key court cases, Google has prevailed with a first amendment defense. Matt reviews these in his second comment:

“SearchKing sued Google and the resulting court case ruled that Google’s actions were protected under the first amendment. Later, KinderStart sued Google. You would think that the SearchKing case would cover the issue, but part of KinderStart’s argument was that Google talked about the mathematical aspects of PageRank in our website documentation. KinderStart not only lost that lawsuit, but KinderStart’s lawyer was sanctioned for making claims he couldn’t back up… After the KinderStart lawsuit, we went through our website documentation. Even though Google won the case, we tried to clarify where possible that although we employ algorithms in our rankings, ultimately we consider our search results to be our opinion.”

The key point, however, is made a bit later, and it’s worth highlighting:

“(the) courts have agreed … that there’s no universally agreed-upon way to rank search results in response to a query. Therefore, web rankings (even if generated by an algorithm) are an expression of that search engine’s particular philosophy.”

Matt reminded us that he’s made this point before, on Searchblog four years ago:

“When savvy people think about Google, they think about algorithms, and algorithms are an important part of Google. But algorithms aren’t magic; they don’t leap fully-formed from computers like Athena bursting from the head of Zeus. Algorithms are written by people. People have to decide the starting points and inputs to algorithms. And quite often, those inputs are based on human contributions in some way.”

Back then, Matt also took pains to point out that his words were his opinion, not Google’s.

So let me pivot from Matt’s opinion to mine. All of this is fraught, to my mind, with implications of the looming European investigation. The point of the European action, it seems to me, is to find a smoking gun that proves Google is using a “natural monopoly” in search to favor its own products over those of competitors.

Danny has pointed out the absurdity of such an investigation if the point is to prove Google favors its search results over the search results of competitors like Bing or others. But I think the case will turn on different products, or perhaps a different definition of what constitutes “search results.” The question isn’t whether Google should show competitors’ standard search results, it’s whether Google favors its owned and operated services, such as those in local (Google Places instead of Foursquare, Facebook, etc.), commerce (Checkout instead of Paypal), video (YouTube instead of Hulu, etc.), content (Google Finance instead of Yahoo Finance or others, Blogger instead of WordPress, its bookstore over others, etc.), applications (Google Apps instead of MS Office), and on and on.

That is a very tricky question. After all, aren’t those “search results” also? As I wrote eons ago in my book, this most certainly is a philosophical question. Back in 2005, I compared Yahoo’s approach to search with Google’s:

Yahoo makes no pretense of objectivity – it is clearly steering searchers toward its own editorial services, which it believes can satisfy the intent of the search. … Apparent in that sentiment lies a key distinction between Google and Yahoo. Yahoo is far more willing to have overt editorial and commercial agendas, and to let humans intervene in search results so as to create media that supports those agendas. Google, on the other hand, is repelled by the idea of becoming a content- or editorially-driven company. While both companies can ostensibly lay claim to the mission of “organizing the world’s information and making it accessible” (though only Google actually claims that line as its mission), they approach the task with vastly different stances. Google sees the problem as one that can be solved mainly through technology – clever algorithms and sheer computational horsepower will prevail. Humans enter the search picture only when algorithms fail – and only then grudgingly. But Yahoo has always viewed the problem as one where human beings, with all their biases and brilliance, are integral to the solution.

I then predicted some conflict in the future:

But expect some tension over the next few years, in particular with regard to content. In late 2004, for example, Google announced they would be incorporating millions of library texts into its index, but made no announcements about the role the company might play in selling those texts. A month later, Google launched a video search service, but again, stayed mum on if and how it might participate in the sale of television shows and movies over the Internet.

Besides the obvious – I bet Google wishes it had gotten into content sales back in 2004, given the subsequent rise of iTunes – there’s still a massive tension here, between search services that the world believes to be “objective” and Google’s desire to compete as its market evolves.

Not to belabor this, but here’s more from my book on this issue, which feels pertinent given the issues Google now faces, both in Europe and in the US with major content providers:

… for Google to put itself into the position of media middle man is a perilous gambit – in particular given that its corporate DNA eschews the almighty dollar as an arbiter of which content might rise to the top of the heap for a particular search. Playing middle man means that in the context of someone looking for a movie, Google will determine the most relevant result for terms like “slapstick comedy” or “romantic musical” or “jackie chan film.” For music, it means Google will determine what comes first for “usher,” but it also means it will have to determine what should come first when someone is looking for “hip hop.”

Who gets to be first in such a system? Who gets the traffic, the business, the profits? How do you determine, of all the possibilities, who wins and who loses? In the physical world, the answer is clear: whoever pays the most gets the positioning, whether it’s on the supermarket shelf or the bin-end of a record store. ….But Google, more likely than not, will attempt to come up with a clever technological solution that attempts to determine the most “objective” answer for any given term, be it “romantic comedy” or “hip hop.” Perhaps the ranking will be based on some mix of PageRank, download statistics, and Lord knows what else, but one thing will be certain: Google will never tell anyone how they came to the results they serve up. Which creates something of a Catch-22 when it comes to monetization. Will Hollywood really be willing to trust Google to distribute and sell their content absent the commercial world’s true ranking methodology: cold, hard cash?

…Search drives commerce, and commerce drives search. The two ends are meeting, inexorably, in the middle, and every major Internet player, from eBay to Microsoft, wants in. Google may be tops in search for now, but in time, being tops in search will certainly not be enough.

Clearly, as a new decade unfolds, search alone is not enough anymore, and my prediction that Google will protect itself with the shield of “objectivity” has been upended. But the question of how Google ties its massive lead in search to its new businesses in local, content, applications, and other major markets remains tricky, and at this point, quite unresolved.

Twitter's Great Big Problem Is Its Massive Opportunity

By - November 30, 2010


One of the many reasons I find Twitter fascinating is that the company seems endlessly at an inflection point. Eighteen months ago I was tracking its inflection point in usage (holy shit, look how it’s growing! Then, holy shit, has it stopped?!), then its inflection in business model (hey, it doesn’t have one! Wait, yes it does, but can it scale?!), and more recently, its inflection point in terms of employees (as in growing from 80+ staff to 350+ in one year – necessitating a shift in management structure….).

Twitter now faces yet another inflection point – one I’ve been tracking for some time, and one that seems to be coming to a head. To me, that inflection has to do with usefulness – can the service corral all the goodness that exists in its network and figure out a way to make it useful to its hundreds of millions of users?

To me, this inflection point is perhaps its most challenging, and its greatest opportunity, because it encompasses all the others. If Twitter creates delightful instrumentations of the unique cacophony constantly crossing its servers, it wins big time. Users will never leave, marketers will never get enough, and employees will pine to join the movement (witness Facebook now, and Google five years ago).

Now, I’m not saying Twitter isn’t already a success. It is. The service has a dedicated core of millions who will never leave the service (I’m one of them). And I’m going to guess Twitter gets more resumes than it knows what to do with, so hiring isn’t the problem. And lastly, I’ve been told (by Ev, onstage at Web 2) that the company has more marketing demand than it can fulfill.

But therein lies the rub. Twitter has the potential to be much more, and everyone there knows it. It has millions of dedicated users, but it also has tens of millions who can’t quite figure out what the fuss is all about. And you can’t hire hundreds of engineers and product managers unless you have a job for them to do – a scaled platform that has, at its core, a product that everyone and their mother understands.

As for that last point – the surfeit of marketing demand – well that’s also a problem. Promoted tweets, trends, and accounts are a great start, but if you don’t have enough inventory to satisfy demand, you’ve not crossed the chasm from problem to opportunity.

In short, Twitter has a publishing problem. Put another way, it has a massive publishing opportunity.

Oh, I know, you’re saying “yeah Battelle, there you go again, thinking the whole world fits neatly into your favorite paradigm of publishing.”

Well yes, indeed, I do think that. To me, publishing is the art of determining what is worth paying attention to, by whom. And by that definition, Twitter most certainly is a publishing platform, one used by nearly 200 million folks.

The problem, of course, is that while Twitter makes it nearly effortless for folks to publish their own thoughts, it has done far too little to help those same folks glean value from the thoughts of others.

It was this simple truth that led FM to work with Microsoft to create ExecTweets, and AT&T to create the TitleTweets platform. It’s the same truth that led to the multi-pane interface of Tweetdeck as well as countless other Twitter apps, and it was with an eye toward addressing this problem that Twitter introduced Lists and its associated APIs.

But while all those efforts are worthy, they haven’t yet solved the core problem or addressed the massive opportunity. At its core, publishing is about determining signal from noise. What’s extraordinary about Twitter is the complexity of that challenge – one man’s noise is another man’s signal, and vice versa. And what’s signal now may well be noise tomorrow – or two minutes from now. Multiply this by 200 million or so, then add an exponential element. Yow.

There is both art and science to addressing this challenge. What we broadly understand to be “the media” have approached the problem with a mostly one-to-many approach: We define an audience, determine what it most likely will want to pay attention to, then feed it our signal, one curated and culled from the noise of all possible information associated with that topic. Presto, we make Wired magazine, Oprah, or Serious Eats.

Facebook has done the same with information associated with our friend graph. The newsfeed is, for all intents and purposes, a publication created just for you. Sure, it has its drawbacks, but it’s pretty darn good (though its value is directly determined by the value you place in your Facebook friend graph. Mine, well, it don’t work so well, for reasons of my own doing).

So how might Twitter create such a publication for each of its users? As many have pointed out (including Twitter’s CEO Dick Costolo), Twitter isn’t a friend graph, it’s more of an interest graph, or put another way, an information graph – a massive set of points interconnected by contextual meaning. To the uninitiated, this graph is daunting.

Twitter’s current approach to navigating this graph centers on following human beings – at first with its “suggested users” list, which simply didn’t scale. Twitter soon replaced suggested users with “Who to follow” – a more sophisticated, algorithm-driven list of folks who seem to match your current follow list and, to some extent, your interests. When you follow someone who’s a big foodie, for example, Twitter will suggest other folks who tweet about food. It does so, one presumes, by noting shared interests between users.

The question is, does Twitter infer those interests via the signal of who follows who, or does it do it by actually *understanding* what folks are tweeting about?

Therein, I might guess, lies the solution. The former is a proxy for a true interest graph – “Hey, follow these folks, because they seem to follow folks who are like the folks you already follow.” But the latter *is* an interest graph – “Hey, follow these folks, because they tweet about things you care about.”
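To make that distinction concrete, here’s a hedged sketch of the content-based approach – scoring candidate users by overlap in what they actually tweet about, rather than by follow-graph proximity. The tokenization, the Jaccard measure, and all the names here are my own illustration, not Twitter’s actual method:

```python
def interests(tweets, stopwords=frozenset({"the", "a", "to", "and", "of"})):
    """Reduce a user's tweets to a crude set of interest terms."""
    words = (w.lower().strip(".,!?") for t in tweets for w in t.split())
    return {w for w in words if w and w not in stopwords}

def shared_interest(tweets_a, tweets_b):
    """Jaccard similarity between two users' interest-term sets."""
    a, b = interests(tweets_a), interests(tweets_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def who_to_follow(my_tweets, candidates, k=3):
    """Rank candidate users by tweet-content overlap, not who-follows-whom."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: shared_interest(my_tweets, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical users and tweets, purely for illustration
foodie_tweets = ["Best ramen in town", "ramen and pork buns tonight"]
candidates = {
    "chef": ["ramen recipe tips", "pork stock secrets"],
    "trader": ["markets down today", "buy the dip"],
}
print(who_to_follow(foodie_tweets, candidates, k=1))  # prints ['chef']
```

The follow-graph proxy needs no understanding of content at all; this version suggests the chef to the foodie because their tweets overlap on “ramen” and “pork” – a toy version of inferring the interest graph from what people say rather than who they follow.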

From that logically comes the ability to filter folks’ streams based on interests, and once you can do that, well, things get really…interesting. You could follow interests, instead of people, for example. It’s like search meets social! And hey – isn’t that kind of the holy grail?

If Twitter can make the interest graph explicit to its users, and develop products and features which surface that graph in real time, it wins on all counts. That is a very big problem to solve, and a massive opportunity to run after.

For more on this, read Making Twitter an Information Network, by Mike Champion, and “The unbearable lameness of web 2.0”, by Kristian Köhntopp, as well as the wonderful but too short The Algorithm + the Crowd are Not Enough, by Rand Fishkin. These and many more have informed my ongoing thinking on this topic. What do you think?