Joints After Midnight & Rants Archives | Page 16 of 42 | John Battelle's Search Blog

Thinking Out Loud: What's Driving Groupon?

By - December 17, 2010


In the current issue of the New Yorker, columnist James Surowiecki, whom I generally admire, gets it exactly wrong when it comes to Groupon.

He writes:

“But it seems unlikely that it’s going to become a revolutionary company, along the lines of YouTube, Facebook, Twitter, and Google. … Groupon, by contrast, is a much more old-school business. It doesn’t have any obvious technological advantage. Its users don’t really do anything other than hit the “buy” button. And its business requires lots of hands-on attention…”

Well, that’s a defensible opinion, but after visiting CEO Andrew Mason this week in Chicago, and thinking about it a bit, I must say that I wholeheartedly disagree.

Many folks think of Groupon as a relatively simple idea. A daily deal, a large sales force, and that’s about it. Too easy to copy (there are scores of “Groupon clones”), and too labor intensive (the more small businesses you want to work with, the more sales and service people you need).

All this is true. But it fails to understand the power of Groupon’s model. To sum it up: Groupon has built a new channel into the heart of the world’s economic activity: small businesses. And it is in that channel that the true power lies.

First, the economic math: Small businesses generate more than 50% of US GDP and create more than 75% of net new jobs each year. But small businesses represent a fragmented, maddeningly difficult sector of our economy – 23 million small pieces loosely joined. Any platform that has connected them and added value to their bottom line has turned into a massive new business.


Over the past century, there have been two such new platforms. The most recent is Google, a proxy for the rise of the web as a platform for small business lead generation. Before that, it was the Yellow Pages, a proxy for the rise of the telephone as a platform for lead generation.

Groupon, I believe, has the potential to be a new proxy – one that subsumes the platforms of both the Internet and the telephone, and adds multiple dimensions beyond them.

I know that’s a stretch, but hear me out.

First, let’s review the Yellow Pages. What is it? Well, for the most part, it’s a paper-based publishing platform that combines a curated local business phone directory with advertising listings. Nearly every small business with a phone number is listed in the Yellow Pages, and a large percentage of them also buy advertising to promote their wares.

In short, the Yellow Pages is a platform that connects every single consumer with a phone to every single local business with a phone.

As a business, the Yellow Pages consists of folks who manage the listings and produce the books, as well as a very large sales force which calls on local businesses. Once a year, the product turns over, and a new book is made.

That’s it. Simple (and certainly not technologically defensible), and while it’s clearly in decline, the Yellow Pages is currently a $15 billion revenue business in the US alone. Now, the Yellow Pages is also an online business, but it was late to the party, and has pretty much lost to Google when it comes to the platform play.

Google represents a second new platform which connects consumers and small businesses. Many forget that it was small business that drove early adoption of AdWords (as well as Overture, its early competition). And while not every small business is yet online – 36% of US small businesses still don’t have a web site – the clear majority of them do, and millions of them use AdWords, as well as organic search, to drive leads to their business. Google makes billions of dollars leveraging its platform, which, by the way, has subsumed the Yellow Pages business and grown well past it into any number of other markets, including most major international regions.

Google alone is on a $30 billion revenue run rate, and it’s only ten years old. That’s twice the US revenues of the Yellow Pages, which were built up over more than 50 years.

So to review, the Yellow Pages leveraged the telephone to create a massively scaled and profitable platform connecting consumers and businesses. Google did the same, but leveraged the Internet (and subsumed the telephone as well).

And Groupon is doing it again, subsuming the telephone and the Internet, and leveraging an entirely new platform: the mobile web.

Now, before you yell at me and claim that Groupon is anything BUT a mobile-driven company (the company sends email to 40mm US subscribers, for example), recall that my definition of mobile is a bit more complicated than most.

Remember MOLRS? As I said in that post: “if you are going to think about mobile, you have to think about social, local, and real time.” In short, mobile is meaningless without context: where someone is (or is about to go), who someone is with (or is about to meet), and why someone is where they are (or with whom they are). And, of course, when someone is where they are (and with whomever they are there with…).

Whew. Sorry, but you get the picture.

Now, let’s think about MOLRS in relation to small business. First, small business owners (SBOs) care deeply about location. Are they in a good location? Will customers be able to find them? Is there parking? A good neighborhood? Strong foot traffic?

Second, SBOs care deeply about relationships and word of mouth (or what we will call social). Do people refer their friends and family to the business? Are people happy with the service? Will they say nice things?

Third, SBOs care very much about timing (what I call “real time” in my MOLRS breakdown). What are the best hours for foot traffic? What are the best times to run promotions? How can I bring in more business during slow times? How does seasonality affect my business? When should I have a sale?

In short, SBOs are driven by local, social, and real time.

Turns out, so is Groupon.

Now, ask any small business owner what they wish for more of, and they’ll give you a resounding answer: More customers. It’s why they pay for the Yellow Pages ad, and it’s why they buy AdWords from Google.

And it’s why they are starting to buy Groupon’s product, at a breakneck pace. Sure, some of them buy too much of it, or fail to do the math and lose money on the come. They’ll adjust, and if they don’t, smarter SBOs will eat their lunch, and the world will move on.

To my mind, the proof is in Groupon’s growth rate. I’ve never seen anything like it – well, since Google. And just as Google lapped the Yellow Pages in a fraction of the time, Groupon seems to be on track to do the same to Google.

Good sources have told me that Groupon is growing at 50 percent a month, with a revenue run rate of nearly $2 billion a year (based on last month’s revenues). By next month, that run rate may well hit $2.7 billion. The month after that, should the growth continue, the run rate would clear $4 billion.
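The compounding above is straightforward, but here's a quick sanity check of the math, sketched in Python. The base figure of $1.8 billion is my assumption (it's what makes "nearly $2 billion" compound to the $2.7 billion and $4 billion figures), as is the steady 50 percent monthly growth:

```python
# Back-of-the-envelope check of the compounding above. Assumed figures:
# a ~$1.8B annualized run rate growing 50% month over month.
def run_rates(start_billions, monthly_growth, months):
    """Return the annualized run rate (in $B) after each month of compounding."""
    rate = start_billions
    out = []
    for _ in range(months):
        rate *= 1 + monthly_growth
        out.append(round(rate, 2))
    return out

print(run_rates(1.8, 0.50, 2))  # → [2.7, 4.05]
```

Two months of 50 percent growth takes $1.8 billion to roughly $2.7 billion and then past $4 billion, matching the figures above.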

Google’s run rate, when revealed in its IPO filing six years ago, was staggering – it grew from under $200 million to $1.6 billion in less than three years. Groupon is on track to do the same – but in less than one year.

That’s pretty extraordinary. But remember, Groupon has figured out a way to deliver what SBOs want most: more customers in their stores. And unlike Google or the Yellow Pages, Groupon doesn’t sell advertising. Instead, it takes 50% of the actual revenue driven by its platform. Trust me, that’s potentially a much bigger number.

Actually, it’s pretty interesting to see how the business model of driving leads to businesses has shifted as each platform has risen to dominance. The Yellow Pages charged a set price for a display ad, with no guarantee that the ad would drive any leads. Google turned the model upside down, and charged only when people clicked on the ad. Groupon doesn’t charge anything at all: It simply takes half the revenue generated when a deal is fulfilled by its platform.
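To make the contrast concrete, here's a toy sketch of the three pricing models. Every number below is invented, purely to show the structure of each model, not any real rate card:

```python
# Illustrative comparison of the three lead-generation pricing models.
# All figures are made up; only the shape of each model matters.
def yellow_pages_revenue(ad_price):
    # Flat fee for a display ad, regardless of how many leads it drives.
    return ad_price

def adwords_revenue(clicks, cost_per_click):
    # Pay per click, regardless of whether a click becomes a sale.
    return clicks * cost_per_click

def groupon_revenue(vouchers_sold, voucher_price, rev_share=0.5):
    # A share (here, half) of the actual revenue the deal generates.
    return vouchers_sold * voucher_price * rev_share

print(yellow_pages_revenue(500))        # flat $500 display ad
print(adwords_revenue(1000, 0.75))      # 1,000 clicks at $0.75 each
print(groupon_revenue(400, 25))         # 400 vouchers at $25, 50% share
```

The point of the sketch: only the third model's revenue scales directly with the merchant's actual sales, which is why a revenue share can be a much bigger number than an ad fee.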

So to summarize, I think those who claim Groupon’s business is too simple are focused on the wrong things. Sure, there are other deal sites. But none have Groupon’s scale. Sure, Groupon’s model of one deal in one city on one day is limited, but it’s easy to see how the product scales against category, zip code, time of day, and many other variables. And sure, Groupon has a lot of people who have to touch a lot of businesses and a lot of customers every day. But to me, that’s the company’s strength: SBOs are in the people business, and therefore, so must Groupon be.

And this, to my mind, is why Facebook or Google can’t compete with Groupon. Imagine Facebook or Google with 1,000 people who do nothing but talk to customers all day long. Yep, I can hear the laughter from here….

While I was visiting earlier this week, CEO Mason told me that a significant percentage of Groupon’s customer service reps are members of Chicago’s vibrant improv scene. That makes sense to me – if you are going to deal with possibly upset people all day, it helps to have a culture of humor and thinking on your feet.

That culture will serve Groupon well as it attempts to deal with world record-breaking growth. While there is no certainty the company won’t blow its lead, it’s already a major international player. And while Mason would not comment on the rampant speculation over a $6 billion offer from Google that reportedly fell apart last week, in the end, it may be that the idea of Groupon being purchased by Google is as silly as the idea that the Regional Bell Operating Companies, which originally had the monopoly on the Yellow Pages market, could or should have bought Google.

In the end, it wouldn’t have been a fit.


We've (Still) Lost the Backlink, and I For One Want It Back.

By - December 16, 2010

Remember back in the halcyon days of the web, when bloggers shared a sense of community, linking back and forth to each other as a matter of social grace and conversation, as opposed to calculated consideration?

Well, if not, that’s how it was back in 2003 or so, when I started blogging. Now, that signal (who linked to you recently) is gone, and honestly, not just for blogging. It’s also gone for most of the web. Of course, you can find it, if you want to geek out in your referrer logs. But honestly, why have we buried it there?

The funny thing is, this is the very signal Larry Page was looking for when he came upon the idea for Google with Sergey. Backrub, remember?

I sense there’s about to be some serious reconsideration of the value of declarative and transparent backlinks. I don’t know why, but call it an itch I’m scratchin’, rather like that of RSS….

All of this brought on by my continued and early explorations of Tumblr….

Signal, Curation, Discovery

By - December 11, 2010


This past week I spent a fair amount of time in New York, meeting with smart folks who collectively have been responsible for funding and/or starting companies as varied as DoubleClick, Twitter, Foursquare, Tumblr, Federated Media (my team), and scores of others. I also met with some very smart execs at American Express, a company that has a history of innovation, in particular as it relates to working with startups in the Internet space.

I love talking with these folks, because while we might have business to discuss, we usually spend most of our time riffing about themes and ideas in our shared industry. By the time I reached Tumblr, a notion around “discovery” was crystallizing. It’s been rattling around my head for some time, so indulge me an effort to Think It Out Loud, if you would.

Since its inception, the web has presented us with a discovery problem. How do we find something we wish to pay attention to (or connect with)? In the beginning this problem applied to just web sites – “How do I find a site worth my time?” But as the web has evolved, the problem keeps emerging again – first with discrete pieces of content – “How do I find the answer to a question about….” – and then with people: “How do I find a particular person on the web?” And now we’ve started to combine all of these categories of discovery: “How do I find someone to follow who has smart things to say about my industry?” In short, over time, the problem has not gotten better, it’s gotten far more complicated. If all search had to do was categorize web content, I’d wager it’d be close to solved by now.

But I’m getting ahead of myself.

Our first solution to the web’s initial discovery problem was to curate websites into directories, with Yahoo being the most successful of the bunch. Yahoo became a crucial driver of the web’s first economic model: banner ads. It not only owned the largest share of banner sales, but it drove traffic to the lion’s share of third-party sites that also sold banner ads.

But directories have clumsy interfaces, and they didn’t scale to the overwhelming growth in the number of websites. There were too many sites to catalog, and it was hard to determine the relative rank of one site against another, in particular in the context of what any one individual might find relevant. (This is notable, because where directories broke down was essentially their inflexibility in dealing with individuals’ specific discovery needs. Directories failed at personalization, and because they were human-created, they failed to scale. Ironically, the first human-created discovery product failed to feel… human.)

Thus, while Yahoo remains to this day a major Internet company, its failure to keep up with the Internet’s discovery problem left an opening for a new startup, one that solved discovery for the web in a new way. That company, of course, was Google. By the end of the 1990s, five years into the commercial web, discovery was a mess. One major reason was that what we wanted to discover was shifting – from sites we might check out to content that addressed our specific needs.

Google exploited the human-created link as its cat-herding signal. While one might argue around the edges, what Google did was bring the web’s content to heel. Instead of using the site as the discrete unit of discovery, it used the page – a specific unit of content. (Its core algorithm, after all, was called PageRank – yes, named after co-founder Larry Page, but the entendre stuck because it was apt).
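For the curious, the core idea behind PageRank – a page's score is fed by the scores of the pages linking to it – can be sketched as a short power iteration. The three-page graph and damping factor below are made up for illustration; this is a toy, not Google's production algorithm:

```python
# Minimal power-iteration sketch of the PageRank idea. The toy graph and
# damping factor are assumptions for illustration only.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Each page passes its rank, split evenly, to the pages it links to.
            inbound = sum(rank[q] / len(links[q])
                          for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * inbound
        rank = new
    return rank

toy = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(toy)
# "c" collects links from both "a" and "b", so it ends up ranked highest.
```

The upshot is the shift the paragraph above describes: the unit being scored is the page, and the human-created link is the signal.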

Google search not only revolutionized discovery, it created an entire ecosystem of economic value, one that continues to be the web’s most powerful (at least for now). As with the Yahoo era, Google became not only the web’s largest seller of advertising, it also became the largest referrer of traffic to other sites that sold advertising. Google proved the thesis that if you find a strong signal (the link), and curate it at scale (the search engine), you can become the most important company in the Internet economy. With both, of course, the true currency was human attention.

But once again, what we want to pay attention to is changing. Sure, we still want to find good sites (Yahoo’s original differentiation), and we want to find just the right content (Google’s original differentiation). But now we also want to find out “What’s Happening” and “Who’s Doing What”, as well as “Who Might I Connect With” in any number of ways.*

All of these questions are essentially human in nature, and that means the web has pivoted, as many have pointed out, from a site- and content-specific axis to a people-specific axis. Google’s great question is whether it can pivot with the web – hence all the industry speculation about Google’s social strategy, its sharing of data with Facebook (or not), and its ability to integrate social signal into its essentially HTML-driven search engine.

While this drama plays out, the web once again is becoming a mess when it comes to discovery, and once again new startups have sprung up, each providing new approaches to curate signal from the ever-increasing noise. They are, in order of founding, Facebook, Twitter, and Tumblr, and oddly enough, while each initially addressed an important discovery problem, they also in turn created a new one, in the process opening up yet another opportunity – one that subsequent (or previous) companies may well take advantage of.

Let me try to explain, starting with Facebook. When Facebook started, it was a revelation for most – a new way to discover not only what mattered on the web, but a way to connect with your friends and family, as well as discover new people you might find interesting or worthy of “friending.” Much as Google helped the web pivot from sites to content, Facebook became the axis for the web’s pivot to people. The “social graph” became an important curator of our overall web experience, and once again, a company embarked on the process of dominating the web: find a strong signal (the social graph), curate it at scale (the Facebook platform), and you may become the most important company in the Internet economy (the jury is out on Facebook overtaking Google for the crown, but I’d say deliberations are certainly keeping big G up at night).

But a funny thing has started to happen to Facebook – at least for me, and a lot of other folks as well. It’s getting to be a pretty noisy place. The problem is one, again, of scale: the more friends I have, the more noise there is, and the less valuable the service becomes. Not to mention the issue of instrumentation: Facebook is a great place for me to instrument my friend graph, but what about my interests, my professional life, and my various other contextual identities? Not to mention, Facebook wasn’t a very lively place to discover what’s up, at least not until the newsfeed was forced onto the home page.

Credit Twitter for that move. Twitter’s original differentiation was its ability to deliver a signal of “what’s happening”. Facebook quickly followed suit, but Twitter remains the strongest signal, in the main because of its asymmetrical approach to following, as opposed to symmetric friending. Twitter is yet another company that has the potential to be “the next Yahoo or Google” when it comes to signal, discovery, and curation, but it’s not there yet. Far too many folks find Twitter to be mostly noise and very little signal.

In its early years, things were even worse. When I first started using Twitter, I wrote quite a bit about Twitter’s discovery problem – it was near impossible to find the right folks to follow, and once you did, it was almost as difficult to curate value from the stream of tweets those people created.

Twitter’s first answer to its discovery problem – the Suggested User List – was pretty much Yahoo 1994: a subjective, curated list of interesting tweeters. The company’s second attempt, “Who To Follow,” is a mashup of Google 2001 and Facebook 2007: an algorithm that looks at what content is consumed and whom you follow, then suggests folks to follow. I find this new iteration very useful, and have begun to follow a lot more folks because of it.
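Twitter hasn't published how "Who To Follow" actually works, but a plausible sketch of the idea – rank accounts by how many of the people you already follow also follow them – looks something like this (the graph and all names are invented):

```python
from collections import Counter

# Hedged guess at the shape of a follow-suggestion heuristic: an account is
# a good candidate if many of your existing follows also follow it. Twitter's
# real algorithm is not public; this is purely illustrative.
def suggest(follows, user, top=3):
    """follows: dict mapping each user to the set of accounts they follow."""
    mine = follows.get(user, set())
    votes = Counter()
    for followee in mine:
        for candidate in follows.get(followee, set()):
            if candidate != user and candidate not in mine:
                votes[candidate] += 1
    return [name for name, _ in votes.most_common(top)]

graph = {
    "me":    {"ann", "bob"},
    "ann":   {"bob", "carol", "dave"},
    "bob":   {"carol", "eve"},
    "carol": {"me"},
}
print(suggest(graph, "me"))  # "carol" is followed by both ann and bob
```

The same "count votes from your existing graph" shape underlies most friend-of-friend recommenders, which is why it reads like a mashup of search ranking and the social graph.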

But now I have a new discovery problem: There’s simply too much content for me to grok. (For more on this, see Twitter’s Great Big Problem Is Its Massive Opportunity). Add in Facebook (people) and Google search (a proxy for everything on the web), and I’m overwhelmed by choices, all of them possibly good, but none of them ranked in a way that helps me determine which I should pay attention to, when, or why.

It’s 1999 all over again, and I’m not talking about a financing bubble. The ecosystem is ripe for another new player to emerge, and that’s one of the reasons I went to see the folks at Tumblr yesterday.

As I pointed out in Social Editors and Super Nodes – An Appreciation of RSS, Tumblr is growing like, well, Google in 2002, Facebook in 2006, or Twitter in 2008. The question I’d like answered is… why?

I’m just starting to play with the service, but I’ve got a thesis: Tumblr combines the best of self expression (Facebook and blogging platforms) with the best of curation (Twitter and RSS), and seems to have stumbled into a second-order social interest graph to boot (I’m still figuring out the social mores of Tumblr, but I am quite certain they exist). People who use Tumblr a lot tell me it “makes them feel smarter” about what matters in the web, because it unpacks all sorts of valuable pieces of content into one curated stream – a stream curated by people who you find interesting. It’s sort of a rich media Twitter, but the stuff folks are curating seems far more considered, because they are in a more advanced social relationship with their audience than with folks on Twitter. In a way, it feels like the early days of blogging, crossed with the early days of Twitter. With a better CMS and a dash of social networking, and a twist. If that makes any sense at all.

Tumblr, in any case, has its drawbacks: It feels a bit like a walled garden, it doesn’t seem to play nice with the “rest of the web” yet, and – here’s the kicker – finding people to follow is utterly useless, at least in the beginning.

Just as with Twitter in the early days, it’s nearly impossible to find interesting people to follow on Tumblr, even if you know they’re there. For example, I knew that Fred Wilson, whom I respect greatly, is a Tumblr user (and investor), so as soon as I joined the service, I typed his name into the search bar at the top of Tumblr’s “dashboard” home page. No results. That’s because that search bar only searches what’s on your page, not all of Tumblr itself. In short, Tumblr’s search is deeply broken, just like Twitter’s search was back in the day (and web search was before Google). I remember asking Evan Williams, in 2008, the best way to find someone on Twitter, and his response was “Google them, and add the word Twitter.” I’m pretty sure the same is true at present for Tumblr. (It’s how I found Fred, anyway).

Continuing the echoes of past approaches to the same problem, Tumblr currently provides a “suggested users” like directory on its site, highlighting folks you might find interesting. I predict this will not be around for long – because it simply doesn’t solve the problem we want it to solve. I want to find the right users for me to follow, not ones that folks at Tumblr find interesting.

If Tumblr can iron out these early kinks, well, I’d warrant it will take its place in the pantheon of companies that have found a signal, curated it at scale, and solved yet another important discovery problem. The funny thing is, all of them are still in the game – even Yahoo, with whom I’ve spent quite a bit of time over the past few months. I’m looking forward to continuing the conversation about how they approach the opportunity of discovery, and how each might push into new territories. Twitter, for example, seems clearly headed toward a Tumblr-like approach to content curation and discovery with its right-hand pane. Google continues to try to solve for people discovery, and Facebook has yet to prove it can scale as a true content-discovery engine.

The folks at Google used to always say “search is a problem that is only five-percent solved.” I think now they might really mean “discovery is a problem that will always need to be solved.” Keep trying, folks. It gets more interesting by the day.

* I’m going to leave out the signals of commerce (What I want to buy) and location (Where I am now) for later ruminations. If you want my early framing thoughts, check out Database of Intentions Chart – Version 2, Updated for Commerce, The Gap Scenario, and My Location Is A Box of Cereal for starters.

Is This a Story?

By - December 06, 2010

Here’s one for you, folks: a few people “in the know” told me that a company is thinking about doing something with another company, but that second company has no idea about it, and in order for the whole thing to play out, a whole lot of things need to happen first, most of which are totally dependent on other things happening over which the sources have no control!

Great story, eh?

Well, that’s the entire content of this Reuters story about AOL, one that has gotten a lot of play (including a spot on tech leaderboard TechMeme).

This piece is yet another example of the kind of “journalism” that is increasingly gaining traction in the tech world – pageview-baiting single-sourced speculation, with very little reporting, tossed online for the hell of it to see what happens.

It’s lazy, it’s unsupportable, and it’s tiresome.

To me, it’s not even about the crass commercialist drive to be “first” or to drive the most pageviews. It’s about the other side of the story – the sources. Reuters, in particular, as a bastion of “traditional” journalistic mores, should know that the “source” who gave this reporter the story has his or her own agenda. More likely than not, that agenda is to lend credibility to the idea that AOL and Yahoo should merge. It’s a huge disservice to the craft of journalism to let your obligations to your readers be so easily manipulated.

I miss the Industry Standard sometimes.

Social Editors and Super Nodes – An Appreciation of RSS

By - December 03, 2010

Yesterday I posted what was pretty much an offhand question – Is RSS Dead? I had been working on the FM Signal, a roundup of the day’s news I post over at the FM Blog. A big part of editing that daily roundup is spent staring into my RSS reader, which culls about 100 or so feeds for me.

I realized I’ve been staring into an RSS reader for the better part of a decade now, and I recalled the various posts I’d recently seen (yes, via my RSS reader) about the death of RSS. Like this one, and this one, and even this one, from way back in 2006. All claimed RSS was over, and, for the most part, that Twitter killed it.

I wondered to myself – am I a dinosaur? I looked at Searchblog’s RSS counter, which has been steadily growing month after month, and realized it was well over 200,000 (yesterday it added 4K folks, from 207K to 211K). Are those folks all zombies or spam robots? I mean, why is it growing? Is the RSS-reading audience really out there?

So I asked. And man, did my RSS readers respond. More than 100 comments in less than a day – the second most I’ve ever gotten in that period of time, I think. And that’s from RSS readers – so they had to click out of their comfy reader environment, come over to the boring HTML web version of my site, pass the captcha/spam test, put in their email, and then write a comment. In short, they had to jump through a lot of hoops to let me know they were there. Hell, Scoble – a “super node” if ever there was one – even chimed in.

I’ve concluded that each comment someone takes the time to leave serves as a proxy for 100 or so folks who probably echo that sentiment, but don’t take the time to leave a missive. It’s my rough guess, but I think it’s in the ballpark, based on years of watching traffic flows and comment levels on my posts. So 100 comments in 24 hours equates to a major response on this small little site, and it’s worth contemplating the feedback.

One comment that stood out for me came from Ged Carroll, who wrote:

Many people are happy to graze Twitter, but the ‘super nodes’ that are the ‘social editors’ need a much more robust way to get content: RSS. If you like RSS is the weapon of choice for the content apex predator, rather than the content herbivores.

A “content apex predator”! Interesting use of metaphor – but I think Ged is onto something here. At Federated, we’ve made a business of aligning ourselves with content creators who have proved themselves capable of convening an engaged and influential audience. That’s the heart of publishing – creating a community of readers/viewers/users who like what you have to say or the service you offer.

And while more and more folks are creating content of value on the web, that doesn’t mean they are all “publishers” in the sense of being professionals who make their living that way. Ged’s comment made me think of Gladwell’s “connectors” – there certainly is a class of folks on the web who derive and create value by processing, digesting, considering and publishing content, and not all of them are professionals in media (in fact, most of them aren’t).

In my post I posited that perhaps RSS was receding into a “Betamax” phase, where only “professionals” in my industry (media) would be users of it. I think I got that wrong, at least in spirit. There is most definitely an RSS cohort of sorts, but it’s not one of “media professionals.” Instead, I think “social editors” or “super nodes” is more spot on. These are the folks who feel compelled to consume a lot of ideas (mainly through the written word), process those ideas, and then create value by responding or annotating those ideas. They derive social status and value by doing so – we reward people who provide these services with our attention and appreciation. They have more Twitter followers than the average bear. They probably have a blog (like Ged does). And they’re most likely the same folks who are driving the phenomenal growth of Tumblr.

Social editors who convene the largest audiences can actually go into business doing what they love – that’s the premise of FM’s initial business model.

But there are orders of magnitude more folks who do this well, yet may not want to do it full time as a business, or who are content with the influence of an audience in the hundreds or thousands, as opposed to hundreds of thousands or millions, like many FM sites.

I’m learning a lot about this cohort via FM’s recent acquisition of BigTent and Foodbuzz – both of these businesses have successfully created platforms where influential social editors thrive.

RSS is most certainly not dead, but as many commentators noted, it may evolve quite a bit in the coming years. It has so much going for it – it’s asynchronous, it’s flexible, it’s entirely subjective (in that you pick your feeds yourself, as opposed to how Facebook works), it allows for robust UIs to be built around it. It’s a fundamentally “Independent Web” technology.
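For those who've never peeked under the hood, this is roughly what a reader does with each subscribed feed: parse the XML and pull item titles and links into one stream. The sample feed below is made up for illustration; a real reader would fetch each feed over HTTP:

```python
import xml.etree.ElementTree as ET

# Minimal sketch of the core of an RSS reader: parse an RSS 2.0 feed and
# collect (title, link) pairs. SAMPLE_FEED is invented for this example.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Searchblog</title>
  <item><title>Is RSS Dead?</title><link>http://example.com/rss-dead</link></item>
  <item><title>Signal, Curation, Discovery</title><link>http://example.com/signal</link></item>
</channel></rss>"""

def items(feed_xml):
    """Return (title, link) for each <item> in an RSS 2.0 document."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in items(SAMPLE_FEED):
    print(title, "->", link)
```

That simplicity is a big part of the flexibility: because the format is just structured XML you subscribe to yourself, anyone can build a robust UI, a filter, or a "superfeed" on top of it.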

But RSS also has major problems – in particular, there’s no native monetization signal that goes with it. Yesterday’s post proves I have a lot of active RSS readers, but I don’t have a way to engage them with intelligent marketing (save running Pheedo or AdSense, which isn’t exactly what I’d call a high-end solution). I, like many others, pretty much gave up on RSS as a brand marketing vehicle a couple of years back. There was no way to “prove” folks were actually paying attention, and without that proof, marketers will only buy on the come – that’s why you see so many direct response ads running in RSS feeds.

It does seem that no one is really developing “for” RSS anymore.

Except, I am. At FM we use RSS in various robust fashions to pipe content from our network of hundreds (now thousands) of talented “social editors” into multitudes of marketing and content programs. (Check out FoodPress for just one example). We’ve even developed a product we call “superfeeds” that allows us to extend what is possible with RSS. In short, we’d be lost without RSS, and from the comments on my post, it seems a lot of other folks would be too, and in particular, folks who perform the critical role of “super node” or “social editor.”

So long live the social editor, and long live RSS. Perhaps it’s time to take another look at how we might find an appropriate monetization signal for the medium. I’m pretty sure that marketers would find conversing with a couple hundred thousand “super nodes” valuable – if only we could figure out a way to make that value work for all involved.

Hmmmm…..

Google's "Opinion" Sparks Interesting Dialog On Tying of Services to Search

By - December 02, 2010

Yesterday’s post on Google having an algorithmic “opinion” about which reviews were negative or positive sparked a thoughtful response from Matt Cutts, Google’s point person on search quality, and for me raised a larger question about Google’s past, present, and future.

In his initial comment (which is *his* opinion, not Google’s, I am sure), Cutts remarked:

“…the “opinion” in that sentence refers to the fact our web search results are protected speech in the First Amendment sense. Court cases in the U.S. (search for SearchKing or Kinderstart) have ruled that Google’s search results are opinion. This particular situation serves to demonstrate that fact: Google decided to write an algorithm to tackle the issue reported in the New York Times. We chose which signals to incorporate and how to blend them. Ultimately, although the results that emerge from that process are algorithmic, I would absolutely defend that they’re also our opinion as well, not some mathematically objective truth.”

While Matt is simply conversing on a blog post, the point he makes is not just a legal nit, it’s a core defense of Google’s entire business model. In two key court cases, Google has prevailed with a first amendment defense. Matt reviews these in his second comment:

“SearchKing sued Google and the resulting court case ruled that Google’s actions were protected under the first amendment. Later, KinderStart sued Google. You would think that the SearchKing case would cover the issue, but part of KinderStart’s argument was that Google talked about the mathematical aspects of PageRank in our website documentation. KinderStart not only lost that lawsuit, but KinderStart’s lawyer was sanctioned for making claims he couldn’t back up… After the KinderStart lawsuit, we went through our website documentation. Even though Google won the case, we tried to clarify where possible that although we employ algorithms in our rankings, ultimately we consider our search results to be our opinion.”

The key point, however, is made a bit later, and it’s worth highlighting:

“(the) courts have agreed … that there’s no universally agreed-upon way to rank search results in response to a query. Therefore, web rankings (even if generated by an algorithm) are an expression of that search engine’s particular philosophy.”

Matt reminded us that he’s made this point before, on Searchblog four years ago:

“When savvy people think about Google, they think about algorithms, and algorithms are an important part of Google. But algorithms aren’t magic; they don’t leap fully-formed from computers like Athena bursting from the head of Zeus. Algorithms are written by people. People have to decide the starting points and inputs to algorithms. And quite often, those inputs are based on human contributions in some way.”

Back then, Matt also took pains to point out that his words were his opinion, not Google’s.

So let me pivot from Matt’s opinion to mine. All of this is fraught, to my mind, with implications of the looming European investigation. The point of the European action, it seems to me, is to find a smoking gun that proves Google is using a “natural monopoly” in search to favor its own products over those of competitors.

Danny has pointed out the absurdity of such an investigation if the point is to prove Google favors its search results over the search results of competitors like Bing or others. But I think the case will turn on different products, or perhaps, a different definition of what constitutes “search results.” The question isn’t whether Google should show competitors’ standard search results, it’s whether Google favors its owned and operated services, such as those in local (Google Places instead of Foursquare, Facebook, etc.), commerce (Checkout instead of PayPal), video (YouTube instead of Hulu, etc.), content (Google Finance instead of Yahoo Finance or others, Blogger instead of WordPress, its bookstore over others, etc.), applications (Google Apps instead of MS Office), and on and on.

That is a very tricky question. After all, aren’t those “search results” also? As I wrote eons ago in my book, this most certainly is a philosophical question. Back in 2005, I compared Yahoo’s approach to search with Google’s:

Yahoo makes no pretense of objectivity – it is clearly steering searchers toward its own editorial services, which it believes can satisfy the intent of the search. … Apparent in that sentiment lies a key distinction between Google and Yahoo. Yahoo is far more willing to have overt editorial and commercial agendas, and to let humans intervene in search results so as to create media that supports those agendas. Google, on the other hand, is repelled by the idea of becoming a content- or editorially-driven company. While both companies can ostensibly lay claim to the mission of “organizing the world’s information and making it accessible” (though only Google actually claims that line as its mission), they approach the task with vastly different stances. Google sees the problem as one that can be solved mainly through technology – clever algorithms and sheer computational horsepower will prevail. Humans enter the search picture only when algorithms fail – and only then grudgingly. But Yahoo has always viewed the problem as one where human beings, with all their biases and brilliance, are integral to the solution.

I then predicted some conflict in the future:

But expect some tension over the next few years, in particular with regard to content. In late 2004, for example, Google announced they would be incorporating millions of library texts into its index, but made no announcements about the role the company might play in selling those texts. A month later, Google launched a video search service, but again, stayed mum on if and how it might participate in the sale of television shows and movies over the Internet.

Besides the obvious – I bet Google wishes it had gotten into content sales back in 2004, given the subsequent rise of iTunes – there’s still a massive tension here, between search services that the world believes to be “objective” and Google’s desire to compete given how the market it is in is evolving.

Not to belabor this, but here’s more from my book on this issue, which feels pertinent given the issues Google now faces, both in Europe and in the US with major content providers:

… for Google to put itself into the position of media middle man is a perilous gambit – in particular given that its corporate DNA eschews the almighty dollar as an arbiter of which content might rise to the top of the heap for a particular search. Playing middle man means that in the context of someone looking for a movie, Google will determine the most relevant result for terms like “slapstick comedy” or “romantic musical” or “jackie chan film.” For music, it means Google will determine what comes first for “usher,” but it also means it will have to determine what should come first when someone is looking for “hip hop.”

Who gets to be first in such a system? Who gets the traffic, the business, the profits? How do you determine, of all the possibilities, who wins and who loses? In the physical world, the answer is clear: whoever pays the most gets the positioning, whether it’s on the supermarket shelf or the bin-end of a record store. ….But Google, more likely than not, will attempt to come up with a clever technological solution that attempts to determine the most “objective” answer for any given term, be it “romantic comedy” or “hip hop.” Perhaps the ranking will be based on some mix of PageRank, download statistics, and Lord knows what else, but one thing will be certain: Google will never tell anyone how they came to the results they serve up. Which creates something of a Catch-22 when it comes to monetization. Will Hollywood really be willing to trust Google to distribute and sell their content absent the commercial world’s true ranking methodology: cold, hard cash?

…Search drives commerce, and commerce drives search. The two ends are meeting, inexorably, in the middle, and every major Internet player, from eBay to Microsoft, wants in. Google may be tops in search for now, but in time, being tops in search will certainly not be enough.

Clearly, as a new decade unfolds, search alone is not enough anymore, and my prediction that Google will protect itself with the shield of “objectivity” has been upended. But the question of how Google ties its massive lead in search to its new businesses in local, content, applications, and other major markets remains tricky, and at this point, quite unresolved.

Twitter's Great Big Problem Is Its Massive Opportunity

By - November 30, 2010

twitter-follow-achiever.jpg

One of the many reasons I find Twitter fascinating is that the company seems endlessly at an inflection point. Eighteen months ago I was tracking its inflection point in usage (holy shit, look how it’s growing! Then, holy shit, has it stopped?!), then its inflection in business model (hey, it doesn’t have one! Wait, yes it does, but can it scale?!), and more recently, its inflection point in terms of employees (as in growing from 80+ staff to 350+ in one year – necessitating a shift in management structure….).

Twitter now faces yet another inflection point – one I’ve been tracking for some time, and one that seems to be coming to a head. To me, that inflection has to do with usefulness – can the service corral all the goodness that exists in its network and figure out a way to make it useful to its hundreds of millions of users?

To me, this inflection point is perhaps its most challenging, and its greatest opportunity, because it encompasses all the others. If Twitter creates delightful instrumentations of the unique cacophony constantly crossing its servers, it wins big time. Users will never leave, marketers will never get enough, and employees will pine to join the movement (witness Facebook now, and Google five years ago).

Now, I’m not saying Twitter isn’t already a success. It is. The service has a dedicated core of millions who will never leave the service (I’m one of them). And I’m going to guess Twitter gets more resumes than it knows what to do with, so hiring isn’t the problem. And lastly, I’ve been told (by Ev, onstage at Web 2) that the company has more marketing demand than it can fulfill.

But therein lies the rub. Twitter has the potential to be much more, and everyone there knows it. It has millions of dedicated users, but it also has tens of millions who can’t quite figure out what the fuss is all about. And you can’t hire hundreds of engineers and product managers unless you have a job for them to do – a scaled platform that has, at its core, a product that everyone and their mother understands.

As for that last point – the surfeit of marketing demand – well that’s also a problem. Promoted tweets, trends, and accounts are a great start, but if you don’t have enough inventory to satisfy demand, you’ve not crossed the chasm from problem to opportunity.

In short, Twitter has a publishing problem. Put another way, it has a massive publishing opportunity.

Oh, I know, you’re saying “yeah Battelle, there you go again, thinking the whole world fits neatly into your favorite paradigm of publishing.”

Well yes, indeed, I do think that. To me, publishing is the art of determining what is worth paying attention to, by whom. And by that definition, Twitter most certainly is a publishing platform, one used by nearly 200 million folks.

The problem, of course, is that while Twitter makes it nearly effortless for folks to publish their own thoughts, it has done far too little to help those same folks glean value from the thoughts of others.

It was this simple truth that led FM to work with Microsoft to create ExecTweets, and AT&T to create the TitleTweets platform. It’s the same truth that led to the multi-pane interface of Tweetdeck as well as countless other Twitter apps, and it was with an eye toward addressing this problem that led to the introduction of Lists on Twitter.com and its associated APIs.

But while all those efforts are worthy, they haven’t yet solved the core problem or addressed the massive opportunity. At its core, publishing is about determining signal from noise. What’s extraordinary about Twitter is the complexity of that challenge – one man’s noise is another man’s signal, and vice versa. And what’s signal now may well be noise tomorrow – or two minutes from now. Multiply this by 200 million or so, then add an exponential element. Yow.

There is both art and science to addressing this challenge. What we broadly understand to be “the media” have approached the problem with a mostly one-to-many approach: We define an audience, determine what topics that audience will most likely want to pay attention to, then feed them our signal, one curated and culled from the noise of all possible information associated with those topics. Presto, we make Wired magazine, Oprah, or Serious Eats.

Facebook has done the same with information associated with our friend graph. The newsfeed is, for all intents and purposes, a publication created just for you. Sure, it has its drawbacks, but it’s pretty darn good (though its value is directly determined by the value you place in your Facebook friend graph. Mine, well, it don’t work so well, for reasons of my own doing).

So how might Twitter create such a publication for each of its users? As many have pointed out (including Twitter’s CEO Dick Costolo), Twitter isn’t a friend graph, it’s more of an interest graph, or put another way, an information graph – a massive set of points interconnected by contextual meaning. To the uninitiated, this graph is daunting.

Twitter’s current approach to navigating this graph centers around following human beings – at first with its “suggested users” list, which simply didn’t scale. Twitter soon replaced suggested users with “Who to follow” – a more sophisticated, algorithm-driven list of folks who seem to match your current set of followers and, to some extent, your interests. When you follow someone who’s a big foodie, for example, Twitter will suggest other folks who tweet about food. It does so, one presumes, by noting shared interests between users.

The question is, does Twitter infer those interests via the signal of who follows who, or does it do it by actually *understanding* what folks are tweeting about?

Therein, I might guess, lies the solution. The former is a proxy for a true interest graph – “Hey, follow these folks, because they seem to follow folks who are like the folks you already follow.” But the latter *is* an interest graph – “Hey, follow these folks, because they tweet about things you care about.”
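The distinction can be reduced to a toy sketch (every name, follow edge, and tweet below is invented): co-follow counting can only ever suggest people your friends already follow, while even a crude content model can surface someone entirely off your corner of the follow graph.

```python
from collections import Counter

# Toy data, invented for illustration: who follows whom, and what they tweet.
FOLLOWS = {
    "alice": {"bob", "carol"},
    "bob":   {"carol", "dave"},
    "carol": {"dave"},
    "dave":  set(),
    "erin":  {"dave"},
}
TWEETS = {
    "alice": ["sourdough starter tips", "best ramen in town"],
    "bob":   ["playoff recap", "trade rumors"],
    "carol": ["new bakery opened downtown"],
    "dave":  ["sourdough hydration math"],
    "erin":  ["ramen crawl tonight"],
}

def suggest_by_follows(user):
    """Proxy interest graph: rank people followed by those the user follows."""
    counts = Counter()
    for friend in FOLLOWS[user]:
        for candidate in FOLLOWS[friend] - FOLLOWS[user] - {user}:
            counts[candidate] += 1
    return [u for u, _ in counts.most_common()]

def interests(user):
    """Crude content model: the bag of words a user tweets."""
    return {w for t in TWEETS[user] for w in t.split()}

def suggest_by_content(user):
    """True interest graph: rank people whose tweets share the user's vocabulary."""
    mine = interests(user)
    scored = [(len(mine & interests(u)), u)
              for u in TWEETS if u != user and u not in FOLLOWS[user]]
    return [u for s, u in sorted(scored, reverse=True) if s > 0]
```

Here the follow-based method can only hand alice "dave" (the one person her friends follow that she doesn't), while the content method also finds "erin," who shares alice's ramen interest but has no follow-graph connection to her at all.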

From that logically comes the ability to filter folks’ streams based on interests, and once you can do that, well, things get really…interesting. You could follow interests, instead of people, for example. It’s like search meets social! And hey – isn’t that kind of the holy grail?

If Twitter can make the interest graph explicit to its users, and develop products and features which surface that graph in real time, it wins on all counts. That is a very big problem to solve, and a massive opportunity to run after.

For more on this, read Making Twitter an Information Network, by Mike Champion, and “The unbearable lameness of web 2.0″, by Kristian Köhntopp, as well as the wonderful but too short The Algorithm + the Crowd are Not Enough, by Rand Fishkin. These and many more have informed my ongoing thinking on this topic. What do you think?

All Brands Are Politicians

By - October 29, 2010

Obama-kissing-a-baby.jpg

Recently I was watching television with my wife, a baseball game if memory serves, when an advertisement caught my eye. It was for a regional restaurant chain (not a national one like Jack in the Box). The ad was pretty standard fare – a call to action (go now!) and a clear value proposition: the amazing amount of tasty-looking food you could have for a bargain price. I can’t find the ad online, but there’s no dearth of similar spots on television – in fact, their plenitude is why the commercial caught my attention in the first place.

In short, the ad offered pretty much all you could eat pasta, fries, and burgers for something like six bucks. Nearly all the food portrayed was processed, fried, and sourced from factory farms – necessarily so, as it’d simply not be possible to offer such a deal were it not for the economies of scale inherent in the US food economy. It’s simple capitalism at work: The chain is taking advantage of our nation’s subsidization of cheap calories to deliver what amounts to an extraordinary bargain to a consumer – all you can eat for less than an hour’s minimum wage!

It’s entirely predictable that such an offering would be in market. What’s not predictable, until recently, is how marketing such an offer might backfire in the coming age of marketing transparency and political unrest.

Allow me to try to explain.

As I watched the ad, and considered how many similar ones I see on a regular basis, I got to thinking about who the chain was targeting.

Certainly the chain wasn’t targeting me. I’m one of the so-called elites living in a bubble – I try to eat only organic foods, grown locally or sustainably if possible. I do this because I believe these foods are healthier for me and better for the world. I know I am in the extreme minority when it comes to my food – in the main because I can afford the prices they command. (And sure, I love hitting a burger joint every so often as a treat, but I also know that the act of considering fast food chains a “treat” is a privilege – I don’t have to rely on those outlets for my main source of sustenance.)

So no, that television ad was most certainly not targeted to me. I’d actually never even heard of the chain (nor had I seen its restaurants near where I live or travel). In short, that chain was wasting its marketing dollars on me, and most likely on a lot of other folks like me who happen to like watching baseball.

So what audience was that chain trying to reach? Experts in food marketing will tell you that the QSR industry is obsessed with reaching young men (and young men do watch baseball). But as I watched that ad, I started to think about another cohort that would clearly be influenced by the ads.

And that “target?” Intentionally or not (most likely not), it struck me that the advertisement would certainly appeal to our nation’s poor, as well as to those in our country who have eating issues, quite often the same folks, from what I read. One in seven people in the US are officially poor (and that bar is pretty damn low – $22K a year or less for a family of four). Nearly one in three are categorized as “obese.” And these two trends have become a seedbed for what are becoming the most politically sensitive issues of our generation: healthcare, wealth distribution, and energy policy. (The link between energy policy and food is expanded upon here).

Now, what happens when marketers like the all-you-can-eat chain, who like most marketers are not spending their money efficiently on TV, start buying data-driven audiences over highly efficient digital platforms? When and if we get to the nirvana that Google, Facebook, Yahoo, Microsoft, BlueKai, Cadreon, and countless others are pushing at the moment – a perfect world of matching marketers’ dollars to audience data and increased in-store foot traffic – we’ll be able to discern quite directly who a marketer is influencing.

And while that restaurant chain’s goal might be to influence young men, what happens if the digital advertising ecosystem proves directly that the folks who are responding are deeply affected by what has become a hot-potato national issue around food, energy, and health?

And what happens when digital activists reverse engineer that marketing data, and use it as political fodder for issue-based activism? As far as I am concerned, the question isn’t if this is going to happen, the question is when.

Wait a minute, you might protest (if you are a marketer). What about privacy!! Ah, there’s the rub. Today’s privacy conversation is all about the consumer, about protecting the consumer from obtrusive targeting, and informing that consumer how, when, and why he or she is being targeted.

But that same data, which I agree the consumer has a right to access, can be re-aggregated by intelligent services (or industrious journalists using willing consumer sources), and then interpreted in any number of ways. And don’t think it’s just anti-corporate lefties and green freaks who will be making noise – in my research for this article, I found tons of articles on Tea Party sites decrying federal food subsidies. In short, the data genie is out of the bottle, not just for consumers, but for marketers as well.

Get ready, marketers, to be judged in the public square on your previously private marketing practices – because within the ecosystem our industry is rapidly building, the data will out.

I’m not picking on the food industry here, rather I’m simply using it as a narrative example. Increasingly, a company’s marketing practices will become transparent to its customers, partners, competitors, and detractors. And how one practices that marketing will be judged in real time, in a political dialog that defines the value of that brand in the world.

This new reality will force brands to develop a point of view on major issues of the day – and that ain’t an easy thing for brands to do – at least not at present. I’ve written extensively about how brands must become publishers. I’ve now come to the conclusion that they must also become politicians as well. Brands will have to play to their base, cater to interest groups, and answer for their “votes” – how their marketing dollars are spent.

I’d wager that marketers who get in front of this trend and show leadership on the big issues will be huge winners. What do you think?


Identity and The Independent Web

By - October 21, 2010

social_contract.large.jpg

Are we evolving our contract with society through our increasing interactions with digital platforms, and in particular, through what we’ve come to call the web?

I believe the answer is yes. I’m fascinated with how our society’s new norms and mores are developing – as well as the architectural patterns which emerge as we build what, at first blush, feels like a rather chaotic jumble of companies, platforms, services, devices and behaviors.

Here’s one major architectural pattern I’ve noticed: the emergence of two distinct territories across the web landscape. One I’ll call the “Dependent Web,” the other is its converse: The “Independent Web.”

The Dependent Web is dominated by companies that deliver services, content and advertising based on who that service believes you to be: What you see on these sites “depends” on their proprietary model of your identity, including what you’ve done in the past, what you’re doing right now, what “cohorts” you might fall into based on third- or first-party data and algorithms, and any number of other robust signals.

The Independent Web, for the most part, does not shift its content or services based on who you are. However, in the past few years, a large group of these sites have begun to use Dependent Web algorithms and services to deliver advertising based on who you are.

A Shift In How The Web Works?

And therein lies the itch I’m looking to scratch: With Facebook’s push to export its version of the social graph across the Independent Web; Google’s efforts to personalize display via AdSense and Doubleclick; AOL, Yahoo and Demand building search-driven content farms, and the rise of data-driven ad exchanges and “Demand Side Platforms” to manage revenue for it all, it’s clear that we’re in the early phases of a major shift in the texture and experience of the web.

The dominant platforms of the US web – Facebook, Google, and increasingly Twitter- all have several things in common, but the one that comes first to my mind is their sophisticated ability to track your declarations of intent and interpret them in ways that execute, in the main, two things. First, they add value to your experience of that service. Google watches what you search, where you go, and what advertising you encounter, and at near the speed of light, it provides you an answer.

Facebook does the same, building a page each time you click, based on increasingly sophisticated data and algorithms. And Twitter is hard on its parents’ heels – to my mind, Twitter is the child of Google and Facebook, half search, half social. (Search’s rich uncle is the explosion of “user generated content” – what I like to call Conversational Media. Facebook’s rich uncle is clearly the mobile phone, and texting in particular. But I digress….as usual.)

Secondly, these services match their model of your identity to an extraordinary machinery of marketing dollars, serving up marketing in much the same way as the service itself. In short, the marketing is the message, and the message is the service. We knowingly go to Facebook or Google now much as we go to the mall or the public square – to see and be seen, to have our intent responded to, whether those wishes be commercial or public expression.

When we’re “on” Facebook, Google, or Twitter, we’re plugged into an infrastructure (in the case of the latter two, it may be a distributed infrastructure) that locks onto us, serving us content and commerce in an automated but increasingly sophisticated fashion. Sure, we navigate around, in control of our experience, but the fact is, the choices provided to us as we navigate are increasingly driven by algorithms modeled on the service’s understanding of our identity. We know this, and we’re cool with the deal – these services are extremely valuable to us. Of course, when we drop into a friend’s pictures of their kid’s Bar Mitzvah, we couldn’t care less about the algorithms. But once we’ve finished with those pictures, the fact that we’ve viewed them, the amount of time we spent viewing them, the connection we have to the person whose pictures they are, and any number of other data points are noted by Facebook, and added to the infinite secret sauce that predestines the next ad or newsfeed item the service might present to us.

Now I’m not against the idea of scale, or algorithmic suggestions – in particular those driven by a tight loop of my own actions, and those of my friends (in the case of Google, my “friends” are ghost cohorts, and therein lies Google’s social problem, but that’s grist for another post).

But there is another part of the web, one where I can stroll a bit more at my own pace, and discover new territory, rather than have territory matched to a presumed identity. And that is the land of the Independent Web.

What’s My Independent Identity?

What happens when the Independent Web starts leveraging the services of the Dependent Web? Do we gain, do we lose, or is it a push? We seem to be in the process of finding out. It’s clear that more than ads can be driven by the algorithms and services of the Dependent Web. Soon (in the case of Facebook Open Graph, real soon) Independent sites will be able to use Dependent Web infrastructure to determine what content and services they might offer to a visitor.

Imagine if nearly all sites used such services. As they stand today, I can’t imagine such a world would be very compelling. We have to do a lot more work on understanding concepts of identity and intent before we could instrument such services – and at present, nearly all that work is being done by companies with Dependent business models (this is not necessarily a bad thing, but it’s a thing). This skews the research, so to speak, and may well constrain the opportunity.

The opportunity is obvious, but worth stating: By leveraging a nuanced understanding of a visitor’s identity, every site or service on the web could deliver content, services, and/or advertising that is as relevant to us as the best search result is today. The site would read our identity and click path as our intent (thus creating the “query”), then match its content and service offerings to that intent, creating the “result.” Leveraging our identities, Independent Web sites could more perfectly instrument themselves to our tastes. Sites would feel less like impersonal mazes, and more like conversations.

But is that what we want? It depends on the model. In a Dependent Web model, the data and processes used to deliver results are opaque and out of the consumer’s control. What we see depends on how the site interprets pre-conceived models of identity it receives from a third party.

Consider how most display advertising works today. As we roam the web, we are tracked, tagged, and profiled by third parties. An increasingly sophisticated infrastructure is leveraged to place a high-probability advertising match in front of us. In this model, there is no declared intent (no “query”) – our presence and the identity model the system has made for us stands in for the query. Because there is no infrastructure in place for us to declare who we might want to be in the eyes of a particular site, the response to that query makes a ton of assumptions about who we are. Much more often than not, the results are weak, poor, or wasted.
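The mechanism described above can be reduced to a toy sketch (every topic weight, ad name, and number here is invented): there is no query, so an inferred profile stands in for intent, and the system simply serves whatever scores highest against that guess.

```python
# Toy sketch of query-less display-ad matching; all data invented.
# The inferred profile -- not any declared intent -- stands in for the "query."
PROFILE = {"sports": 0.7, "parenting": 0.1, "travel": 0.2}  # built by trackers, not by the user

ADS = {
    "mountain-bike": {"sports": 0.9, "travel": 0.3},
    "stroller":      {"parenting": 1.0},
    "cruise":        {"travel": 0.8},
}

def score(ad_weights, profile):
    """Dot product of an ad's topic weights against the inferred profile."""
    return sum(w * profile.get(topic, 0.0) for topic, w in ad_weights.items())

def pick_ad(profile):
    """Serve the highest-probability match -- with zero declared intent."""
    return max(ADS, key=lambda name: score(ADS[name], profile))
```

Every assumption baked into PROFILE propagates straight into the result: if the trackers guessed wrong about you, the "best" ad is still confidently wrong, which is exactly why so many impressions are weak, poor, or wasted.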

Can’t we do better?

For purposes of this post, I’m not going to wade into what many consider the threat of “our privacy being breached” as more and more personal data is added to our Dependent Web identity models (the ongoing debate about tracking and disclosure is robust, but not what I’m getting at here). Instead I see a threat to the overall value of our industry – if we continue to graft a Dependent Web model onto the architecture of the Independent Web, we most likely will fail to deliver the value that we all intuit is possible for the web. And that’s not good for anyone.

As consumers, we understand (for the most part) that when we are on Dependent sites, we’re going to get Dependent results. It’s part of a pretty obvious bargain. On Facebook, we’re Facebook users – that’s our identity in context of Facebook. But out on the Independent web, no such bargain has yet been struck. On Boing Boing, the Huffington Post, or Serious Eats, we’re someone else. The question is – who are we?

I Am What I Say I Am, For Now…

The interplay between Dependent and Independent services may set the table for a new kind of identity to emerge – one driven not by a model of interaction tracked by the Dependent Web per se, but rather by what each individual wishes to reveal about who they are, in real time. These revelations may be fleeting and situational – as they so often are in the real world. If I alight on a post about a cool new mountain bike, for example, I might choose to reveal that I’m a fan of the Blur XC, a bike made by the Santa Cruz company. But I don’t necessarily want that information to presumptively pass to the owner of that site until I read the post and consider the consequences of revealing that data.

NatChamps09_280209_0486.jpg

Let’s presume, for sake of argument, that the biking site has deeply integrated Facebook’s Open Graph (the way that, for example, Yelp or TripAdvisor does). When I show up, that site will know I’m a fan of Santa Cruz (let’s assume I have “fanned” the Blur XC on Facebook) and surface all sorts of articles and services related to that information. Somehow, that doesn’t feel right. It’s not that I don’t want the site to know that I like Blurs, it’s that I want to reveal myself to a new community on my own terms (and every media site is, at its heart, a community). In this example, the Dependent Web does the revealing for me. I’m not sure that’s a good thing for our industry. It runs counter to how people are wired to work in the real world.

Creating such a nuanced instrumentation of identity and how it might be conveyed across the Web seems a long way off, but I’m not so sure it is. It starts with taking control of your own identity in the Independent Web (for more on that, read A. Dash - from 8 years ago…). Who we believe we are in the world is pretty fundamental to being human, and as we bleed our actual identity into our digital one, it’s worth recalling that so far, at least, we don’t have a system that lets us really instrument who we are online in a fashion that scales to the complexity of true human interaction.  

Let’s take that last bike scenario and play it out in the “real world.” Instead of alighting on a post on some random web site I’ve stumbled across, let’s say I’m having a coffee at a local bakery, and I overhear a group of guys talking about a bike one of them recently purchased. I don’t know these guys, but I find their conversation (the equivalent of a “post”) engaging, and I lean in. The guys notice me listening, and given they’re talking in a public place, they don’t mind. They check me out, reading me, correctly, as a potential member of their tribe – I look like a biker (tribes can recognize potential members by sight pretty easily). At some point in the conversation – based on whether I feel the group would welcome the interjection, for example – I might decide to reveal that I’ve got a Blur XC. That might get a shrug from the leader of the conversation, or it might lead to a spirited debate about the merits of Santa Cruz bikes versus, say, Marin. That in turn may lead to an invitation to join them on a ride, and a true connection could well be made.

But until I engage, and offer new information, I’m just the dude at the next table who’s interested in what the folks next to me are talking about. In web parlance, I’m a lurker. As I lurk, I might realize the guys at the next table are sort of wankers, and I’m not interested in riding with them. I have the sense that this model of information sharing is, at its core, the way identity in what I’m calling “The Independent Web” should probably work. If, however, the Independent Web uses Facebook and/or Google services to determine what content to show me when I first alight on a site, the model will be quite different.

A Third Way – The Revealed Identity?

I sense an opportunity to create a new kind of social identity for us to leverage around the web, one that is far more personal and instrumented than a Facebook profile or a Google cookie. It’s an identity that is independent of the one we’ve cultivated on Dependent platforms, but not necessarily separate from them. We can choose to include our Dependent Web profiles, but we don’t have to. At the moment, the model seems pretty black and white. If I’m logged into Facebook and the site I visit is using Facebook’s services, that site knows more about me than probably most of my friends do.

In other words, perhaps it’s time for a Revealed Identity, as opposed to a Public or Dependent Identity. As human beings wandering this earth, we certainly have both. Why don’t we have the same online?

I think it’s worth defining a portion of the web as a place where one can visit and be part of a conversation without the data created by that conversation being presumptively sucked into a sophisticated response platform – whether that platform is Google, Blue Kai, Doubleclick, Twitter, or any other scaled web service. Now, I’m all for engaging with that platform, to be sure, but I’m also interested in the parts of society where one can wander about free of identity presumption, a place where one can choose to engage knowing that one is in control of how one’s identity is presented, and when it is revealed.

One thing I’m certain of: Who I am according to Google, or Facebook, or any number of other scaled Dependent Web services, is not necessarily who I want to be as I wander this new digital world. I want more instrumentation, more nuance, and more rights.

The question is, however, how to create that better service? Is it in the commercial interests of the dominant Dependent Web players to do so? Are there startups working on this right now who already have the answers?

I think how we manage these questions will define who we are at a very core level in the coming years. As Lessig has written, code becomes law. It took tens of thousands of years for homo sapiens to develop the elaborate social code which defines how we interact with each other in the real world. I’m fascinated with the question of how we translate that code online.

(I’m not certain where I’m going with this post, but as I said it’s an itch I wanted to scratch. I know I’ve not done the reading I should on the topic, and I’m hoping readers might point me in useful directions – further reading, people to meet, companies to watch – so I might get a bit smarter and refine my thinking. This is all, of course, pointing me toward the “next book” which, as books tend to do, is not exactly writing itself.)

The Mac As Just Another i-Screen in an iWorld. NO THANKS.

By - October 20, 2010

Today Apple announced a move that, on first blush, seems to push the Mac, its seminal and defining product, into the iWorld. You know, the world of Apple-controlled, closed, manicured gardens a la iPhone, iPod, iPad, and iTunes.

There’s going to be an “app store” for Macs, and elements of the iPad OS are going to be integrated into the next release of the Mac OS.

If anything, ever, will make me leave Mac for good (and the companies I’ve started have purchased literally thousands of them), it will be the integration of the Mac OS into Steve Jobs’ vision of where mobile is going.

I’ll have a lot more to say about this once I’m well and truly smart on the announcements. But given the trajectory of Apple, which is now driven far more by iWorld than by Mac, I’m not holding out much hope for the Mac continuing to be a computer in any real sense of the word. You know, where a computer means you have choices as to what apps you run on it, what apps get developed for it, and how you express yourself using it.

Feh.