John Battelle's Search Blog – Uncategorized Archives (Page 15 of 77)

Signal, Curation, Discovery

By - December 11, 2010


This past week I spent a fair amount of time in New York, meeting with smart folks who collectively have been responsible for funding and/or starting companies as varied as DoubleClick, Twitter, Foursquare, Tumblr, Federated Media (my team), and scores of others. I also met with some very smart execs at American Express, a company that has a history of innovation, in particular as it relates to working with startups in the Internet space.

I love talking with these folks, because while we might have business to discuss, we usually spend most of our time riffing about themes and ideas in our shared industry. By the time I reached Tumblr, a notion around “discovery” was crystallizing. It’s been rattling around my head for some time, so indulge me an effort to Think It Out Loud, if you would.

Since its inception, the web has presented us with a discovery problem. How do we find something we wish to pay attention to (or connect with)? In the beginning this problem applied to just web sites – “How do I find a site worth my time?” But as the web has evolved, the problem keeps emerging again – first with discrete pieces of content – “How do I find the answer to a question about….” – and then with people: “How do I find a particular person on the web?” And now we’ve started to combine all of these categories of discovery: “How do I find someone to follow who has smart things to say about my industry?” In short, over time, the problem has not gotten better, it’s gotten far more complicated. If all search had to do was categorize web content, I’d wager it’d be close to solved by now.

But I’m getting ahead of myself.

Our first solution to the web’s initial discovery problem was to curate websites into directories, with Yahoo being the most successful of the bunch. Yahoo became a crucial driver of the web’s first economic model: banner ads. It not only owned the largest share of banner sales, but it drove traffic to the lion’s share of second-party sites who also sold banner ads.

But directories had clumsy interfaces, and they didn't scale to the overwhelming growth in the number of websites. There were too many sites to catalog, and it was hard to determine the relative rank of one site against another, particularly in the context of what any one individual might find relevant. (This is notable, because where directories broke down was essentially their inflexibility in dealing with individuals' specific discovery needs. Directories failed at personalization, and because they were human-created, they failed to scale. Ironically, the first human-created discovery product failed to feel…human.)

Thus, while Yahoo remains to this day a major Internet company, its failure to keep up with the Internet’s discovery problem left an opening for a new startup, one that solved discovery for the web in a new way. That company, of course, was Google. By the end of the 1990s, five years into the commercial web, discovery was a mess. One major reason was that what we wanted to discover was shifting – from sites we might check out to content that addressed our specific needs.

Google exploited the human-created link as its cat-herding signal. While one might argue around the edges, what Google did was bring the web’s content to heel. Instead of using the site as the discrete unit of discovery, it used the page – a specific unit of content. (Its core algorithm, after all, was called PageRank – yes, named after co-founder Larry Page, but the entendre stuck because it was apt).
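The core idea of PageRank – a link is a vote, and votes from important pages count more – can be sketched in a few lines. This is a textbook simplification for illustration, not Google's actual implementation; the toy link graph and the 0.85 damping factor are assumptions of the sketch:

```python
# Simplified PageRank: a page's score is the chance a "random surfer"
# lands on it, following a random outbound link with probability d
# (the damping factor), or jumping to a random page otherwise.

def pagerank(links, d=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in pages}
        for page, outbound in links.items():
            if outbound:
                # A page splits its vote evenly among the pages it links to.
                share = rank[page] / len(outbound)
                for target in outbound:
                    new[target] += d * share
            else:
                # Dangling page with no outbound links: spread its rank evenly.
                for p in pages:
                    new[p] += d * rank[page] / n
        rank = new
    return rank

# A tiny toy web: two pages link to "a", so "a" should rank highest.
graph = {"a": ["b"], "b": ["a"], "c": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "a"
```

The point of the sketch is the signal, not the math: the only input is who links to whom, which is exactly the human-created curation Google harvested at scale.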

Google search not only revolutionized discovery, it created an entire ecosystem of economic value, one that continues to be the web’s most powerful (at least for now). As with the Yahoo era, Google became not only the web’s largest seller of advertising, it also became the largest referrer of traffic to other sites that sold advertising. Google proved the thesis that if you find a strong signal (the link), and curate it at scale (the search engine), you can become the most important company in the Internet economy. With both, of course, the true currency was human attention.

But once again, what we want to pay attention to is changing. Sure, we still want to find good sites (Yahoo’s original differentiation), and we want to find just the right content (Google’s original differentiation). But now we also want to find out “What’s Happening” and “Who’s Doing What”, as well as “Who Might I Connect With” in any number of ways.*

All of these questions are essentially human in nature, and that means the web has pivoted, as many have pointed out, from a site- and content-specific axis to a people-specific axis. Google’s great question is whether it can pivot with the web – hence all the industry speculation about Google’s social strategy, its sharing of data with Facebook (or not), and its ability to integrate social signal into its essentially HTML-driven search engine.

While this drama plays out, the web once again is becoming a mess when it comes to discovery, and once again new startups have sprung up, each providing new approaches to curate signal from the ever-increasing noise. They are, in order of founding, Facebook, Twitter, and Tumblr, and oddly enough, while each initially addressed an important discovery problem, they also in turn created a new one, in the process opening up yet another opportunity – one that subsequent (or previous) companies may well take advantage of.

Let me try to explain, starting with Facebook. When Facebook started, it was a revelation for most – a new way to discover not only what mattered on the web, but a way to connect with your friends and family, as well as discover new people you might find interesting or worthy of “friending.” Much as Google helped the web pivot from sites to content, Facebook became the axis for the web’s pivot to people. The “social graph” became an important curator of our overall web experience, and once again, a company embarked on the process of dominating the web: find a strong signal (the social graph), curate it at scale (the Facebook platform), and you may become the most important company in the Internet economy (the jury is out on Facebook overtaking Google for the crown, but I’d say deliberations are certainly keeping big G up at night).

But a funny thing has started to happen to Facebook – at least for me, and a lot of other folks as well. It’s getting to be a pretty noisy place. The problem is one, again, of scale: the more friends I have, the more noise there is, and the less valuable the service becomes. Not to mention the issue of instrumentation: Facebook is a great place for me to instrument my friend graph, but what about my interests, my professional life, and my various other contextual identities? Not to mention, Facebook wasn’t a very lively place to discover what’s up, at least not until the newsfeed was forced onto the home page.

Credit Twitter for that move. Twitter’s original differentiation was its ability to deliver a signal of “what’s happening”. Facebook quickly followed suit, but Twitter remains the strongest signal, in the main because of its asymmetrical approach to following, as opposed to symmetric friending. Twitter is yet another company that has the potential to be “the next Yahoo or Google” when it comes to signal, discovery, and curation, but it’s not there yet. Far too many folks find Twitter to be mostly noise and very little signal.

In its early years, things were even worse. When I first started using Twitter, I wrote quite a bit about Twitter’s discovery problem – it was near impossible to find the right folks to follow, and once you did, it was almost as difficult to curate value from the stream of tweets those people created.

Twitter's first answer to its discovery problem – the Suggested User List – was pretty much Yahoo 1994: a subjective, curated list of interesting tweeters. The company's second attempt, "Who To Follow," is a mashup of Google 2001 and Facebook 2007: an algorithm that looks at what content you consume and who you follow, then suggests folks to follow. I find this new iteration very useful, and have begun to follow a lot more folks because of it.
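The mechanics described above – look at who you already follow, then surface accounts that cluster near them – can be sketched as a simple friends-of-friends tally. To be clear, this is a generic illustration of that class of recommender, not Twitter's actual "Who To Follow" algorithm, and the usernames are made up:

```python
from collections import Counter

# Friends-of-friends suggestion: candidates followed by many of the
# people you already follow are probably relevant to you.

def suggest(follows, user, k=3):
    already = follows.get(user, set())
    votes = Counter()
    for friend in already:
        for candidate in follows.get(friend, set()):
            if candidate != user and candidate not in already:
                votes[candidate] += 1  # one vote per mutual connection
    return [c for c, _ in votes.most_common(k)]

follows = {
    "me":    {"alice", "bob"},
    "alice": {"bob", "carol", "dave"},
    "bob":   {"carol"},
}
print(suggest(follows, "me"))  # carol ranks first: two of my follows follow her
```

Note the structural rhyme with PageRank: both treat an existing human-created graph (links there, follows here) as the raw signal to curate.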

But now I have a new discovery problem: There’s simply too much content for me to grok. (For more on this, see Twitter’s Great Big Problem Is Its Massive Opportunity). Add in Facebook (people) and Google search (a proxy for everything on the web), and I’m overwhelmed by choices, all of them possibly good, but none of them ranked in a way that helps me determine which I should pay attention to, when, or why.

It’s 1999 all over again, and I’m not talking about a financing bubble. The ecosystem is ripe for another new player to emerge, and that’s one of the reasons I went to see the folks at Tumblr yesterday.

As I pointed out in Social Editors and Super Nodes – An Appreciation of RSS, Tumblr is growing like, well, Google in 2002, Facebook in 2006, or Twitter in 2008. The question I'd like answered is… why?

I'm just starting to play with the service, but I've got a thesis: Tumblr combines the best of self expression (Facebook and blogging platforms) with the best of curation (Twitter and RSS), and seems to have stumbled into a second-order social interest graph to boot (I'm still figuring out the social mores of Tumblr, but I am quite certain they exist). People who use Tumblr a lot tell me it "makes them feel smarter" about what matters on the web, because it unpacks all sorts of valuable pieces of content into one curated stream – a stream curated by people you find interesting. It's sort of a rich media Twitter, but the stuff folks are curating seems far more considered, because they are in a more advanced social relationship with their audience than folks are on Twitter. In a way, it feels like the early days of blogging crossed with the early days of Twitter – with a better CMS, a dash of social networking, and a twist. If that makes any sense at all.

Tumblr, in any case, has its drawbacks: It feels a bit like a walled garden, it doesn’t seem to play nice with the “rest of the web” yet, and – here’s the kicker – finding people to follow is utterly useless, at least in the beginning.

Just as with Twitter in the early days, it’s nearly impossible to find interesting people to follow on Tumblr, even if you know they’re there. For example, I knew that Fred Wilson, who I respect greatly, is a Tumblr user (and investor), so as soon as I joined the service, I typed his name into the search bar at the top of Tumblr’s “dashboard” home page. No results. That’s because that search bar only searches what’s on your page, not all of Tumblr itself. In short, Tumblr’s search is deeply broken, just like Twitter’s search was back in the day (and web search was before Google). I remember asking Evan Williams, in 2008, the best way to find someone on Twitter, and his response was “Google them, and add the word Twitter.” I’m pretty sure the same is true at present for Tumblr. (It’s how I found Fred, anyway).

Continuing the echoes of past approaches to the same problem, Tumblr currently provides a "suggested users"-style directory on its site, highlighting folks you might find interesting. I predict this will not be around for long – because it simply doesn't solve the problem we want it to solve. I want to find the right users for me to follow, not ones that the folks at Tumblr find interesting.

If Tumblr can iron out these early kinks, well, I'd warrant it will take its place in the pantheon of companies who have found a signal, curated it at scale, and solved yet another important discovery problem. The funny thing is, all of them are still in the game – even Yahoo, who I've spent quite a bit of time with over the past few months. I'm looking forward to continuing the conversation about how they approach the opportunity of discovery, and how each might push into new territories. Twitter, for example, seems clearly headed toward a Tumblr-like approach to content curation and discovery with its right hand pane. Google continues to try to solve for people discovery, and Facebook has yet to prove it can scale as a true content-discovery engine.

The folks at Google used to always say “search is a problem that is only five-percent solved.” I think now they might really mean “discovery is a problem that will always need to be solved.” Keep trying, folks. It gets more interesting by the day.

* I'm going to leave out the signals of commerce (What I want to buy) and location (Where I am now) for later ruminations. If you want my early framing thoughts, check out Database of Intentions Chart – Version 2, Updated for Commerce, The Gap Scenario, and My Location Is A Box of Cereal for starters.


Google, China, Wikileaks: The Actual Cable

By - December 08, 2010


When the Wikileaks story broke, I wrote a short piece chastising folks for blogging the assertion that one of the cables proves the Chinese government was behind the Google hacking that preceded Google's pulling out of the country. The cable relies on a single anonymous, second-hand source, and that doesn't pass the journalistic sniff test.

My colleague Matt McAlister at the Guardian has sent me the link to the entire cable, and while I stand by my original take on the story, it sure is intriguing to read. In fact, the details I find most interesting are the interactions alleged between Baidu and the Chinese government.

From the cable:

….

Another contact claimed a top PRC leader was actively working with Google competitor Baidu against Google.

….

Google’s recent move presented a major dilemma (maodun) for the Chinese government, not because of the cyber-security aspect but because of Google’s direct challenge to China’s legal restrictions on Internet content. The immediate strategy, XXXXXXXXXXXX said, seemed to be to appeal to Chinese nationalism by accusing Google and the U.S. government of working together to force China to accept “Western values” and undermine China’s rule of law. The problem the censors were facing, however, was that Google’s demand to deliver uncensored search results was very difficult to spin as an attack on China, and the entire episode had made Google more interesting and attractive to Chinese Internet users. All of a sudden, XXXXXXXXXXXX continued, Baidu looked like a boring state-owned enterprise while Google “seems very attractive, like the forbidden fruit.”

….

XXXXXXXXXXXX noted the pronounced disconnect between views of U.S. parent companies and local subsidiaries. PRC-based company officials often downplayed the extent of PRC government interference in their operations for fear of consequences for their local markets. Our contact emphasized that Google and other U.S. companies in China were struggling with the stated Chinese goal of technology transfer for the purpose of excluding foreign competition. This consultant noted the Chinese were exploiting the global economic downturn to enact increasingly draconian product certification and government procurement regulations to force foreign-invested enterprises (FIEs) to transfer intellectual property and to carve away the market share of foreign companies.

Introducing FM's Signal Conference Series

By -

I'm pleased to formally announce Federated Media's upcoming Signal Series – three full-day conferences in three great cities. Born from FM's annual Conversational Marketing Summit and my daily Signal newsletter, the Signal conference series focuses on one key topic in one city at a time. These three events will culminate in our annual CM Summit in New York next June during Internet Week.
We’ve nearly completed the program for the first event – Signal LA. The event is February 8th at the SLS Hotel (it’s quite nice!). The focus, as befits an event in LA, is content marketing, one of the more talked about trends in brand marketing today.  Our speaker line-up, as I hope you’ve come to expect, is stellar, and we’re really excited for what we’re sure will be an interesting, informative and impactful day. Please join us!

Confirmed speakers for Signal LA include:


Luke Beatty, VP & GM, Associated Content Yahoo!

Joanne Bradford, Chief Revenue Officer, Demand Media

Deanna Brown, COO, Federated Media

Chris Cunningham, CEO / founder, appssavvy

Arianna Huffington, Founder, Huffington Post


Peter Guber, Chairman & CEO, Mandalay Entertainment

Ann Lewnes, SVP Global Marketing, Adobe

Joel Lunenfeld, CEO, Moxie Interactive

Suzie Reider, Director of Sales and Marketing, YouTube


Rashmi Sinha, Founder, Slideshare

Biz Stone, Co-founder, Twitter

will.i.am, Founder, Dipdive and Black Eyed Peas

We’re still adding great speakers, so watch our site for more updates.

Signal is produced for senior decision makers in the Internet, media, and marketing businesses. If you’re involved in the digital media ecosystem, you belong at Signal. Sign up today.


Why Wouldn't Google Mirror Wikileaks?

By - December 07, 2010


Consider: Your mission is to "organize the world's information and make it universally accessible." You thumbed your nose at Wall Street, and you proved them wrong. You've stood up to the entire media industry by purchasing YouTube and defending fair use in the face of extraordinary pressure. You've done the same with the political and economic giant that is China*. And you're hanging the entirety of your defense against European monopoly charges on the premise of free speech.

So why not take a bold step, and stand with Wikileaks? The world’s largest Internet company taking a clear stand would be huge news, and it’d call the bloviating bluff of all the politicians acting out of fear of embarrassment, or worse. The Wikileaks story may well be, as pointed out by many, the most important and defining story of the Internet age.

It just might prove to be the smartest PR move Google ever made. (And it could, of course, prove to be the exact opposite). And it looks, so far, like rival Facebook is leaning toward supporting Wikileaks.

After all, tens of millions of Stieg Larsson readers can’t be wrong…and I’m guessing they all see the charges against Assange as driven by more than trumped up sex scandal or politically motivated condemnation. Honestly, Larsson himself could not have written a better potboiler than what’s unfolding before us.

Just thinking out loud. What do you think?

*(And hey, it turns out Wikileaks may have already done Google a solid when it comes to China…)

Reader Henrik Writes…

By -

Reader Henrik writes: Twitter is great for discovering new and interesting things, but for consistently good sources of information, nothing beats RSS.

What You've Missed In Signal: Incl. RSS Feed for all you RSS Readers Out There

By -


It’s been a while since I’ve updated you on my Signal newsletter, which I do each day. Here’s the last week or so of them. If you want to read it in RSS, here’s the feed:

http://feeds.feedburner.com/FederatedMediaSignalBlog

Tues. Signal: Does Your Media Have an Address?

Monday Signal: Clearly, It’s Not About The Money

Friday Signal: Nazis From Space!

Thurs. Signal: Go On, Opt Out. Just Don’t Come Cryin’ To Me …

Weds. Signal: The Journal Gives Marketing the Finger

Tuesday Signal: The iPad Is Yesterday’s News

Is This a Story?

By - December 06, 2010

Here's one for you, folks: A few folks "in the know" told me that a company is thinking about doing something with another company, but that second company has no idea about it, and in order for the whole thing to play out, a whole lot of things need to happen first, most of which are totally dependent on other things happening over which the sources have no control!

Great story, eh?

Well, that’s the entire content of this Reuters story about AOL, one that has gotten a lot of play (including a spot on tech leaderboard TechMeme).

This piece is yet another example of the kind of “journalism” that is increasingly gaining traction in the tech world – pageview-baiting single-sourced speculation, with very little reporting, tossed online for the hell of it to see what happens.

It’s lazy, it’s unsupportable, and it’s tiresome.

To me, it’s not even about the crass commercialist drive to be “first” or to drive the most pageviews. It’s about the other side of the story – the sources. Reuters, in particular, as a bastion of “traditional” journalistic mores, should know that the “source” who gave this reporter the story has his or her own agenda. More likely than not, that agenda is to lend credibility to the idea that AOL and Yahoo should merge. It’s a huge disservice to the craft of journalism to let your obligations to your readers be so easily manipulated.

I miss the Industry Standard sometimes.

Social Editors and Super Nodes – An Appreciation of RSS

By - December 03, 2010

Yesterday I posted what was pretty much an offhand question – Is RSS Dead? I had been working on the FM Signal, a roundup of the day's news I post over at the FM Blog. A big part of editing that daily roundup is spent staring into my RSS reader, which culls about 100 or so feeds for me.

I realized I’ve been staring into an RSS reader for the better part of a decade now, and I recalled the various posts I’d recently seen (yes, via my RSS reader) about the death of RSS. Like this one, and this one, and even this one, from way back in 2006. All claimed RSS was over, and, for the most part, that Twitter killed it.

I wondered to myself – am I a dinosaur? I looked at Searchblog’s RSS counter, which has been steadily growing month after month, and realized it was well over 200,000 (yesterday it added 4K folks, from 207K to 211K). Are those folks all zombies or spam robots? I mean, why is it growing? Is the RSS-reading audience really out there?

So I asked. And man, did my RSS readers respond. More than 100 comments in less than a day – the second most I’ve ever gotten in that period of time, I think. And that’s from RSS readers – so they had to click out of their comfy reader environment, come over to the boring HTML web version of my site, pass the captcha/spam test, put in their email, and then write a comment. In short, they had to jump through a lot of hoops to let me know they were there. Hell, Scoble – a “super node” if ever there was one – even chimed in.

I’ve concluded that each comment someone takes the time to leave serves as a proxy for 100 or so folks who probably echo that sentiment, but don’t take the time to leave a missive. It’s my rough guess, but I think it’s in the ballpark, based on years of watching traffic flows and comment levels on my posts. So 100 comments in 24 hours equates to a major response on this small little site, and it’s worth contemplating the feedback.

One comment that stood out for me came from Ged Carroll, who wrote:

Many people are happy to graze Twitter, but the 'super nodes' that are the 'social editors' need a much more robust way to get content: RSS. If you like, RSS is the weapon of choice for the content apex predator, rather than the content herbivores.

A “content apex predator”! Interesting use of metaphor – but I think Ged is onto something here. At Federated, we’ve made a business of aligning ourselves with content creators who have proved themselves capable of convening an engaged and influential audience. That’s the heart of publishing – creating a community of readers/viewers/users who like what you have to say or the service you offer.

And while more and more folks are creating content of value on the web, that doesn’t mean they are all “publishers” in the sense of being professionals who make their living that way. Ged’s comment made me think of Gladwell’s “connectors” – there certainly is a class of folks on the web who derive and create value by processing, digesting, considering and publishing content, and not all of them are professionals in media (in fact, most of them aren’t).

In my post I posited that perhaps RSS was receding into a "Betamax" phase, where only "professionals" in my industry (media) would be users of it. I think I got that wrong, at least in spirit. There is most definitely an RSS cohort of sorts, but it's not one of "media professionals." Instead, I think "social editors" or "super nodes" is more spot on. These are the folks who feel compelled to consume a lot of ideas (mainly through the written word), process those ideas, and then create value by responding or annotating those ideas. They derive social status and value by doing so – we reward people who provide these services with our attention and appreciation. They have more Twitter followers than the average bear. They probably have a blog (like Ged does). And they're most likely the same folks who are driving the phenomenal growth of Tumblr.

Social editors who convene the largest audiences can actually go into business doing what they love – that’s the premise of FM’s initial business model.

But there are orders of magnitude more folks who do this well, but who may not want to do it full time as a business, or who are content with the influence of an audience in the hundreds or thousands, as opposed to the hundreds of thousands or millions, like many FM sites.

I’m learning a lot about this cohort via FM’s recent acquisition of BigTent and Foodbuzz – both of these businesses have successfully created platforms where influential social editors thrive.

RSS is most certainly not dead, but as many commentators noted, it may evolve quite a bit in the coming years. It has so much going for it – it’s asynchronous, it’s flexible, it’s entirely subjective (in that you pick your feeds yourself, as opposed to how Facebook works), it allows for robust UIs to be built around it. It’s a fundamentally “Independent Web” technology.
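Those virtues – you subscribe to exactly the feeds you choose, and any client can be built on top of the format – follow from RSS being plain, well-specified XML. As a rough sketch, here's a minimal reader using only Python's standard library, run against an inline sample feed (the feed contents and URLs are made up for illustration):

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed, inlined so the sketch needs no network access.
SAMPLE = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Searchblog</title>
    <item><title>Is RSS Really Dead?</title><link>http://example.com/rss-dead</link></item>
    <item><title>Signal, Curation, Discovery</title><link>http://example.com/signal</link></item>
  </channel>
</rss>"""

def read_feed(xml_text):
    """Return (channel_title, [(item_title, item_link), ...]) from RSS 2.0 XML."""
    channel = ET.fromstring(xml_text).find("channel")
    items = [(item.findtext("title"), item.findtext("link"))
             for item in channel.findall("item")]
    return channel.findtext("title"), items

title, items = read_feed(SAMPLE)
print(title)
for item_title, link in items:
    print("-", item_title, link)
```

A real reader would fetch each subscribed URL on a schedule and deduplicate items, but the core is just this: an open format anyone can parse, with no platform sitting in the middle.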

But RSS also has major problems; in particular, there's no native monetization signal that goes with it. Yesterday's post proves I have a lot of active RSS readers, but I don't have a way to engage them with intelligent marketing (save running Pheedo or Adsense, which isn't exactly what I'd call a high-end solution). I, like many others, pretty much gave up on RSS as a brand marketing vehicle a couple of years back. There was no way to "prove" folks were actually paying attention, and without that proof, marketers will only buy on the come – that's why you see so many direct response ads running in RSS feeds.

It does seem that no one is really developing “for” RSS anymore.

Except, I am. At FM we use RSS in various robust fashions to pipe content from our network of hundreds (now thousands) of talented “social editors” into multitudes of marketing and content programs. (Check out FoodPress for just one example). We’ve even developed a product we call “superfeeds” that allows us to extend what is possible with RSS. In short, we’d be lost without RSS, and from the comments on my post, it seems a lot of other folks would be too, and in particular, folks who perform the critical role of “super node” or “social editor.”

So long live the social editor, and long live RSS. Perhaps it's time to take another look at how we might find an appropriate monetization signal for the medium. I'm pretty sure that marketers would find conversing with a couple hundred thousand "super nodes" valuable – if only we could figure a way to make that value work for all involved.

Hmmmm…..

Is RSS Really Dead?

By - December 02, 2010

I'm usually the last guy to know, and the first to admit it, but is RSS really dead? I keep seeing posts claiming Twitter and Facebook have essentially replaced RSS as the way folks filter their news these days, but I for one am still addicted to my RSS client (it's Shrook, for anyone who still cares).

Perhaps RSS isn’t dead, but instead it’s professionalizing. It’s the Beta to the VHS of Twitter. Higher quality, better signal, but more expensive in terms of time, and used only by folks “in the industry.”

I write, every single day (especially with Signal), and I consume a lot of feeds in order to do that. I need a professional tool that lets me do that efficiently, and so far nothing beats an RSS reader. But I'm serious about my feeds, and most folks, I guess, aren't.

Or are you? I mean, sure, Feedburner is languishing over at Google, I hear, but hell, I have 207,000 readers consuming my feed – at least, that's what Google tells me. And that's up from about 170K earlier this year. Are you out there, RSS readers? Or am I blasting XML into a ghost town?

Just wonderin. Shout out if you’re here, guys. And shout out if you’re reading this because someone pointed to it on Twitter….

Google's "Opinion" Sparks Interesting Dialog On Tying of Services to Search

By -

Yesterday's post on Google having an algorithmic "opinion" about which reviews were negative or positive sparked a thoughtful response from Matt Cutts, Google's point person on search quality, and for me raised a larger question about Google's past, present, and future.

In his initial comment (which is *his* opinion, not Google’s, I am sure), Cutts remarked:

“…the “opinion” in that sentence refers to the fact our web search results are protected speech in the First Amendment sense. Court cases in the U.S. (search for SearchKing or Kinderstart) have ruled that Google’s search results are opinion. This particular situation serves to demonstrate that fact: Google decided to write an algorithm to tackle the issue reported in the New York Times. We chose which signals to incorporate and how to blend them. Ultimately, although the results that emerge from that process are algorithmic, I would absolutely defend that they’re also our opinion as well, not some mathematically objective truth.”

While Matt is simply conversing on a blog post, the point he makes is not just a legal nit, it’s a core defense of Google’s entire business model. In two key court cases, Google has prevailed with a first amendment defense. Matt reviews these in his second comment:

“SearchKing sued Google and the resulting court case ruled that Google’s actions were protected under the first amendment. Later, KinderStart sued Google. You would think that the SearchKing case would cover the issue, but part of KinderStart’s argument was that Google talked about the mathematical aspects of PageRank in our website documentation. KinderStart not only lost that lawsuit, but KinderStart’s lawyer was sanctioned for making claims he couldn’t back up… After the KinderStart lawsuit, we went through our website documentation. Even though Google won the case, we tried to clarify where possible that although we employ algorithms in our rankings, ultimately we consider our search results to be our opinion.”

The key point, however, is made a bit later, and it’s worth highlighting:

"(the) courts have agreed … that there's no universally agreed-upon way to rank search results in response to a query. Therefore, web rankings (even if generated by an algorithm) are an expression of that search engine's particular philosophy."

Matt reminded us that he’s made this point before, on Searchblog four years ago:

“When savvy people think about Google, they think about algorithms, and algorithms are an important part of Google. But algorithms aren’t magic; they don’t leap fully-formed from computers like Athena bursting from the head of Zeus. Algorithms are written by people. People have to decide the starting points and inputs to algorithms. And quite often, those inputs are based on human contributions in some way.”

Back then, Matt also took pains to point out that his words were his opinion, not Google’s.

So let me pivot from Matt’s opinion to mine. All of this is fraught, to my mind, with implications for the looming European investigation. The point of the European action, it seems to me, is to find a smoking gun that proves Google is using a “natural monopoly” in search to favor its own products over those of competitors.

Danny has pointed out the absurdity of such an investigation if the point is to prove Google favors its search results over the search results of competitors like Bing or others. But I think the case will turn on different products, or perhaps a different definition of what constitutes “search results.” The question isn’t whether Google should show competitors’ standard search results; it’s whether Google favors its owned and operated services, such as those in local (Google Places instead of Foursquare, Facebook, etc.), commerce (Checkout instead of PayPal), video (YouTube instead of Hulu, etc.), content (Google Finance instead of Yahoo Finance or others, Blogger instead of WordPress, its bookstore over others, etc.), applications (Google Apps instead of MS Office), and on and on.

That is a very tricky question. After all, aren’t those “search results” also? As I wrote eons ago in my book, this most certainly is a philosophical question. Back in 2005, I compared Yahoo’s approach to search with Google’s:

Yahoo makes no pretense of objectivity – it is clearly steering searchers toward its own editorial services, which it believes can satisfy the intent of the search. … Apparent in that sentiment lies a key distinction between Google and Yahoo. Yahoo is far more willing to have overt editorial and commercial agendas, and to let humans intervene in search results so as to create media that supports those agendas. Google, on the other hand, is repelled by the idea of becoming a content- or editorially-driven company. While both companies can ostensibly lay claim to the mission of “organizing the world’s information and making it accessible” (though only Google actually claims that line as its mission), they approach the task with vastly different stances. Google sees the problem as one that can be solved mainly through technology – clever algorithms and sheer computational horsepower will prevail. Humans enter the search picture only when algorithms fail – and only then grudgingly. But Yahoo has always viewed the problem as one where human beings, with all their biases and brilliance, are integral to the solution.

I then predicted some conflict in the future:

But expect some tension over the next few years, in particular with regard to content. In late 2004, for example, Google announced they would be incorporating millions of library texts into its index, but made no announcements about the role the company might play in selling those texts. A month later, Google launched a video search service, but again, stayed mum on if and how it might participate in the sale of television shows and movies over the Internet.

Besides the obvious – I bet Google wishes it had gotten into content sales back in 2004, given the subsequent rise of iTunes – there’s still a massive tension here, between search services that the world believes to be “objective” and Google’s desire to compete given how its market is evolving.

Not to belabor this, but here’s more from my book on this issue, which feels pertinent given the issues Google now faces, both in Europe and in the US with major content providers:

… for Google to put itself into the position of media middle man is a perilous gambit – in particular given that its corporate DNA eschews the almighty dollar as an arbiter of which content might rise to the top of the heap for a particular search. Playing middle man means that in the context of someone looking for a movie, Google will determine the most relevant result for terms like “slapstick comedy” or “romantic musical” or “jackie chan film.” For music, it means Google will determine what comes first for “usher,” but it also means it will have to determine what should come first when someone is looking for “hip hop.”

Who gets to be first in such a system? Who gets the traffic, the business, the profits? How do you determine, of all the possibilities, who wins and who loses? In the physical world, the answer is clear: whoever pays the most gets the positioning, whether it’s on the supermarket shelf or the bin-end of a record store. … But Google, more likely than not, will attempt to come up with a clever technological solution that attempts to determine the most “objective” answer for any given term, be it “romantic comedy” or “hip hop.” Perhaps the ranking will be based on some mix of PageRank, download statistics, and Lord knows what else, but one thing will be certain: Google will never tell anyone how they came to the results they serve up. Which creates something of a Catch-22 when it comes to monetization. Will Hollywood really be willing to trust Google to distribute and sell their content absent the commercial world’s true ranking methodology: cold, hard cash?

…Search drives commerce, and commerce drives search. The two ends are meeting, inexorably, in the middle, and every major Internet player, from eBay to Microsoft, wants in. Google may be tops in search for now, but in time, being tops in search will certainly not be enough.
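To make concrete why even a “clever technological solution” is still an opinion, here’s a toy sketch of a blended ranking function – the sort of “mix of PageRank, download statistics, and Lord knows what else” imagined above. Every signal name and weight here is hypothetical, not anything Google has disclosed; the point, echoing Matt, is that a person picked those numbers, so the resulting ranking is an editorial choice:

```python
# Toy sketch of a blended ranking score. All signals and weights are
# hypothetical -- the point is that a human chose them, so even a fully
# "algorithmic" ranking encodes an opinion.

# Hand-picked weights: someone, somewhere, decided these numbers.
WEIGHTS = {"pagerank": 0.6, "downloads": 0.3, "freshness": 0.1}

def blended_score(signals: dict) -> float:
    """Combine an item's per-signal values (each normalized 0..1) into one score."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

catalog = {
    "item_a": {"pagerank": 0.9, "downloads": 0.2, "freshness": 0.5},
    "item_b": {"pagerank": 0.4, "downloads": 0.9, "freshness": 0.9},
}

# Rank items by blended score, highest first.
ranking = sorted(catalog, key=lambda k: blended_score(catalog[k]), reverse=True)
print(ranking)  # → ['item_a', 'item_b']
```

Nudge the weights and the order flips – which is exactly why “who gets to be first” can never be a mathematically objective truth.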

Clearly, as a new decade unfolds, search alone is not enough anymore, and my prediction that Google will protect itself with the shield of “objectivity” has been upended. But the question of how Google ties its massive lead in search to its new businesses in local, content, applications, and other major markets remains tricky, and at this point, quite unresolved.