It’s been a while since I’ve updated you on my Signal newsletter, which I do each day. Here’s the last week or so of them. If you want to read it in RSS, here’s the feed:
Here’s one for you, folks: A few folks “in the know” told me that a company is thinking about doing something with another company, but that second company has no idea about it, and in order for the whole thing to play out, a whole lot of things need to happen first, most of which are totally dependent on other things happening over which the sources have no control!
Great story, eh?
This piece is yet another example of the kind of “journalism” that is increasingly gaining traction in the tech world – pageview-baiting single-sourced speculation, with very little reporting, tossed online for the hell of it to see what happens.
It’s lazy, it’s unsupportable, and it’s tiresome.
To me, it’s not even about the crass commercialist drive to be “first” or to drive the most pageviews. It’s about the other side of the story – the sources. Reuters, in particular, as a bastion of “traditional” journalistic mores, should know that the “source” who gave this reporter the story has his or her own agenda. More likely than not, that agenda is to lend credibility to the idea that AOL and Yahoo should merge. It’s a huge disservice to the craft of journalism to let your obligations to your readers be so easily manipulated.
I miss the Industry Standard sometimes.
Yesterday I posted what was pretty much an offhand question – Is RSS Dead? I had been working on the FM Signal, a roundup of the day’s news I post over at the FM Blog. A big part of editing that daily roundup is spent staring into my RSS reader, which culls about 100 or so feeds for me.
I realized I’ve been staring into an RSS reader for the better part of a decade now, and I recalled the various posts I’d recently seen (yes, via my RSS reader) about the death of RSS. Like this one, and this one, and even this one, from way back in 2006. All claimed RSS was over, and, for the most part, that Twitter killed it.
I wondered to myself – am I a dinosaur? I looked at Searchblog’s RSS counter, which has been steadily growing month after month, and realized it was well over 200,000 (yesterday it added 4K folks, from 207K to 211K). Are those folks all zombies or spam robots? I mean, why is it growing? Is the RSS-reading audience really out there?
So I asked. And man, did my RSS readers respond. More than 100 comments in less than a day – the second most I’ve ever gotten in that period of time, I think. And that’s from RSS readers – so they had to click out of their comfy reader environment, come over to the boring HTML web version of my site, pass the captcha/spam test, put in their email, and then write a comment. In short, they had to jump through a lot of hoops to let me know they were there. Hell, Scoble – a “super node” if ever there was one – even chimed in.
I’ve concluded that each comment someone takes the time to leave serves as a proxy for 100 or so folks who probably echo that sentiment, but don’t take the time to leave a missive. It’s my rough guess, but I think it’s in the ballpark, based on years of watching traffic flows and comment levels on my posts. So 100 comments in 24 hours equates to a major response on this small little site, and it’s worth contemplating the feedback.
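The proxy math above can be sketched in a couple of lines. To be clear, the 100:1 ratio is my own rough guess, not a measured figure:

```python
# Back-of-the-envelope audience math: treat each comment as a proxy
# for ~100 silent readers who share the sentiment. The 100:1 ratio
# is an assumption, not a measured figure.
def estimate_engaged_readers(comments, readers_per_comment=100):
    return comments * readers_per_comment

engaged = estimate_engaged_readers(100)  # ~100 comments in 24 hours
print(engaged)                           # 10000
```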
One comment that stood out for me came from Ged Carroll, who wrote:
Many people are happy to graze Twitter, but the ‘super nodes’ that are the ‘social editors’ need a much more robust way to get content: RSS. If you like, RSS is the weapon of choice for the content apex predator, rather than the content herbivores.
A “content apex predator”! Interesting use of metaphor – but I think Ged is onto something here. At Federated, we’ve made a business of aligning ourselves with content creators who have proved themselves capable of convening an engaged and influential audience. That’s the heart of publishing – creating a community of readers/viewers/users who like what you have to say or the service you offer.
And while more and more folks are creating content of value on the web, that doesn’t mean they are all “publishers” in the sense of being professionals who make their living that way. Ged’s comment made me think of Gladwell’s “connectors” – there certainly is a class of folks on the web who derive and create value by processing, digesting, considering and publishing content, and not all of them are professionals in media (in fact, most of them aren’t).
In my post I posited that perhaps RSS was receding into a “Betamax” phase, where only “professionals” in my industry (media) would be users of it. I think I got that wrong, at least in spirit. There is most definitely an RSS cohort of sorts, but it’s not one of “media professionals.” Instead, I think “social editors” or “super nodes” is more spot on. These are the folks who feel compelled to consume a lot of ideas (mainly through the written word), process those ideas, and then create value by responding or annotating those ideas. They derive social status and value by doing so – we reward people who provide these services with our attention and appreciation. They have more Twitter followers than the average bear. They probably have a blog (like Ged does). And they’re most likely the same folks who are driving the phenomenal growth of Tumblr.
Social editors who convene the largest audiences can actually go into business doing what they love – that’s the premise of FM’s initial business model.
But there are orders of magnitude more folks who do this well, but may not want to do it full time as a business, or who are content with the influence of an audience in the hundreds or thousands, as opposed to hundreds of thousands or millions, like many FM sites.
RSS is most certainly not dead, but as many commentators noted, it may evolve quite a bit in the coming years. It has so much going for it – it’s asynchronous, it’s flexible, it’s entirely subjective (in that you pick your feeds yourself, as opposed to how Facebook works), it allows for robust UIs to be built around it. It’s a fundamentally “Independent Web” technology.
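Part of what RSS has going for it is how simple the format is to consume – any robust UI starts with a few lines of parsing. Here’s a minimal sketch of pulling headlines out of an RSS 2.0 document with Python’s standard library (the sample feed and its titles are illustrative, not a real feed):

```python
# Minimal RSS 2.0 parsing with the Python standard library.
# SAMPLE_FEED stands in for a fetched feed document; the item
# titles are illustrative.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Searchblog</title>
    <item><title>Is RSS Dead?</title><link>http://example.com/1</link></item>
    <item><title>Long Live the Social Editor</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def headlines(feed_xml, limit=5):
    # RSS 2.0 structure: <rss> > <channel> > <item> > <title>
    root = ET.fromstring(feed_xml)
    items = root.findall("./channel/item")
    return [item.findtext("title") for item in items[:limit]]

print(headlines(SAMPLE_FEED))  # ['Is RSS Dead?', 'Long Live the Social Editor']
```

That simplicity – a plain, open XML format anyone can fetch and parse – is exactly what makes it an “Independent Web” technology.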
But RSS also has major problems – in particular, there’s no native monetization signal that goes with it. Yesterday’s post proves I have a lot of active RSS readers, but I don’t have a way to engage them with intelligent marketing (save running Pheedo or Adsense, which isn’t exactly what I’d call a high-end solution). I, like many others, pretty much gave up on RSS as a brand marketing vehicle a couple of years back. There was no way to “prove” folks were actually paying attention, and without that proof, marketers will only buy on the come – that’s why you see so many direct response ads running in RSS feeds.
It does seem that no one is really developing “for” RSS anymore.
Except, I am. At FM we use RSS in various robust fashions to pipe content from our network of hundreds (now thousands) of talented “social editors” into multitudes of marketing and content programs. (Check out FoodPress for just one example). We’ve even developed a product we call “superfeeds” that allows us to extend what is possible with RSS. In short, we’d be lost without RSS, and from the comments on my post, it seems a lot of other folks would be too, and in particular, folks who perform the critical role of “super node” or “social editor.”
So long live the social editor, and long live RSS. Perhaps it’s time to take another look at how we might find an appropriate monetization signal for the medium. I’m pretty sure that marketers would find conversing with a couple hundred thousand “super nodes” valuable – if only we could figure a way to make that value work for all involved.
I’m usually the last guy to know, and the first to admit it, but is RSS really dead? I keep seeing posts claiming Twitter and Facebook have essentially replaced RSS as the way folks filter their news these days, but I for one am still addicted to my RSS client (it’s Shrook, for anyone who still cares).
Perhaps RSS isn’t dead, but instead it’s professionalizing. It’s the Beta to the VHS of Twitter. Higher quality, better signal, but more expensive in terms of time, and used only by folks “in the industry.”
I write, every single day (especially with Signal), and I consume a lot of feeds in order to do that. I need a professional tool that lets me do that efficiently, and so far nothing beats an RSS reader. But I’m serious about my feeds, and most folks, I guess, aren’t.
Or are you? I mean, sure, Feedburner is languishing over at Google, I hear, but hell, I have 207,000 readers consuming my feed – at least, that’s what Google tells me. And that’s up from about 170K earlier this year. Are you out there, RSS readers? Or am I blasting XML into a ghost town?
Just wonderin. Shout out if you’re here, guys. And shout out if you’re reading this because someone pointed to it on Twitter….
Yesterday’s post on Google having an algorithmic “opinion” about which reviews were negative or positive sparked a thoughtful response from Matt Cutts, Google’s point person on search quality, and for me raised a larger question about Google’s past, present, and future.
In his initial comment (which is *his* opinion, not Google’s, I am sure), Cutts remarked:
“…the “opinion” in that sentence refers to the fact our web search results are protected speech in the First Amendment sense. Court cases in the U.S. (search for SearchKing or Kinderstart) have ruled that Google’s search results are opinion. This particular situation serves to demonstrate that fact: Google decided to write an algorithm to tackle the issue reported in the New York Times. We chose which signals to incorporate and how to blend them. Ultimately, although the results that emerge from that process are algorithmic, I would absolutely defend that they’re also our opinion as well, not some mathematically objective truth.”
While Matt is simply conversing on a blog post, the point he makes is not just a legal nit, it’s a core defense of Google’s entire business model. In two key court cases, Google has prevailed with a first amendment defense. Matt reviews these in his second comment:
“SearchKing sued Google and the resulting court case ruled that Google’s actions were protected under the first amendment. Later, KinderStart sued Google. You would think that the SearchKing case would cover the issue, but part of KinderStart’s argument was that Google talked about the mathematical aspects of PageRank in our website documentation. KinderStart not only lost that lawsuit, but KinderStart’s lawyer was sanctioned for making claims he couldn’t back up… After the KinderStart lawsuit, we went through our website documentation. Even though Google won the case, we tried to clarify where possible that although we employ algorithms in our rankings, ultimately we consider our search results to be our opinion.”
The key point, however, is made a bit later, and it’s worth highlighting:
“(the) courts have agreed … that there’s no universally agreed-upon way to rank search results in response to a query. Therefore, web rankings (even if generated by an algorithm) are an expression of that search engine’s particular philosophy.”
Matt reminded us that he’s made this point before, on Searchblog four years ago:
“When savvy people think about Google, they think about algorithms, and algorithms are an important part of Google. But algorithms aren’t magic; they don’t leap fully-formed from computers like Athena bursting from the head of Zeus. Algorithms are written by people. People have to decide the starting points and inputs to algorithms. And quite often, those inputs are based on human contributions in some way.”
Back then, Matt also took pains to point out that his words were his opinion, not Google’s.
So let me pivot from Matt’s opinion to mine. All of this is fraught, to my mind, with implications of the looming European investigation. The point of the European action, it seems to me, is to find a smoking gun that proves Google is using a “natural monopoly” in search to favor its own products over those of competitors.
Danny has pointed out the absurdity of such an investigation if the point is to prove Google favors its search results over the search results of competitors like Bing or others. But I think the case will turn on different products, or perhaps, a different definition of what constitutes “search results.” The question isn’t whether Google should show competitors standard search results, it’s whether Google favors its owned and operated services, such as those in local (Google Places instead of Foursquare, Facebook etc), commerce (Checkout instead of Paypal), video (YouTube instead of Hulu etc.), content (Google Finance instead of Yahoo Finance or others, Blogger instead of WordPress, its bookstore over others, etc.), applications (Google Apps instead of MS Office), and on and on.
That is a very tricky question. After all, aren’t those “search results” also? As I wrote eons ago in my book, this most certainly is a philosophical question. Back in 2005, I compared Yahoo’s approach to search with Google’s:
Yahoo makes no pretense of objectivity – it is clearly steering searchers toward its own editorial services, which it believes can satisfy the intent of the search. … Apparent in that sentiment lies a key distinction between Google and Yahoo. Yahoo is far more willing to have overt editorial and commercial agendas, and to let humans intervene in search results so as to create media that supports those agendas. Google, on the other hand, is repelled by the idea of becoming a content- or editorially-driven company. While both companies can ostensibly lay claim to the mission of “organizing the world’s information and making it accessible” (though only Google actually claims that line as its mission), they approach the task with vastly different stances. Google sees the problem as one that can be solved mainly through technology – clever algorithms and sheer computational horsepower will prevail. Humans enter the search picture only when algorithms fail – and only then grudgingly. But Yahoo has always viewed the problem as one where human beings, with all their biases and brilliance, are integral to the solution.
I then predicted some conflict in the future:
But expect some tension over the next few years, in particular with regard to content. In late 2004, for example, Google announced they would be incorporating millions of library texts into its index, but made no announcements about the role the company might play in selling those texts. A month later, Google launched a video search service, but again, stayed mum on if and how it might participate in the sale of television shows and movies over the Internet.
Besides the obvious – I bet Google wishes it had gotten into content sales back in 2004, given the subsequent rise of iTunes – there’s still a massive tension here, between search services that the world believes to be “objective” and Google’s desire to compete given how the market it is in is evolving.
Not to belabor this, but here’s more from my book on this issue, which feels pertinent given the issues Google now faces, both in Europe and in the US with major content providers:
… for Google to put itself into the position of media middle man is a perilous gambit – in particular given that its corporate DNA eschews the almighty dollar as an arbiter of which content might rise to the top of the heap for a particular search. Playing middle man means that in the context of someone looking for a movie, Google will determine the most relevant result for terms like “slapstick comedy” or “romantic musical” or “jackie chan film.” For music, it means Google will determine what comes first for “usher,” but it also means it will have to determine what should come first when someone is looking for “hip hop.”
Who gets to be first in such a system? Who gets the traffic, the business, the profits? How do you determine, of all the possibilities, who wins and who loses? In the physical world, the answer is clear: whoever pays the most gets the positioning, whether it’s on the supermarket shelf or the bin-end of a record store. ….But Google, more likely than not, will attempt to come up with a clever technological solution that attempts to determine the most “objective” answer for any given term, be it “romantic comedy” or “hip hop.” Perhaps the ranking will be based on some mix of PageRank, download statistics, and Lord knows what else, but one thing will be certain: Google will never tell anyone how they came to the results they serve up. Which creates something of a Catch-22 when it comes to monetization. Will Hollywood really be willing to trust Google to distribute and sell their content absent the commercial world’s true ranking methodology: cold, hard cash?
…Search drives commerce, and commerce drives search. The two ends are meeting, inexorably, in the middle, and every major Internet player, from eBay to Microsoft, wants in. Google may be tops in search for now, but in time, being tops in search will certainly not be enough.
Clearly, as a new decade unfolds, search alone is not enough anymore, and my prediction that Google will protect itself with the shield of “objectivity” has been upended. But the question of how Google ties its massive lead in search to its new businesses in local, content, applications, and other major markets remains tricky, and at this point, quite unresolved.
Almost immediately after the Web 2.0 Summit last month, Tim O’Reilly and I sat down at an FM event and debriefed each other on what we learned. Here’s the video.
Wow, I’ve never seen this before. Check out Google’s post, responding to the New York Times story about a bad actor who had figured out a way to make a living leveraging what he saw as holes in Google’s approach to ranking.
How Google ranks is the subject of increasing scrutiny, including and particularly in Europe.
From Google’s blog:
Even though our initial analysis pointed to this being an edge case and not a widespread problem in our search results, we immediately convened a team that looked carefully at the issue. That team developed an initial algorithmic solution, implemented it, and the solution is already live.
What I find fascinating is the way Google handled this. Read this carefully:
Instead, in the last few days we developed an algorithmic solution which detects the merchant from the Times article along with hundreds of other merchants that, in our opinion, provide an extremely poor user experience. The algorithm we incorporated into our search rankings represents an initial solution to this issue, and Google users are now getting a better experience as a result.
What word stands out? Yep, “opinion.”
Think on that for a second. If ever there was an argument that algorithms are subjective, there it is.
(Oh, and by the way, the last paragraph in the blog post clearly is directed at the regulators in Europe, if you think about it….)
One of the many reasons I find Twitter fascinating is that the company seems endlessly at an inflection point. Eighteen months ago I was tracking its inflection point in usage (holy shit, look how it’s growing! Then, holy shit, has it stopped?!), then its inflection in business model (hey, it doesn’t have one! Wait, yes it does, but can it scale?!), and more recently, its inflection point in terms of employees (as in growing from 80+ staff to 350+ in one year – necessitating a shift in management structure….).
Twitter now faces yet another inflection point – one I’ve been tracking for some time, and one that seems to be coming to a head. To me, that inflection has to do with usefulness – can the service corral all the goodness that exists in its network and figure out a way to make it useful to its hundreds of millions of users?
To me, this inflection point is perhaps its most challenging, and its greatest opportunity, because it encompasses all the others. If Twitter creates delightful instrumentations of the unique cacophony constantly crossing its servers, it wins big time. Users will never leave, marketers will never get enough, and employees will pine to join the movement (witness Facebook now, and Google five years ago).
Now, I’m not saying Twitter isn’t already a success. It is. The service has a dedicated core of millions who will never leave the service (I’m one of them). And I’m going to guess Twitter gets more resumes than it knows what to do with, so hiring isn’t the problem. And lastly, I’ve been told (by Ev, onstage at Web 2) that the company has more marketing demand than it can fulfill.
But therein lies the rub. Twitter has the potential to be much more, and everyone there knows it. It has millions of dedicated users, but it also has tens of millions who can’t quite figure out what the fuss is all about. And you can’t hire hundreds of engineers and product managers unless you have a job for them to do – a scaled platform that has, at its core, a product that everyone and their mother understands.
As for that last point – the surfeit of marketing demand – well that’s also a problem. Promoted tweets, trends, and accounts are a great start, but if you don’t have enough inventory to satisfy demand, you’ve not crossed the chasm from problem to opportunity.
In short, Twitter has a publishing problem. Put another way, it has a massive publishing opportunity.
Oh, I know, you’re saying “yeah Battelle, there you go again, thinking the whole world fits neatly into your favorite paradigm of publishing.”
Well yes, indeed, I do think that. To me, publishing is the art of determining what is worth paying attention to, by whom. And by that definition, Twitter most certainly is a publishing platform, one used by nearly 200 million folks.
The problem, of course, is that while Twitter makes it nearly effortless for folks to publish their own thoughts, it has done far too little to help those same folks glean value from the thoughts of others.
It was this simple truth that led FM to work with Microsoft to create ExecTweets, and AT&T to create the TitleTweets platform. It’s the same truth that led to the multi-pane interface of Tweetdeck as well as countless other Twitter apps, and it was with an eye toward addressing this problem that led to the introduction of Lists on Twitter.com and its associated APIs.
But while all those efforts are worthy, they haven’t yet solved the core problem or addressed the massive opportunity. At its core, publishing is about determining signal from noise. What’s extraordinary about Twitter is the complexity of that challenge – one man’s noise is another man’s signal, and vice versa. And what’s signal now may well be noise tomorrow – or two minutes from now. Multiply this by 200 million or so, then add an exponential element. Yow.
There is both art and science to addressing this challenge. What we broadly understand to be “the media” have approached the problem with a mostly one-to-many approach: We define an audience, determine what topics that audience will most likely want to pay attention to, then feed them our signal, one curated and culled from the noise of all possible information associated with those topics. Presto, we make Wired magazine, Oprah, or Serious Eats.
Facebook has done the same with information associated with our friend graph. The newsfeed is, for all intents and purposes, a publication created just for you. Sure, it has its drawbacks, but it’s pretty darn good (though its value is directly determined by the value you place in your Facebook friend graph. Mine, well, it don’t work so well, for reasons of my own doing).
So how might Twitter create such a publication for each of its users? As many have pointed out (including Twitter’s CEO Dick Costolo), Twitter isn’t a friend graph, it’s more of an interest graph, or put another way, an information graph – a massive set of points interconnected by contextual meaning. To the uninitiated, this graph is daunting.
Twitter’s current approach to navigating this graph centers around following human beings – at first with its “suggested users” list, which simply didn’t scale. Twitter soon replaced suggested users with “Who to follow” – a more sophisticated, algorithm-driven list of folks who seem to match your current set of followers and, to some extent, your interests. When you follow someone who’s a big foodie, for example, Twitter will suggest other folks who tweet about food. It does so, one presumes, by noting shared interests between users.
The question is, does Twitter infer those interests via the signal of who follows who, or does it do it by actually *understanding* what folks are tweeting about?
Therein, I might guess, lies the solution. The former is a proxy for a true interest graph – “Hey, follow these folks, because they seem to follow folks who are like the folks you already follow.” But the latter *is* an interest graph – “Hey, follow these folks, because they tweet about things you care about.”
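To make that contrast concrete, here’s a toy sketch of the two signals – co-follow overlap as a proxy, versus shared tweet vocabulary as a genuine interest match. All the names and data are hypothetical, and this is of course nothing like Twitter’s actual implementation:

```python
# Toy contrast between the two suggestion signals:
# (1) follow-graph proxy: suggest accounts followed by users whose
#     follow sets overlap with yours;
# (2) interest graph: suggest accounts whose tweet vocabulary
#     overlaps with yours.
# All names and data below are hypothetical.
from collections import Counter

follows = {  # who each user follows
    "you":   {"chef_a", "food_mag"},
    "alice": {"chef_a", "food_mag", "food_blog"},
    "bob":   {"chef_a", "tech_news"},
}

tweets = {  # terms extracted from each account's recent tweets
    "food_blog": {"recipe", "braise", "knife"},
    "tech_news": {"startup", "funding"},
    "you":       {"recipe", "knife", "rss"},
}

def suggest_by_cofollow(user):
    # Score accounts followed by users whose follow sets overlap yours.
    scores = Counter()
    for other, followed in follows.items():
        if other != user and follows[user] & followed:
            scores.update(followed - follows[user])
    return [name for name, _ in scores.most_common()]

def suggest_by_interest(user):
    # Rank accounts by shared tweet vocabulary with the user.
    scores = {acct: len(terms & tweets[user])
              for acct, terms in tweets.items() if acct != user}
    return sorted((a for a in scores if scores[a]), key=scores.get, reverse=True)

print(suggest_by_cofollow("you"))   # proxy: includes tech_news via bob
print(suggest_by_interest("you"))   # ['food_blog'] – actual shared interest
```

Note how the proxy drags in `tech_news` just because bob also follows a chef, while the interest-based signal surfaces only the account that actually tweets about what you tweet about.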
From that logically comes the ability to filter folks’ streams based on interests, and once you can do that, well, things get really…interesting. You could follow interests, instead of people, for example. It’s like search meets social! And hey – isn’t that kind of the holy grail?
If Twitter can make the interest graph explicit to its users, and develop products and features which surface that graph in real time, it wins on all counts. That is a very big problem to solve, and a massive opportunity to run after.
For more on this, read Making Twitter an Information Network, by Mike Champion, and “The unbearable lameness of web 2.0”, by Kristian Köhntopp, as well as the wonderful but too short The Algorithm + the Crowd are Not Enough, by Rand Fishkin. These and many more have informed my ongoing thinking on this topic. What do you think?
ATD is reporting that Google is offering well more than twice what had been previously offered – $6 billion, instead of $2.5 billion. That sounds more like it. As I wrote yesterday, $2.5 billion sounded very low for this particular asset. (And this from the guy who thought YouTube was overpriced.)
Clearly, that leak to Vator.tv last weekend was timed to push a deal point, I’m guessing.
Key to this deal is Marissa Mayer, who recently took over local for Google and was promoted to boot. This would be her defining deal, and the integration of the acquisition would be critical. Google has had mixed results in this department so far – YouTube is clearly on its way to being a winner, but took far too long to get there. Many small pickups have proven to be big winners – Applied Semantics comes to mind – and DoubleClick was a huge win. But Blogger, FeedBurner, and many other small acquisitions never really found their footing.
Today’s big rumor, so far at least, is that Google may be buying Groupon for a reported $2.5 billion. This sale has been rumored for some time, but the figure – and story – is based on a rather thin piece by VatorNews which broke early this morning. The headline is honest in its lack of, er, definitiveness – Google buys Groupon for $2.5 billion? – but it does claim one “reliable source.”
Whether or not this story plays out, my first thought was that the company is worth far more than $2.5 billion. If, as many have reported, Groupon is doing $50mm in revenue a month, that’s a mere 4.2x multiple on current run rate. Given the insanely strategic nature of the purchase to not only Google, but just about every other major player in the game (Microsoft, Yahoo, eBay, Amazon, hell, American Express if they wanted to play, not to mention Facebook), I’d be utterly stunned if Google won the company for that price. Location is a key signal connecting commerce, search, social, and small business. It’s a big, big deal, and Groupon is the leader in the space.
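The multiple is simple arithmetic on the reported numbers:

```python
# Checking the multiple cited above: $50M/month in reported revenue,
# annualized, against the rumored $2.5B price.
monthly_revenue = 50_000_000
rumored_price = 2_500_000_000

annual_run_rate = monthly_revenue * 12        # $600M/year
multiple = rumored_price / annual_run_rate
print(round(multiple, 1))                     # 4.2
```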
My spidey senses tell me someone is shopping this deal through a leak to the press, seeking to drive an auction. We’ll see.