Reader Henrik writes: Twitter is great for discovering new and interesting things, but for consistently good sources of information, nothing beats RSS.
It’s been a while since I’ve updated you on my Signal newsletter, which I do each day. Here’s the last week or so of them. If you want to read it in RSS, here’s the feed:
Here’s one for you, folks: A few folks “in the know” told me that a company is thinking about doing something with another company, but that second company has no idea about it, and in order for the whole thing to play out, a whole lot of things need to happen first, most of which are totally dependent on other things happening over which the sources have no control!
Great story, eh?
This piece is yet another example of the kind of “journalism” that is increasingly gaining traction in the tech world – pageview-baiting single-sourced speculation, with very little reporting, tossed online for the hell of it to see what happens.
It’s lazy, it’s unsupportable, and it’s tiresome.
To me, it’s not even about the crass commercialist drive to be “first” or to drive the most pageviews. It’s about the other side of the story – the sources. Reuters, in particular, as a bastion of “traditional” journalistic mores, should know that the “source” who gave this reporter the story has his or her own agenda. More likely than not, that agenda is to lend credibility to the idea that AOL and Yahoo should merge. It’s a huge disservice to the craft of journalism to let your obligations to your readers be so easily manipulated.
I miss the Industry Standard sometimes.
Yesterday I posted what was pretty much an offhand question – Is RSS Dead? I had been working on the FM Signal, a roundup of the day’s news I post over at the FM Blog. A big part of editing that daily roundup is spent staring into my RSS reader, which culls about 100 or so feeds for me.
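For anyone curious what that feed-culling actually involves under the hood: an RSS reader just fetches a pile of XML documents and merges their `<item>` entries into one stream. Here's a minimal sketch using Python's standard library, with two toy feeds and made-up URLs standing in for a real subscription list.

```python
import xml.etree.ElementTree as ET

def parse_rss_items(rss_xml):
    """Extract (title, link) pairs from an RSS 2.0 document string."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items

# A reader aggregates many feeds into one stream; here, two toy feeds
# (hypothetical titles and URLs, for illustration only).
feed_a = """<rss version="2.0"><channel><title>Searchblog</title>
<item><title>Is RSS Dead?</title><link>http://example.com/rss-dead</link></item>
</channel></rss>"""
feed_b = """<rss version="2.0"><channel><title>FM Signal</title>
<item><title>Daily Roundup</title><link>http://example.com/roundup</link></item>
</channel></rss>"""

stream = [entry for feed in (feed_a, feed_b) for entry in parse_rss_items(feed)]
```

A real reader would fetch each feed over HTTP on a schedule and deduplicate by GUID, but the core loop is just this: parse, merge, present.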
I realized I’ve been staring into an RSS reader for the better part of a decade now, and I recalled the various posts I’d recently seen (yes, via my RSS reader) about the death of RSS. Like this one, and this one, and even this one, from way back in 2006. All claimed RSS was over, and, for the most part, that Twitter killed it.
I wondered to myself – am I a dinosaur? I looked at Searchblog’s RSS counter, which has been steadily growing month after month, and realized it was well over 200,000 (yesterday it added 4K folks, from 207K to 211K). Are those folks all zombies or spam robots? I mean, why is it growing? Is the RSS-reading audience really out there?
So I asked. And man, did my RSS readers respond. More than 100 comments in less than a day – the second most I’ve ever gotten in that period of time, I think. And that’s from RSS readers – so they had to click out of their comfy reader environment, come over to the boring HTML web version of my site, pass the captcha/spam test, put in their email, and then write a comment. In short, they had to jump through a lot of hoops to let me know they were there. Hell, Scoble – a “super node” if ever there was one – even chimed in.
I’ve concluded that each comment someone takes the time to leave serves as a proxy for 100 or so folks who probably echo that sentiment, but don’t take the time to leave a missive. It’s my rough guess, but I think it’s in the ballpark, based on years of watching traffic flows and comment levels on my posts. So 100 comments in 24 hours equates to a major response on this small little site, and it’s worth contemplating the feedback.
One comment that stood out for me came from Ged Carroll, who wrote:
Many people are happy to graze Twitter, but the ‘super nodes’ that are the ‘social editors’ need a much more robust way to get content: RSS. If you like, RSS is the weapon of choice for the content apex predator, rather than the content herbivores.
A “content apex predator”! Interesting use of metaphor – but I think Ged is onto something here. At Federated, we’ve made a business of aligning ourselves with content creators who have proved themselves capable of convening an engaged and influential audience. That’s the heart of publishing – creating a community of readers/viewers/users who like what you have to say or the service you offer.
And while more and more folks are creating content of value on the web, that doesn’t mean they are all “publishers” in the sense of being professionals who make their living that way. Ged’s comment made me think of Gladwell’s “connectors” – there certainly is a class of folks on the web who derive and create value by processing, digesting, considering and publishing content, and not all of them are professionals in media (in fact, most of them aren’t).
In my post I posited that perhaps RSS was receding into a “Betamax” phase, where only “professionals” in my industry (media) would be users of it. I think I got that wrong, at least in spirit. There is most definitely an RSS cohort of sorts, but it’s not one of “media professionals.” Instead, I think “social editors” or “super nodes” is more spot on. These are the folks who feel compelled to consume a lot of ideas (mainly through the written word), process those ideas, and then create value by responding to or annotating those ideas. They derive social status and value by doing so – we reward people who provide these services with our attention and appreciation. They have more Twitter followers than the average bear. They probably have a blog (like Ged does). And they’re most likely the same folks who are driving the phenomenal growth of Tumblr.
Social editors who convene the largest audiences can actually go into business doing what they love – that’s the premise of FM’s initial business model.
But there are orders of magnitude more folks who do this well, but may not want to do it full time as a business, or who are content with the influence of an audience in the hundreds or thousands, as opposed to hundreds of thousands or millions, like many FM sites.
RSS is most certainly not dead, but as many commentators noted, it may evolve quite a bit in the coming years. It has so much going for it – it’s asynchronous, it’s flexible, it’s entirely subjective (in that you pick your feeds yourself, as opposed to how Facebook works), it allows for robust UIs to be built around it. It’s a fundamentally “Independent Web” technology.
But RSS also has major problems, in particular, there’s not a native monetization signal that goes with it. Yesterday’s post proves I have a lot of active RSS readers, but I don’t have a way to engage them with intelligent marketing (save running Pheedo or Adsense, which isn’t exactly what I’d call a high-end solution). I, like many others, pretty much gave up on RSS as a brand marketing vehicle a couple of years back. There was no way to “prove” folks were actually paying attention, and without that proof, marketers will only buy on the come – that’s why you see so many direct response ads running in RSS feeds.
It does seem that no one is really developing “for” RSS anymore.
Except, I am. At FM we use RSS in various robust fashions to pipe content from our network of hundreds (now thousands) of talented “social editors” into multitudes of marketing and content programs. (Check out FoodPress for just one example). We’ve even developed a product we call “superfeeds” that allows us to extend what is possible with RSS. In short, we’d be lost without RSS, and from the comments on my post, it seems a lot of other folks would be too, and in particular, folks who perform the critical role of “super node” or “social editor.”
So long live the social editor, and long live RSS. Perhaps it’s time to take another look at how we might find an appropriate monetization signal for the medium. I’m pretty sure that marketers would find conversing with a couple hundred thousand “super nodes” valuable – if only we could figure out a way to make that value work for all involved.
I’m usually the last guy to know, and the first to admit it, but is RSS really dead? I keep seeing posts claiming Twitter and Facebook have essentially replaced RSS as the way folks filter their news these days, but I for one am still addicted to my RSS client (it’s Shrook, for anyone who still cares).
Perhaps RSS isn’t dead, but instead it’s professionalizing. It’s the Beta to the VHS of Twitter. Higher quality, better signal, but more expensive in terms of time, and used only by folks “in the industry.”
I write, every single day (especially with Signal), and I consume a lot of feeds in order to do that. I need a professional tool that lets me do that efficiently, and so far nothing beats an RSS reader. But I’m serious about my feeds, and most folks, I guess, aren’t.
Or are you? I mean, sure, Feedburner is languishing over at Google, I hear, but hell, I have 207,000 readers consuming my feed – at least, that’s what Google tells me. And that’s up from about 170K earlier this year. Are you out there, RSS readers? Or am I blasting XML into a ghost town?
Just wonderin’. Shout out if you’re here, guys. And shout out if you’re reading this because someone pointed to it on Twitter….
Yesterday’s post on Google having an algorithmic “opinion” about which reviews were negative or positive sparked a thoughtful response from Matt Cutts, Google’s point person on search quality, and for me raised a larger question about Google’s past, present, and future.
In his initial comment (which is *his* opinion, not Google’s, I am sure), Cutts remarked:
“…the “opinion” in that sentence refers to the fact our web search results are protected speech in the First Amendment sense. Court cases in the U.S. (search for SearchKing or Kinderstart) have ruled that Google’s search results are opinion. This particular situation serves to demonstrate that fact: Google decided to write an algorithm to tackle the issue reported in the New York Times. We chose which signals to incorporate and how to blend them. Ultimately, although the results that emerge from that process are algorithmic, I would absolutely defend that they’re also our opinion as well, not some mathematically objective truth.”
While Matt is simply conversing on a blog post, the point he makes is not just a legal nit, it’s a core defense of Google’s entire business model. In two key court cases, Google has prevailed with a first amendment defense. Matt reviews these in his second comment:
“SearchKing sued Google and the resulting court case ruled that Google’s actions were protected under the first amendment. Later, KinderStart sued Google. You would think that the SearchKing case would cover the issue, but part of KinderStart’s argument was that Google talked about the mathematical aspects of PageRank in our website documentation. KinderStart not only lost that lawsuit, but KinderStart’s lawyer was sanctioned for making claims he couldn’t back up… After the KinderStart lawsuit, we went through our website documentation. Even though Google won the case, we tried to clarify where possible that although we employ algorithms in our rankings, ultimately we consider our search results to be our opinion.”
The key point, however, is made a bit later, and it’s worth highlighting:
“(the) courts have agreed … that there’s no universally agreed-upon way to rank search results in response to a query. Therefore, web rankings (even if generated by an algorithm) are an expression of that search engine’s particular philosophy.”
Matt reminded us that he’s made this point before, on Searchblog four years ago:
“When savvy people think about Google, they think about algorithms, and algorithms are an important part of Google. But algorithms aren’t magic; they don’t leap fully-formed from computers like Athena bursting from the head of Zeus. Algorithms are written by people. People have to decide the starting points and inputs to algorithms. And quite often, those inputs are based on human contributions in some way.”
Back then, Matt also took pains to point out that his words were his opinion, not Google’s.
So let me pivot from Matt’s opinion to mine. All of this is fraught, to my mind, with implications of the looming European investigation. The point of the European action, it seems to me, is to find a smoking gun that proves Google is using a “natural monopoly” in search to favor its own products over those of competitors.
Danny has pointed out the absurdity of such an investigation if the point is to prove Google favors its search results over the search results of competitors like Bing or others. But I think the case will turn on different products, or perhaps, a different definition of what constitutes “search results.” The question isn’t whether Google should show competitors’ standard search results, it’s whether Google favors its owned and operated services, such as those in local (Google Places instead of Foursquare, Facebook etc), commerce (Checkout instead of Paypal), video (YouTube instead of Hulu etc.), content (Google Finance instead of Yahoo Finance or others, Blogger instead of WordPress, its bookstore over others, etc.), applications (Google Apps instead of MS Office), and on and on.
That is a very tricky question. After all, aren’t those “search results” also? As I wrote eons ago in my book, this most certainly is a philosophical question. Back in 2005, I compared Yahoo’s approach to search with Google’s:
Yahoo makes no pretense of objectivity – it is clearly steering searchers toward its own editorial services, which it believes can satisfy the intent of the search. … Apparent in that sentiment lies a key distinction between Google and Yahoo. Yahoo is far more willing to have overt editorial and commercial agendas, and to let humans intervene in search results so as to create media that supports those agendas. Google, on the other hand, is repelled by the idea of becoming a content- or editorially-driven company. While both companies can ostensibly lay claim to the mission of “organizing the world’s information and making it accessible” (though only Google actually claims that line as its mission), they approach the task with vastly different stances. Google sees the problem as one that can be solved mainly through technology – clever algorithms and sheer computational horsepower will prevail. Humans enter the search picture only when algorithms fail – and only then grudgingly. But Yahoo has always viewed the problem as one where human beings, with all their biases and brilliance, are integral to the solution.
I then predicted some conflict in the future:
But expect some tension over the next few years, in particular with regard to content. In late 2004, for example, Google announced they would be incorporating millions of library texts into its index, but made no announcements about the role the company might play in selling those texts. A month later, Google launched a video search service, but again, stayed mum on if and how it might participate in the sale of television shows and movies over the Internet.
Besides the obvious – I bet Google wishes it had gotten into content sales back in 2004, given the subsequent rise of iTunes – there’s still a massive tension here, between search services that the world believes to be “objective” and Google’s desire to compete given how the market it is in is evolving.
Not to belabor this, but here’s more from my book on this issue, which feels pertinent given the issues Google now faces, both in Europe and in the US with major content providers:
… for Google to put itself into the position of media middle man is a perilous gambit – in particular given that its corporate DNA eschews the almighty dollar as an arbiter of which content might rise to the top of the heap for a particular search. Playing middle man means that in the context of someone looking for a movie, Google will determine the most relevant result for terms like “slapstick comedy” or “romantic musical” or “jackie chan film.” For music, it means Google will determine what comes first for “usher,” but it also means it will have to determine what should come first when someone is looking for “hip hop.”
Who gets to be first in such a system? Who gets the traffic, the business, the profits? How do you determine, of all the possibilities, who wins and who loses? In the physical world, the answer is clear: whoever pays the most gets the positioning, whether it’s on the supermarket shelf or the bin-end of a record store. ….But Google, more likely than not, will attempt to come up with a clever technological solution that attempts to determine the most “objective” answer for any given term, be it “romantic comedy” or “hip hop.” Perhaps the ranking will be based on some mix of PageRank, download statistics, and Lord knows what else, but one thing will be certain: Google will never tell anyone how they came to the results they serve up. Which creates something of a Catch-22 when it comes to monetization. Will Hollywood really be willing to trust Google to distribute and sell their content absent the commercial world’s true ranking methodology: cold, hard cash?
…Search drives commerce, and commerce drives search. The two ends are meeting, inexorably, in the middle, and every major Internet player, from eBay to Microsoft, wants in. Google may be tops in search for now, but in time, being tops in search will certainly not be enough.
Clearly, as a new decade unfolds, search alone is not enough anymore, and my prediction that Google will protect itself with the shield of “objectivity” has been upended. But the question of how Google ties its massive lead in search to its new businesses in local, content, applications, and other major markets remains tricky, and at this point, quite unresolved.
Almost immediately after the Web 2.0 Summit last month, Tim O’Reilly and I sat down at an FM event and debriefed each other on what we learned. Here’s the video.
Wow, I’ve never seen this before. Check out Google’s post, responding to the New York Times story about a bad actor who had figured out a way to make a living leveraging what he saw as holes in Google’s approach to ranking.
How Google ranks is the subject of increasing scrutiny, including and particularly in Europe.
From Google’s blog:
Even though our initial analysis pointed to this being an edge case and not a widespread problem in our search results, we immediately convened a team that looked carefully at the issue. That team developed an initial algorithmic solution, implemented it, and the solution is already live.
What I find fascinating is the way Google handled this. Read this carefully:
Instead, in the last few days we developed an algorithmic solution which detects the merchant from the Times article along with hundreds of other merchants that, in our opinion, provide an extremely poor user experience. The algorithm we incorporated into our search rankings represents an initial solution to this issue, and Google users are now getting a better experience as a result.
What word stands out? Yep, “opinion.”
Think on that for a second. If ever there was an argument that algorithms are subjective, there it is.
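To make the point concrete: this is not Google’s actual algorithm, but here’s a minimal sketch, with made-up signals and weights, of why any ranking produced by blending chosen signals is an editorial choice. The same data, scored under two different weightings, produces two different orderings – the “opinion” lives in the weights.

```python
def rank(results, weights):
    """Score each result by a weighted blend of its signals, highest first.
    The choice of which signals count, and how much, is the 'opinion':
    change the weights and a different ordering emerges from the same data."""
    def score(signals):
        return sum(weights.get(name, 0.0) * value for name, value in signals.items())
    return sorted(results, key=lambda r: score(r["signals"]), reverse=True)

# Hypothetical pages with invented signal values (not real Google signals).
pages = [
    {"url": "merchant-a", "signals": {"links": 0.9, "sentiment": 0.1}},
    {"url": "merchant-b", "signals": {"links": 0.4, "sentiment": 0.8}},
]

# Weighting links alone favors the heavily-linked merchant...
links_only = rank(pages, {"links": 1.0})
# ...but blending in a sentiment signal flips the ordering.
with_sentiment = rank(pages, {"links": 0.5, "sentiment": 0.5})
```

Under the links-only weighting, merchant-a wins; blend in sentiment fifty-fifty and merchant-b comes out on top. Neither ordering is a “mathematically objective truth” – each is a consequence of someone deciding which signals matter.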
(Oh, and by the way, the last paragraph in the blog post clearly is directed at the regulators in Europe, if you think about it….)