Help Me With a Google-Backed Panel Friday: On CrowdSourcing | John Battelle's Search Blog

Help Me With a Google-Backed Panel Friday: On CrowdSourcing

By John Battelle - March 06, 2008

Tomorrow I’ll be moderating a panel at Stanford on a fun topic: crowdsourcing. In fact, the discussion will center on completing this partial statement: “Crowdsourcing will never work to do….”

I’ve got three great panelists who are going to help get the conversation started. They are NYU’s Jay Rosen (more on him here), Google’s Hal Varian (here), and Current TV’s Joel Hyatt (here).

Jay plans to argue that crowdsourcing will never work to create new things.

Hal plans to argue that crowdsourcing will never get you an excess return on the stock market.

And Joel plans to argue that crowdsourcing isn’t an easier way to do business; it’s a *different* way to do business, with different processes involved and different skills required.

What I want to do here is a bit of my own crowdsourcing…. what do you think crowdsourcing can NOT do, and what do you make of our panelists’ ideas?


34 thoughts on “Help Me With a Google-Backed Panel Friday: On CrowdSourcing”

  1. Steve Flinn says:

    Quick thoughts:

    - Although I am sympathetic with the spirit of Jay’s position, “create new things” is too broad — certainly crowdsourcing can create new things, by virtue of combinatoric effects if nothing else. Where it will struggle vs individuals and small groups is to produce elegant new things, where elegance = simplicity + consistency.

    - Hal’s position just seems like an extension of the more general truism that crowdsourcing with a smaller crowd will be outperformed by a larger crowd (the whole market).

    - Joel’s position doesn’t seem to me to have enough crispness of definition to be verified one way or another, so the discussion will generate more heat than light, I suspect.

  2. Perry says:

    Umm, how about…

    “Crowdsourcing cannot replace original thought and analysis – as in preparing for conference speaking engagements you’ve signed up for and don’t have time allocated to stop, think and prepare for.”

    sorry, couldn’t resist, John

    ;)

  3. I think Jay has the big one: crowds are good at filtering and vetting, but not creating.

    Off the top of my head, I think it’s worth noting that crowdsourcing as we know it is not self-governing. We have seen it work well in highly structured environments, but as soon as there is a chance, people try to game the system or use it to their own advantage, not for the collective good. Whether it’s the Digg bury brigade or people defacing Wikipedia, crowdsourcing may be self-policing, but it needs a larger framework, and often administrators, to make sure things stay in check.

    That’s just the first thing that comes to mind. I am sure some brainstorming will pull up lots of other great ideas…

  4. John Battelle says:

    Alright Perry, now, settle down! I’m prepared, I think…we’ve done the conference call, we’ve read the memos, but I do really want to see if I might get ideas from this rather elite “crowd” here at Searchblog…

  5. David Alpert says:

    Regarding “create new things”: Depending how you look at it, the entire notion of a capitalist economy is itself a form of crowdsourcing. The “company” is the nation, the “site” the economy. It’s a system set up in such a way that the masses can participate, rather than just the “employees” (government people) having to create everything. And it results in a better site (economy).

    In that sense, crowdsourcing sure does create new things, since a crowdsourced economy creates more new things than a non-crowdsourced economy.

  6. Perry says:

    I actually blogged a bit about this last year on the “wisdumb of crowds” – more aligned to observations for local search and shopping. However, I do believe there is a “flattening effect” in crowd behavior that takes the edge off original ideation and expression. In publishing uses, in one dimension crowdsourcing feels like “Reader’s Digest 2.0”, if that makes sense.

    http://evansink.com/2007/08/19/the-wisdumb-of-crowds/

    I hope it helps (can’t do any worse than my original smart ass comment!)

    good luck with the session, sounds like a great panel.

  7. Bertil Hatt says:

    I have to disagree with Hal Varian (not an easy feat), but what he will be talking about is prediction markets at Google: a very good example of crowdsourcing that gives him better insights, which can help him beat the market. You can argue that it takes the larger crowd, or that I am self-contradictory. Nonetheless:
    1. the Google experiment proved (physically) that central players had an advantage;
    2. markets are imperfect: a clever prediction market will help you defeat a clumsy, gregarious, uninformed market.

    Regarding “new,” I’ll have to argue that what is new is decided by society: if you offer something unique, but no one sees the added value or the originality (it’s just different, but doesn’t expand your horizon), too bad. The crowd decides what is part of the world, and therefore what is new to it. Disco is new to the younger generation: DJs spinning electronic piano tunes are successful regardless of how familiar their music is; what matters is how original the idea of picking that style now seems.

    I’ll have to agree with Joel, although I can’t think of an easy way to do business (if he has any). What is new is that crowdsourcing can make certain things significantly cheaper and shift the transaction costs, and therefore the control dynamics and, finally, where the added value sits.

  8. derek tumolo says:

    I’m gonna riff a bit on what Sal and David said above. David talks about how the market and the crowd are differing scales of the same forces, and Sal highlights the fact that crowds without rules become unruly mobs.
    The market is subject to the rules of the invisible hand and Darwinian business competition. Discussions on the internet in general are not. This is why you get trolls, flame wars, etc. This is also why Wikipedia, Slashdot and others work: they have created mechanisms to direct the mob. There’s a reason the Nobel Prize went to researchers of mechanism design. It’s a huge field because, for the first time, we’re operating outside the normal constraints of material wealth.
    To tie this back to the original question about crowdsourcing, I think it can do absolutely anything given the proper mechanism to work in. If a pile of goo can spontaneously create life through feedback loops, imagine what intelligence can do with the same tools.

  9. JG says:

    Crowdsourcing will never teach us something that we didn’t already know.

    Maybe not everyone knew it, in which case crowdsourcing is a good mechanism for the dissemination of information.

    But it will never make those key inferences that are necessary to truly gain new insight or wisdom.

  10. Bill S says:

    The constant pressure for crowds to participate, share, collaborate, contribute, rate, tag and interact and generally do the hard work that computers cannot, will lead to a widespread crowd strike followed by a crowd demand for higher wages, followed by an uprising in robot and algorithm sourcing.

  11. Jeff Tadie says:

    Hmmm, thinking of the historical analogy of early search progressing from search/recall of structured data to algorithmic and relevancy search of unstructured data… the converse could be applied to present (and massive) social and blogging properties: if you added structure to the data and summed the data for analysis, you could reverse-integrate the effects of crowdsourcing, but with massive volumes of data. 80% is junk, of course, but 20% is relevant and still massive.

  12. Mark says:

    “crowdsourcing will never work to create new things”

    What of the open source movement?
    Netflix’s competition to rewrite their movie recommendation engine?

  13. Google is bidding for Digg, along with Microsoft and two news organizations – according to a leak to Mike Arrington.

    Digg and other social news and bookmarking sites are examples of crowdsourcing in the Web 2.0 era.

    Apparently it is working quite well; just look at Wikipedia and DMOZ. Now even large, traditional news organizations are inviting citizen journalism to enhance their online output.

    Open source projects during the Web 1.0 era like Linux and The World Wide Web are examples of crowdsourcing.

    It appears that when something has long-term value to society, people will contribute to and nurture it. Ultimately, the cream rises to the top. This is when the benefits of crowdsourcing shine the most.

  14. nmw says:

    1 thing that REQUIRES a crowd is language: if you say potayto and I say potahto, then let’s call the whole thing off!

    :P nmw

    ps/btw: re markets “In economics, a market is a social structure developed to facilitate the exchange of rights, services or product ownership.” (that’s wikipedia.org, so it MUST be true :P) Since it’s a “social structure”, it is UNTHINKABLE without some kind of “crowd” behind it (or even CREATING it).

  15. JG says:

    “crowdsourcing will never work to create new things”
    What of the open source movement?

    Most successful open source projects have only a small core of dedicated contributors. The crowd, if it participates in actually writing/checking in code, really only contributes by fixing bugs. The crowd doesn’t actually create new things in the project, i.e. new interfaces, new user interaction models, new protocols, etc. The crowd only tweaks what is already there. In that sense, it is no different from a large, non-open-source project releasing an early alpha or beta version to the public for testing. The crowd isn’t really creating anything new.

    Netflix’s competition to rewrite their movie recommendation engine?

    And, let me ask you: which contribution won the competition? What statistical technique did that contribution utilize? That’s right: an ensemble method won. In other words, a method that simply took the recommendation output from a bunch of other recommendation engines, applied some boosting or bagging, slapped its own name on the output, and was done.

    Nothing really new was created, no new insights were gained or learned. At the end of the day we really do not know anything more about how to do good machine-based recommendation. We just know that if we apply an ensemble method, results go up by a couple of percentage points. I could have told you that sorta thing works, fifteen years ago.
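    The kind of ensemble “blend” described above can be sketched in a few lines. This is purely illustrative: the model names, ratings, and weights below are invented, not taken from any actual Netflix Prize entry.

    ```python
    # Illustrative sketch of an ensemble "blend": a weighted average of
    # several recommenders' predicted ratings. All numbers are made up.

    def blend(predictions, weights):
        """Weighted average of per-model rating predictions."""
        total = sum(weights)
        n = len(predictions[0])
        return [sum(w * p[i] for w, p in zip(weights, predictions)) / total
                for i in range(n)]

    # Three hypothetical models' predicted ratings for the same five movies:
    model_a = [3.8, 2.1, 4.5, 3.0, 1.9]
    model_b = [4.0, 2.4, 4.2, 3.3, 2.2]
    model_c = [3.5, 2.0, 4.8, 2.9, 1.7]

    # In practice the weights might come from each model's held-out error;
    # here they are arbitrary.
    blended = blend([model_a, model_b, model_c], [1.2, 1.0, 0.8])
    print(blended)
    ```

    The point JG makes stands either way: the blend can only average what the component models already predict; it never produces a prediction outside their span.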

  16. Mark says:

    “The crowd only tweaks what is already there.”

    Absolutely not. Take Google Web Toolkit (GWT): an open source framework, at the core maintained by some real propellerheads, with a thousand lesser mortals out there developing open source UI widgets like calendars, sliders, etc. that sit on the framework.

    Re Netflix: ensemble methods only? Read the recent Wired article, which singles out a loner doing well with a new angle. Novel or otherwise, the crowd quickly surpassed what Netflix’s internal R&D team spent years developing, once the problem was opened up.

  17. JG says:

    Absolutely not. Take Google Web Toolkit (GWT): an open source framework, at the core maintained by some real propellerheads, with a thousand lesser mortals out there developing open source UI widgets like calendars, sliders, etc. that sit on the framework.

    So where is the “crowd” in this? As you say, the core is maintained by experts, not by the crowd. And of the thousands of widgets, how much “crowd” goes into any one widget? Any one widget is probably both developed and maintained by anywhere from 1 to 3 people. Not by the crowd.

    I think you’re confusing “large developer community” with “crowdsourcing”.

    And re: Wired article.. please post a link. Then I can comment. Thx.

  18. Mark says:

    “I think you’re confusing “large developer community” with “crowdsourcing”.

    The GWT framework and related widgets were a Google internal product. They could have sat on it and jealously guarded it as a competitive advantage. Instead they recognised it wasn’t what made Google, Google, and open sourced it where it could bloom as a technology.

    Netflix took a similar decision – they’re in the business of selling movies and took the step to turn over the thinking about recommendation to the crowds.

    As for the Netflix article, here you go:
    Wired article

    The Netflix guys certainly seemed to think they’d hit a wall internally: “If we knew how to do it, we’d have already done it,” said Reed Hastings, chief executive of Netflix, based in Los Gatos, Calif. “And we’re pretty darn good at this now. We’ve been doing it a long time.”

  19. Crowdsourcing can generate some really good ideas, but the problem is who picks them up after that stage. Who is going to do it, and who is going to pay for it….

  20. JG says:

    The GWT framework and related widgets were a Google internal product. They could have sat on it and jealously guarded it as a competitive advantage. Instead they recognised it wasn’t what made Google, Google, and open sourced it where it could bloom as a technology.

    But again, I think you’re confusing the development of applications using GWT with the GWT technology itself. GWT as a toolset or as a development platform is not the result of crowdsourcing. Any more than MS Windows is. GWT is the product of a small group of focused experts, not of the crowd.

    From that standpoint, MS Windows also lets a thousand flowers bloom — think about all the software over the years that has been developed for Windows, because Microsoft has published APIs. Right? Entire billion-dollar industries have been created, because MS realized that creating SDKs and driver development kits for the platform would be beneficial for all.

    But I think most of us would be hard pressed to call MS Windows “crowdsourcing”. Rather, it is just a large developer community.

    Crowdsourcing, at least as I understand it (and I might be understanding it incorrectly, I admit), is when a huge group works on *one* project. Thousands of applications that run on Windows are thousands of projects, not *one* project. Thousands of widgets developed using GWT are thousands of widgets, not one widget.

  21. JG says:

    As for the Netflix Wired article.. by the end of the article it finally gave the scores for BellKor (the ensemble method team) and for Potter (the lone gunman). And BellKor is still winning, at 8.57% improvement vs. Potter’s 8.07% improvement.

    So that seems to argue in favor of crowdsourcing, because the BellKor team is actually the team that participates in an open manner, collaboratively, with many of the other teams, and integrates everyone else’s methods using ensemble machine learning algorithms.

    But I guess my point is that, even with 8.57% improvement, what did Netflix learn from the BellKor method? What did they learn about how to do better movie recommendations? My feeling is that they did not really learn that much. Because the BellKor methods are not generalizable. They tend to overfit the training data, and do not actually say anything about how to solve the real problem.

    Let’s take, for example, that 8.57% improvement, and compare it against the quality of recommendations that you might get from talking with 4-5 of your friends. Friends who know you well. I’ll bet you get recommendations that are 40% better than the current Netflix recommendations, rather than 8.57% better. And your friends will do a really good job at telling you why they think you will like the movie. The quality of your friends’ recommendations will be much better than anything a machine can do.

    Crowdsourced methods will never be able to do what a small group of knowledgeable humans can do.

  22. Julian Birch says:

    Surely a stock market is crowdsourced to begin with, which would suggest that the technique isn’t going to be very useful for beating it. A technique that’s better than itself is going to be hard to find…

  23. Mark says:

    Crowdsourcing, …is when a huge group works on *one* project

    Or, if you trust the crowdsource that is Wikipedia’s definition: “taking a task traditionally performed by an employee or contractor, and outsourcing it to an undefined, generally large group of people, in the form of an open call”

    Google’s open-sourcing of GWT development and Netflix’s competition could fall under that definition.

    The question is, did these companies benefit from this activity and did the crowds rise to the challenge?

    I think you’d have to say “yes” in both of these cases.

  24. JG says:

    “taking a task traditionally performed by an employee or contractor, and outsourcing it to an undefined, generally large group of people, in the form of an open call”
    The question is, did these companies benefit from this activity and did the crowds rise to the challenge? I think you’d have to say “yes” in both of these cases.

    Well, again, what this immediately calls to mind is Microsoft’s need for the developer community at large to build interesting applications and games for the Windows platform. E.g. Microsoft developed DirectX (the same way Google developed GWT) and then needed lots of game developers to write to that specification, in order to make Windows a viable and exciting platform for gamers.

    There is really no way Microsoft could have developed every single PC game released over the last 15 years itself. It took tasks that it traditionally performed itself (game development, e.g. MS Flight Simulator) and outsourced it to a large crowd/community of developers. And as a result, MS Windows became a hugely successful, widely-desired platform, and MS benefited.

    In other words, MS Windows also crowdsourced, right?

    But if that’s your definition of crowdsourcing, well.. so what? So the history of software development is simply a history of crowdsourcing. Crowdsourcing becomes a meaningless term. Crowdsourcing simply means software development.

  25. Mark says:

    The windows analogy is not accurate or useful.

    Microsoft sell Windows. It is their main source of profit and core to their business. You don’t freely get the source code and likely never will. They want to popularise their platform because it means revenue.

    Google make no money from GWT. It is not core to their business. You do get the source to the whole thing. Why? In their words : “GWT took off much faster than we expected, and it quickly became clear that the most sensible way to advance GWT would be to open [source] it sooner rather than later”. This is from their FAQ.

    The whole of GWT is open (not just the “surface level” as in the Windows API) and there are people innovating at all levels in the software stack not just the “widget” level.

    It *is* different.
    This is about a business taking the decision to turn over thinking to outsiders in the hope they will advance that thinking.

    We are perhaps drawn back to the distinction between “platform” and “widgets” and “how many make a crowd?”. The more complex “platform” elements which you single out as the preferred subject of this discussion will invariably require greater skills, which are in shorter supply, and so there may not be such a huge “crowd” involved there. The size is not important. The point is Google found external resources that were both motivated and capable of advancing their thinking.

    Interesting discussion!

  26. JG says:

    Yes, I agree Mark: It is an interesting discussion. And I hope my repeated comments are taken as benignly and good-spiritedly as possible :-)

    But again, I have to disagree when you say:

    The windows analogy is not accurate or useful. Microsoft sell[s] Windows. It is their main source of profit and core to their business. You don’t freely get the source code and likely never will. They want to popularise their platform because it means revenue.

    You say that just because MS sells Windows, that somehow opening the DirectX API to gamers is not crowdsourcing? That in order to be crowdsourcing, there needs to be open source so that people can innovate at all levels of the stack? I simply do not buy that. Why does that have to be true? All you need is an open API, not open source behind that API. Above, you say that crowdsourcing is:

    “taking a task traditionally performed by an employee or contractor, and outsourcing it to an undefined, generally large group of people, in the form of an open call”

    An employee of Microsoft, writing a game for Microsoft Game Studios, is not going to have to crawl through the source code to Windows in order to do their job. All they are going to have to do is know what the APIs are, and how to call them.

    So if crowdsourcing is simply letting the crowd do what a MS employee might have done, and the MS employee didn’t need the Windows source, why should the crowd need it?

    And what does selling Windows have to do with anything? At the end of the day, Netflix also sells a service. Netflix gets a bunch of people to work on the problem for it, but at the end of the day in order to take general advantage of the solution (“test” data rather than “training” data) you need to become a Netflix member, i.e. pay money for the Netflix service.

    So does that mean Netflix is not crowdsourcing, because Netflix sells the service?

  27. Mark says:

    “And I hope my repeated comments are taken as benignly and good-spiritedly as possible”

    Absolutely – ditto for my comments.

    I now think any debate on what is or isn’t “crowdsourcing” is fruitless. It seems there can be many possible interpretations.

    I suppose most of my comments come from the personal frustration I have with typical short-sighted corporate mentality and their reluctance to “open up”. One example I bump into regularly is a distrust of open-source based on the unfounded belief that “the crowds” produce inferior software. Another example is the reluctance to share absolutely anything outside of the company for fear of losing “competitive advantage”. Another annoyance is the arrogance of the “not invented here” syndrome.

    For me, GWT and Netflix are positive examples of companies recognising the potential value of opening up to outsiders and having the courage and skill to harness that effectively.

  28. JG says:

    I suppose most of my comments come from the personal frustration I have with typical short-sighted corporate mentality and their reluctance to “open up”.

    I do have sympathy for this kind of frustration. I have felt it as well.

    One example I bump into regularly is a distrust of open-source based on the unfounded belief that “the crowds” produce inferior software.

    For what it’s worth, I am not one of those people that thinks crowds produce inferior software. I think crowds can produce software that is just as good as “professional” or closed software.

    Where I remain unconvinced is whether crowds can create better software than a small, closed group of “experts”. I think there is a feeling amongst folks (maybe) like yourself that the crowd can not only do just as well, but can also do better. I don’t believe it.

    I say this with no agenda against the crowd, or for the corporation. But I just do not see a lot of empirical evidence for revelatory, large-leap-forward crowd-developed software. Or movie recommendation. Or protocol development. Or user interface design.

    Another example is the reluctance to share absolutely anything outside of the company for fear of losing “competitive advantage”.

    I agree, that’s pretty lame, as well. In fact, I have seen certain companies simultaneously call for more openness in one arena while at the same time remain staunchly conservative and closed in other arenas.

    Another annoyance is the arrogance of the “not invented here” syndrome.

    I agree with you here, too.

    For me, GWT and Netflix are positive examples of companies recognising the potential value of opening up to outsiders and having the courage and skill to harness that effectively.

    Fair enough. Again, I wasn’t reacting to those particular concerns of yours, so much as I was reacting to the (what I believe to be largely unjustified) almost religious, dogmatic belief among many Web 2.0 entrepreneurs nowadays that the crowd can produce software/ ideas/ architectures/ designs/ user experiences/ etc. that are better than what a small, focused team can produce. Not as good as. But better. I just don’t see the evidence for that.

    For what it’s worth, I don’t see it as a corporation vs. crowd issue. I see this as a small, focused, unified group vs. crowd issue. That is to say, the small, focused, unified group does not necessarily have to be within a large corporation. It could exist in academia. Or it could exist in a proverbial Silicon Valley garage. But the insights and wisdom and intuitive forward leaps that get made happen, I believe, almost always within small, focused teams. Not within the crowd.

    The crowd has the ability to maintain software, to debug, and to copy features that it sees elsewhere. And by so doing it can produce software that is just as good as other software.

    But I have to agree with Jay Rosen, above. It does not work to create new things.

  29. Mark says:

    “I was reacting to the …dogmatic belief among many Web 2.0 entrepreneurs”

    You may find “Cult of the Amateur” an interesting read; it has this as a theme. I didn’t agree with much of what was said in the book, but it is an antidote to all this Web 2.0 zealotry.

    “I see this as a small, focused, unified group vs. crowd issue”

    Of course the old maxim “too many cooks spoil the broth” can hold here. For some small focused tasks a large crowd is undesirable.

    However, there is definitely something to be said for harnessing opinions of large crowds. I think the closest thing we currently have to “intelligent” systems are those informed by the collective opinions of many people. It’s that blend of algorithms *plus* many human viewpoints that can produce the smarts.

    Of course the viewpoints typically need filtering:
    * In Amazon reviews or PageRank, attempts to game the system must be filtered out (by the community and/or algorithms).
    * In Netflix, a very formal ranking measure (RMSE) produces a leaderboard.
    * In open-source projects, contributors have to earn their spurs and are only granted rights to change code after proving their worth.

    If you can implement an appropriate quality filter and put in place the necessary incentives then there is wisdom to be had in the crowds.
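    For concreteness, the RMSE measure mentioned above fits in a few lines; the ratings below are invented purely for illustration.

    ```python
    from math import sqrt

    def rmse(predicted, actual):
        """Root-mean-square error: the kind of formal quality filter
        a Netflix-style leaderboard ranks entries by (lower is better)."""
        return sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                    / len(actual))

    # Invented example ratings (1-5 stars):
    predicted = [3.5, 4.0, 2.0, 5.0]
    actual = [3.0, 4.5, 2.5, 4.0]
    print(rmse(predicted, actual))
    ```

    A single unambiguous number like this is what makes crowd submissions comparable at scale without anyone having to judge them by hand.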

  30. JG says:

    You may find “cult of the amateur” an interesting read that has this as a theme.

    Yes, I am familiar with the author. I haven’t read the book, though. I’ll check it out.

    However, there is definitely something to be said for harnessing opinions of large crowds. I think the closest thing we currently have to “intelligent” systems are those informed by the collective opinions of many people. It’s that blend of algorithms *plus* many human viewpoints that can produce the smarts.

    Yes, that’s true, but again: What smarts do such systems produce? What insights and wisdom do such systems offer?

    When I look at the evidence, yes, I see that such systems are good at filtering spam. But beyond that, the sorts of things that I learn from such systems are things like “people who bought hot dogs also bought hot dog buns” and “people who like The Rolling Stones also like The Who” and “people who watched Star Wars Episode IV also watched Star Wars Episode V”.

    In other words, “wisdom of crowds” predictions yield very safe, very obvious connections. How is that going to really help me? I already knew all that, about the hot dogs and The Who and The Empire Strikes Back. Are those bad recommendations? No, they are not bad. They are spot on, in fact. But they are so spot on, that anyone could have told you that. No need for a fancy crowd-aggregation mechanism and 400,000 servers to crunch the numbers.
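    The “hot dogs and hot dog buns” style of prediction JG describes is, in its simplest form, just co-occurrence counting. A toy sketch (with invented baskets; no real recommender is this simple) shows why the output is so safe and obvious:

    ```python
    from collections import Counter
    from itertools import combinations

    # Invented purchase baskets; each is a set of items bought together.
    baskets = [
        {"hot dogs", "hot dog buns", "mustard"},
        {"hot dogs", "hot dog buns"},
        {"hot dogs", "hot dog buns", "cola"},
        {"hot dogs", "mustard"},
        {"cola", "chips"},
    ]

    # Count how often each ordered pair of items shares a basket.
    co_counts = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(basket), 2):
            co_counts[(a, b)] += 1
            co_counts[(b, a)] += 1

    def recommend(item):
        """Items most often bought alongside `item`, most frequent first."""
        pairs = [(other, n) for (it, other), n in co_counts.items() if it == item]
        return [other for other, _ in sorted(pairs, key=lambda p: -p[1])]

    print(recommend("hot dogs"))
    ```

    The top recommendation is, inevitably, whatever co-occurs most often: the method can only surface the popular, already-known association, which is exactly JG’s complaint.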

    Which is why, in my very first post on this thread, I said: “Crowdsourcing will never teach us something that we didn’t already know… it will never make those key inferences that are necessary to truly gain new insight or wisdom.”

    If you can point to instances otherwise, I would be interested in hearing about them. But all too often, the glory of crowdsourcing comes in recommending hot dog buns to people who buy hot dogs. True, and relevant. But ultimately not very interesting and useful.

  31. Mark says:

    “wisdom of crowds” predictions yield very safe, very obvious connections

    I wouldn’t be so quick to dismiss the value of these knowledgebases. It’s what helped differentiate Google for a start.

    I haven’t read the book, though. I’ll check it out.

    …and just try telling me the crowd’s book reviews on Amazon won’t sway your judgement to buy one iota ;-)

    “it will never make those key inferences”
    If you can point to instances otherwise..

    So, back to soliciting crowd input for something more “meaty” than opinions on products…

    I think it’s hard to cite examples of “tough” problems being cracked by crowds; there has been limited opportunity to do so. Netflix is the only public example I have experience of. However, the lack of examples shouldn’t necessarily be viewed as an indictment of the skills of the amateur. It’s not always possible for corporations to present a problem the way Netflix did, where all the necessary ingredients for this innovation were in place, i.e.:
    * the company feels commercially comfortable airing the problem (and potential solutions) in the first place
    * the necessary incentives are in place to motivate outsiders
    * the problem does not require participants to have significant resources (e.g. not “curing cancer”)
    * there is a manageable quality filter for companies to assess the crowd feedback at scale.

    It’s rare that all these prerequisites are in place, but I don’t doubt that given these conditions answers are to be found. I like the Bill Joy quote: “No matter who you are, most of the smartest people work for someone else.”

  32. JG says:

    I wouldn’t be so quick to dismiss the value of these knowledgebases. It’s what helped differentiate Google for a start.

    No, no, I am not dismissing their value. Like I said above, crowd wisdom (of which PageRank is one example) does yield very safe (translation: acceptable, appropriate, relevant) predictions. Safety is not bad, just like comfort food is not bad. So I am not dismissing that value.

    It just does not yield any predictions that are all that surprising or insightful.

    The reason Google took off is that it succeeded in weeding out the spam to find the one, true home page for a person or company. Google’s main contribution to the information retrieval world was quick, effective navigational (i.e. “known item”) search. In other words, Google succeeds when the answer to your question is a “safe” prediction on which everyone can agree. So Google works for finding people’s home pages. Or company websites. Or product websites.

    But the utility of PageRank (and other related crowdsourced methods) tails off extremely quickly. It very quickly gets the obvious correct navigational pages. But when it comes to something that requires more insight, it breaks down. And fast!

    In other words, PageRank is good for recommending “The Who” for people that also like “The Rolling Stones”. But when it comes to finding lesser known bands, PageRank falls apart rather quickly.

    So try a little experiment with me. Go into your Google advanced settings and set it to return 20 results per page, rather than the regular 10. Then ask Google a question like “What was the cause of the invasion and war in Iraq” (or, translated to Google shorthand, “cause reasons war invasion iraq”): something for which there are literally hundreds of thousands of relevant web pages. And then do a little experiment. Look at the 15th result (or the 12th or the 18th, I don’t care). How good is that result? How relevant is that result? I’ll bet you 100 to 1 that it’s mediocre at best.

    Or, if you don’t like that query example, then extend our little experiment further. Set your advanced settings to return the top 20 hits, like before. But now, over the course of the next week, for every query you do, don’t immediately click the 1st or 2nd result. Take a few extra seconds of your time and go down to (again let’s say) the 15th result. And just look at it. Open it up in a new tab if you want. But look at it. Think about how relevant it either is or is not. Then look at how many pages Google returns (typically in the hundreds of thousands, if not millions). And ask yourself: in those millions of pages Google has in its index, do you really think that this result, ranked 15th, is better than those undisplayed millions? Ask yourself: has the PageRanked, linked wisdom of crowds succeeded in finding not just the obvious 1st or 2nd relevant page, but the 15th? The 30th? The 400th? There are topics for which there are literally hundreds of really good pages. How well does Google do in helping you find those? How quickly does the quality of the search results tail off, after the 5th or 6th ranked SERP?

    Don’t let me answer this for you. Answer it for yourself, by doing a little experiment over the next week and consistently looking at the 15th ranked result. Then, let’s meet back here in a week, and continue this discussion about the ability of crowdsourced methods to yield insight and discover knowledge.

    I’m serious. I could very well be wrong about this. That’s why you have to do the experiment, for yourself. I’ve done it for myself, but I’m only one person. But my feeling is that if crowdsourced methods were really able to “work to create new things”, i.e. to find those “ah hah” results that truly yield new and interesting insights, then the Google result at rank=15, for a query for which there are literally hundreds of thousands of hits, should be an extremely high quality result. See for yourself how often it either is or isn’t.

    So, want to meet back on this page on Monday, March 17th, and see how the experiment went?

    …and just try telling me the crowd’s book reviews on Amazon won’t sway your judgement to buy one iota ;-)

    Oh, sure, yes. Of course I use those, and those help. I like that Amazon has them.

    But I have gotten much better recommendations from a solitary friend (an “expert”) that already has a particular book and knows me well, or that has already used a particular product. I get more insight on photography books from my friend who is a professional photographer than I do from Amazon. I get more insight on gardening books from my hippie college roommate who tends her own garden, than I do from Amazon. I get more insight on cooking books from my friend that runs a restaurant, than I do from Amazon. I get more insight on Java programming books from my friend that works at Sun, than I do from Amazon.

    It is from those people with working, practicing knowledge of a particular subject that I get real insight. Not from crowd reviews.

    You may be right that more problems will emerge for which the right mixture of conditions exists, making crowdsourced methods possible. But to take your Bill Joy quote (“the smartest people work for someone else”) seriously: I would rather just go to those people and ask them directly for their recommendations, rather than rely on a watered-down, statistically averaged, crowdsourced filter.

  33. prefabrik says:

    Regarding “create new things”: depending on how you look at it, the entire notion of a capitalist economy is itself a form of crowdsourcing. The “company” is the nation, the “site” the economy. It’s a system set up in such a way that the masses can participate, rather than just the “employees” (government people) having to create everything. And it results in a better site (economy).

    In that sense, crowdsourcing sure does create new things, since a crowdsourced economy creates more new things than a non-crowdsourced economy.

  34. “crowdsourcing will never work to create new things”

    What of the open source movement?
    Netflix’s competition to rewrite their movie recommendation engine?

    Can anyone give me answers to these questions?
