Top 100+ Search Engines

A list of the Top 100+ Search Engines of 2007, graciously compiled by SEO expert Charles Knight, with some editorial tinkering. Thanks Charles.

A9, amazon.com, answerbag, AOL, Ask.com, Ask.mobile, askville, AURA!, Baidu, bessed, blinkx, boing, ChaCha, ClipBlast!, Clusty, collarity, CometQ, decipho, del.icio.us, digg, digg labs swarm, eurekster, exalead, Feedster, FINDITALL, GIGABLAST, Google, Google Code Search, Google Mobile, ICEROCKET, inQuira, ixquick, Jambo, Jyve, KartOO, keyCompete, Kosmix, krugle, KwMap, last.fm, like, Live QnA, LiveDeal, lurpo, MavenSearch, mnemomap, MS. DEWEY, mystrands, nayio, oodle, Opera Mini, Pagebull, pluggd, PreFound, Quintura kids, Quintura, retrevo, riya, ROLLYO, searchmash, SearchTheWeb2, SEO Discussion Search, Singing FISH, Skweezer, snap, Sphere, Sphider, SPURL.net, SpyFu, SQUIDOO, Srchr, SurfWax, Technorati, thefind.com, trulia, url.com, Vivisimo, Walhello, Webaroo, WEBBRAIN, What to RENT?, whonu?, WIKIO, Windows Live, wink, WiseNut, wondir, Yahoo! ANSWERS, Yahoo! MINDSET, Yahoo! Mobile, Yahoo! SEARCH, Yahoo! Search Builder, Yedda, yoono, yoople, ZABASEARCH, ZEBO, Zillow.com, Zippy, ZOO.COM, ZUULA

44 thoughts on “Top 100+ Search Engines”

  1. Is digg really a search engine? Maybe we need a definition of a search engine.

Something like: a search engine is any site that offers some way to request a specific piece of data.

  2. “This would be a perfect topic to use the SNAP previews

    See if perhaps it could be integrated with this topic; the thumbnails of those various search engines would be displayed as one hovers over each link”
    Good idea

  3. I wonder if, during the first round of search wars (AltaVista, HotBot, etc.), anyone devised a relatively unbiased way of measuring search accuracy. It would be interesting to see a metric for how often an engine returned what the user was looking for on the first page.

  4. Greetings everyone.

    This is my Top 100 list, and the criterion I used to start was “that looks interesting” – in other words, it was subjective, not scientific. But having said that:

    I appreciate any and all feedback on the topics: the Top 100 Search Engines for 2006; the Top 10 (out of those 100); the #1 Search Engine for 2006 (out of those 10; I picked ChaCha); and finally the Top 10 Search Engines to watch in (early) 2007.

    I will read and review every comment and use them to improve the list.

    Thanks to everyone.

    Charles Knight

  5. Dr. Pete writes: I wonder if, during the first round of search wars (AltaVista, HotBot, etc.), anyone devised a relatively unbiased way of measuring search accuracy.

    Pete, such relatively unbiased ways of measuring search accuracy have been around since the 1960s (Cyril Cleverdon and the Cranfield experiments). Those measures, utilizing concepts such as relevance, precision, and recall, are still used today with little change.

    The interesting question, IMHO, is not whether these measures exist. The interesting question is why the major search engines won’t submit themselves to a public, TREC-like evaluation of their algorithms. For example, let’s take 5000 queries, sampled randomly across popularity (high, medium, and low traffic queries) and across topic (computers, fashion, travel, etc.). Pool the results from the participating engines (GYMA and whoever else wants to participate) and spend a couple of trinkets from that $150 billion market cap to pay an independent third party to assess the relevance of, say, 50 documents for each query.

    Then it is simply a matter of using recall and precision to see which engine, on average, produces the best results!

    Oh, now THAT would be an innovation! Public evaluation!

    Of course, it makes sense why some would do this, and others wouldn’t. Public perception already rewards some engines as better than others. An engine that is already a popular favorite really does not benefit from such public evaluation. If it wins the evaluation, it garners no more favor than it already had. If it loses, or even just ties for first, then it knocks the engine down a peg, in public perception. So there is absolutely no incentive to participate.

    And yet, from a user’s standpoint, public, 3rd-party evaluation would be an innovative godsend. The public might even learn that certain engines are better than others, at particular topics! One engine might do really well on computer queries, another on fashion queries. One engine might be really good at handling popular queries, and another much better at long-tail queries.

    I would love to know which engine works best in which scenario. So, c’mon, GYMA: how about some consumer-oriented innovation in that sphere! I couldn’t care less about all your chat clients. How about some public evaluation innovation instead?
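    The recall-and-precision arithmetic described above is simple enough to sketch in a few lines of Python. Everything below is hypothetical: the document IDs, the relevance judgments, and the two engine rankings are invented for illustration, not real evaluation data.

    ```python
    def precision_recall_at_k(ranked, relevant, k=10):
        """Precision and recall of the top-k documents in a ranked list.

        ranked   -- list of document IDs, best first (one engine's results)
        relevant -- set of document IDs judged relevant by the assessor
        """
        top_k = ranked[:k]
        hits = sum(1 for doc in top_k if doc in relevant)
        precision = hits / k
        recall = hits / len(relevant) if relevant else 0.0
        return precision, recall

    # Hypothetical pooled judgments for one query: an independent assessor
    # marked d1, d3, and d5 relevant out of the pooled results.
    relevant = {"d1", "d3", "d5"}
    engine_a = ["d1", "d3", "d2", "d4", "d5"]  # engine A's ranking
    engine_b = ["d2", "d4", "d6", "d1", "d3"]  # engine B's ranking

    for name, ranking in [("A", engine_a), ("B", engine_b)]:
        p, r = precision_recall_at_k(ranking, relevant, k=5)
        print(f"engine {name}: P@5={p:.2f}, R@5={r:.2f}")
    ```

    Averaging these per-query scores over the full query sample, broken out by topic and popularity band, would give exactly the per-scenario comparison described above.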

  6. If we are going to call Amazon a search engine why not eBay or Wikipedia or Myspace for that matter?

    Interesting list but I’d love a bit more information on criteria and qualification.

  7. I like http://pizza.de — but I guess perhaps the difficulty of getting a hot, steaming delivery to the States might arguably disqualify it from Charles’s list. You can’t always get what you want….

    ;D nmw

JG’s thoughts on public evaluation are sound, but how does the public evaluate a result? It looks for meaning and relevance. Coning algorithms rate and rank the meaning in searched results. This gives relevance genuine thinking credibility. For more, see http://www.oracep.com and contact us for sample research.

Just completed reading “The Search”, John. Great read and a great book. Thanks for all the effort you have put into it.
    – just wanted to point out that you have left Yahoo Answers out of the list of search engines here; I can see Live QnA but not Yahoo Answers

  10. @JG – I’m glad to know that at least the research is out there. I’m a cognitive psychologist by training, and I’m amazed at how often no one bothers to even think about testing something that seems so readily testable. Even if the results of applying such testing to the major engines weren’t completely fair or accurate, it seems like they would kick-start a fascinating public conversation about how an ideal search engine should function.

Here are JG’s thoughts, Coned; public evaluation begins!

    Coning Index 90%: low-level background, high-level analysis and judgement.

    66%] Pete, such relatively unbiased ways of measuring search accuracy have been around since the 1960s (Cyril Cleverdon and the Cranfield experiments). Those measures, utilizing concepts such as relevance, precision, and recall, are still used today with little change.

    93%] The interesting question, IMHO, is not whether these measures exist. The interesting question is why the major search engines won’t submit themselves to a public, TREC-like evaluation of their algorithms. For example, let’s take 5000 queries, sampled randomly across popularity (high, medium, and low traffic queries) and across topic (computers, fashion, travel, etc.) Pool the results from the participating engines (GYMA and whoever else wants to participate) and spend a couple of trinkets from that $150 billion market cap to pay an independent third party to assess the relevance of, say, 50 documents for each query.

    100%] Then it is simply a matter of using recall and precision to see which engine, on average, produces the best results!

    100%] Oh, now THAT would be an innovation! Public evaluation!

    92%] Of course, it makes sense why some would do this, and others wouldn’t. Public perception already rewards some engines as better than others. An engine that is already a popular favorite really does not benefit from such public evalution. If it wins the evaluation, it garners no more favor than it already had. If it loses, or even just ties for first, then it knocks the engine down a peg, in public perception. So there is absolutely no incentive to participate.

    100%] And yet, from a user’s standpoint, public, 3rd-party evaluation would be an innovative godsend. The public might even learn that certain engines are better than others, at particular topics! One engine might do really well on computer queries, another on fashion queries. One engine might be really good at handling popular queries, and another much better at long-tail queries.

    75%] I would love to know which engine works best in which scenario. So, c’mon GYMA.. how about some consumer-oriented innovation in that sphere! I could care less about all your chat clients. How about some public evaluation innovation, instead?

    Want to know more? http://www.oracep.com

Am I blind, or are you telling me that live.com isn’t part of the list? You have the QnA, but not the main site?

Quite impressive to see Technorati being more successful than Amazon at search. I think the clean interface and the clear value to the user are the main reasons for Technorati’s success.

    Thank you for sharing this story with me!
