Google Steps Into Music Search

But as per usual, Google’s approach is to not have business deals with anyone around the intent of researching, finding, and obtaining music. I honestly think that at some point this will change. SEW has a good write-up with comparisons to other engines’ approaches.

In the book I spend a few pages comparing how Yahoo does music search vs. Google. The one-box shortcut was a big difference, as was the fact that with Yahoo, you can actually buy the music. Google has now addressed the one box, and is also making it easy to buy music, though from partners they seem to have chosen at random (rather than the best or the most popular music sales sites on the web), much as they have with travel and other categories.

Google’s blog post.

4 thoughts on “Google Steps Into Music Search”

  1. Gah, I can’t stand announcements like this. Let us be clear here: Google is not doing music search. Google is doing text search, on text information it has (either automatically or manually or both) classified as music-related.

    If you want true music search, where the actual music-perceptual content of the song itself is used to find similar (“relevant”) songs, you have to look to a company like Pandora or to the research going on at conferences such as ISMIR. All these folks are looking at the actual content of the music signal itself…not text, not clickstreams, not Amazon-ian related buying patterns. As such, they can get at what is truly similar between two different pieces of music, and let me really find new songs that are relevant to my music information needs.

    The personally sad and/or frustrating thing about low-quality offerings like this from Google is that it dilutes the public understanding of what music information retrieval/music search really is and can be. When such a high-quality company sets the bar so low, it makes it difficult for smaller players to rise, because the public thinks “Oh, Google already does that”.

    To be fair, none of the others (Yahoo!, MSN) are doing content-based music search, yet, either. So Google is hardly unique in setting the bar so low. I guess I’ve just come to expect more from Google’s offerings.

  2. JG, there is no “truth” about what is similar between two songs. Sure, content analysis can tell you that two songs have similar rhythms or melodies, but that is not necessarily how humans judge similarity. Musical taste and judgements about things like genre involve a whole host of other factors that have nothing to do with “the actual content of the music signal,” which is why the best music search engines will combine signal analysis with non-content-based metadata, expert judgments, trend tracking, etc. (see for example Brian Whitman’s work). Of these, I would say signal analysis is probably the *least* important. In fact, engines that rely too heavily on signal analysis risk pissing off users who will insist that two songs from different eras/subcultures/countries sound nothing alike, even though, from a pure content-focused point of view, they do.

  3. Well, well, Ryan, not surprised to find you here. How’re Yahoo’s music search efforts coming along? 😉

    As for JG’s comments… I can certainly relate. While I definitely understand your cautionary comments about content analysis, I nonetheless find Google’s current offering quite underwhelming.

    I’d think most folks already have a preferred online store or two, and frankly it’d seem much easier to simply search that store directly to sample and/or purchase specific music instead of using Google as an intermediary. Plus Google’s offering offers nothing in the realm of discovery or sharing/community-oriented features.

    * * *

    I recall reading a rumor online that L&S initially wanted to actually provide aural snippets, but were threatened by the (consistently shortsighted and anti-consumer) record labels. I guess a battle — however justified — with one dinosaur (the book publishing industry) was deemed enough for Google to stomach at one time.

    * * *

    In an ideal world, Google — along with Yahoo and the IETF or whatever that board is called — would develop open source standards for referring to artists, albums, and songs… and would work together to create a communal database of searchable aural and lyric samples.

    Then the key differentiators amongst the services would be the UI, how the information snippets are displayed, how music discovery is handled, and so on.

    But given the record labels’ asinine smack-down of (for which I’m still extremely bitter), I suppose we shouldn’t expect to see a whole lot of innovation in this space anytime soon.

  4. Ryan: Yes, I overspoke… “truth” is probably too strong of a word. I guess what I mean is that any technique for determining music similarity that does not include acoustic properties (i.e. rhythm, tempo, singer timbre, instrument timbre, harmonic characteristics, etc.) is not “music” retrieval. It is “text retrieval on music” (in Google’s case) or “behavioral retrieval on music” (in Amazon’s “Customers who also bought…” case). In these latter cases, “music” has nothing to do with the actual matching.

    I am indeed familiar with Whitman’s work, and I agree that a combination of methods is required, and that what constitutes a similar or relevant match is subjective. I mean, people differ all the time even in web search, where the relevance of a page is different, for different people, even if they issue the same exact textual query. It all comes down to the user information need.

    For example, Grover Washington’s “Just the Two of Us” might be similar for one person to Chris de Burgh’s “Lady in Red”, because they’re both slow, romantic songs. However, this very same Grover Washington song also might be similar for another person to Enrique Iglesias’ “Bailamos”, because you can do a samba to both pieces. So there is not, nor should there be, ground truth. It all boils down to user information need. If my information need is “slow romantic”, then one song will match. If my need is “samba danceable”, then another song will match.

    And you will only be able to meet an information need like “samba danceable”, for the majority of songs, by looking at the actual rhythmic structure of the song. The song “Golden Sands” by Paul Weller is a fantastic cha cha, for example. But I challenge you to find this information through any textual metadata source.

    If you are only doing text-based retrieval of music, there are similarities you will never, ever find. Well, maybe you’ll get lucky, if you can find a web page that says as much, in text, and you can somehow mine this information. But there will never be enough web pages to tell you all the different ways in which people consider any two songs to be similar, along the hundreds of dimensions that two songs can be similar.

    So again, I very much agree that the best music search engines will be combining content…with text/metadata…with behavioral data. It is all needed. But if you are only doing text and/or behavioral matching, if you do not actually put the music into music by doing actual content analysis, you are not, I maintain, really doing music retrieval. You are doing text-retrieval, on a subset of music-related text data.

    And by selling us on “text only” music search, Google is really setting the bar quite low.

    FWIW, I could make the same sorta arguments with Google Video. It is not video retrieval. It is text retrieval on a subset of video-linked data. But, this post is already too long and flame-worthy. Apologies 😉 It’s just a subject I am pashernate about.
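The argument in the thread above — that the best engines blend content analysis with text metadata, and that which blend is “right” depends on the user’s information need — can be sketched in a few lines of code. This is purely illustrative: the song names come from the discussion, but every feature value, tag set, and weight below is a made-up assumption, not real acoustic data or anyone’s actual algorithm.

```python
import math

# Hypothetical feature vectors (normalized tempo, energy, samba-rhythm
# strength) and free-text tags. All numbers are invented for illustration.
SONGS = {
    "Just the Two of Us": {
        "acoustic": {"tempo": 0.40, "energy": 0.30, "rhythm_samba": 0.70},
        "tags": {"slow", "romantic", "r&b"},
    },
    "Lady in Red": {
        "acoustic": {"tempo": 0.35, "energy": 0.25, "rhythm_samba": 0.10},
        "tags": {"slow", "romantic", "ballad"},
    },
    "Bailamos": {
        "acoustic": {"tempo": 0.70, "energy": 0.80, "rhythm_samba": 0.90},
        "tags": {"latin", "dance", "pop"},
    },
}

def cosine(a, b):
    """Content-based similarity: cosine over acoustic feature dicts."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def jaccard(a, b):
    """Text-based similarity: overlap between tag sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def similarity(s1, s2, w_acoustic=0.5):
    """Blend content and text similarity. The weight encodes how much a
    given information need depends on the audio signal itself."""
    a = cosine(SONGS[s1]["acoustic"], SONGS[s2]["acoustic"])
    t = jaccard(SONGS[s1]["tags"], SONGS[s2]["tags"])
    return w_acoustic * a + (1.0 - w_acoustic) * t

# A "slow romantic" need can lean on tags; a "samba danceable" need must
# lean on acoustic content, since no tag here records the rhythm.
romantic = similarity("Just the Two of Us", "Lady in Red", w_acoustic=0.2)
danceable = similarity("Just the Two of Us", "Bailamos", w_acoustic=0.9)
```

With the tag-weighted need, “Lady in Red” outranks “Bailamos” for the Grover Washington song; with the acoustic-weighted need, the ranking flips — the same pair of comparisons the commenter describes, with no single ground-truth similarity.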

Leave a Reply

Your email address will not be published. Required fields are marked *