I’ve written before about my relationship with Foursquare, and I’m sure I will again. I’ve tweeted my complaint that the “friend” mechanism is poorly instrumented (in various ways), and I should note that this is certainly not just a Foursquare problem (more on “Friendstrimentation” shortly).
But today I wanted to build on my earlier post, “My Location Is a Box of Cereal,” and Think Out Loud a bit about what I’d really like to do on Foursquare: I’d like to check into a state of mind.
What do I mean by that?
Well, imagine that instead of checking into a physical location, as Foursquare mostly constrains me to do today, I check into the state of mind I might call “In the market for a car.” Or perhaps I check into “playing a great game of poker with my friends.” Or maybe I check into “pretty bummed out about the death of my cat.”
I think you get the point. The checkin is, as I’ve argued elsewhere, more than a declaration of where I am. It’s also a declaration of my state of mind, as well as my openness to a response from someone who might provide me with value.
In short, the checkin is a search, waiting for a response. And there’s no reason to constrain that search query to location.
What matters is that as users of this particular brand of search, we get good results. And the jury is still out on that one, at least to date.
Here’s what I’d like to have happen when I check in to the state of mind I’ll call “In the market for a car.” This is a commercial checkin, of course, and I’d be well aware of that when I checked in. So what might I expect?
First, the ecosystem of businesses eager to sell me a car becomes aware of my status, and is prepared to respond in an instrumented fashion. I use the word “instrumented” deliberately here – the last thing I want is a bunch of spam results – pointless, irrelevant come-ons for brands or models in which I most likely have no interest. If that’s what I wanted, I’d just use a search engine. After all, most of search is instrumented against my query, and my query alone. On a service like Foursquare, I’d expect the response to be far more nuanced.
How? Well, I’ve given Foursquare permission to use my Facebook social graph, for one, and my Twitter interest graph, for another. So when I check into Foursquare, I’d expect a response that understands who I am, who I know, what my interests are, and how I compare, as a cohort, to others like me, who may have also in the past checked into a similar “state of mind.”
Add even more social and interest data to the mix, and you can see how this starts to get pretty interesting.
I’d expect a response that 1. knows who I am and is personalized in a meaningful way, 2. surprises or delights me with an offer of value relevant to my search, and 3. respects the fact that I might not be ready to act, at least not yet.
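To make those three expectations concrete, here’s a minimal sketch of what a “state of mind” checkin might look like as a data structure, and how a responder might rank an offer against the user’s shared graphs rather than the query alone. Everything here – the fields, the names, the weighting – is illustrative, not any real Foursquare API.

```python
from dataclasses import dataclass, field

@dataclass
class Checkin:
    """A hypothetical 'state of mind' checkin, carrying the graphs the user has shared."""
    user: str
    state_of_mind: str                                # e.g. "in the market for a car"
    social_graph: set = field(default_factory=set)    # friends (say, from Facebook)
    interest_graph: set = field(default_factory=set)  # topics followed (say, from Twitter)
    ready_to_act: bool = False                        # respect that the user may not buy yet

def score_offer(checkin, offer_topics, offer_friends):
    """Rank a responder's offer by overlap with the user's graphs, not just the query."""
    interest_overlap = len(checkin.interest_graph & offer_topics)
    social_overlap = len(checkin.social_graph & offer_friends)  # friends tied to this brand
    return interest_overlap + 2 * social_overlap  # weighting is illustrative only

c = Checkin("jbat", "in the market for a car",
            social_graph={"alice", "bob"}, interest_graph={"hybrids", "design"})
print(score_offer(c, {"hybrids"}, {"alice"}))  # 1 interest match + 2 * 1 social match = 3
```

The point of the sketch is simply that a social-overlap term outweighing a raw topical match is what separates this kind of response from a plain search result.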
Organizing all this data and response isn’t an easy task. But then again, neither was building out the infrastructure we currently understand to be search. Once the checkin is loosed from the chains of pure location, the potential for connecting to customers in conversation at scale, and at an intimate level, is far too great for this use case to not exist.
A final thought on Foursquare, since I’m on about it. I really wish it was easier to create temporary or unique “venues” or states of mind. For example, last night about 125 folks came to the Web 2 dinner at a local SF restaurant. Many of them “checked into” the actual restaurant, but wouldn’t it have been a lot more fun if, when they came and fired up Foursquare, they saw a new “venue” that had been created, perhaps by the first person there, or perhaps by the organizer, called “The Web 2 Premiere Dinner”? And further, wouldn’t it be cool if the organizer, sponsor, or anyone else involved in the dinner could attach some kind of value to folks who might check in?
Now sure, I know you can create a new venue on the fly, and many do (I saw a pal who checked into “The Dog House” a while back, because he did something that upset his wife. I loved that). But the process to do so is awkward and difficult at best. Foursquare can and should encourage such behavior, and provide resources for us to intelligently curate the results.
Doing so would be a big step toward an ecosystem of search driven by the equivalent of a “social query” – one expressing a state of mind as much as a location. And when the two connect, well, so much the better (read The Gap Scenario for more on that).
OK, back to work, all.
Last weekend the news was conjecture about Facebook doing web search; today, the news is conjecture about Google doing social networks. All of this has been sparked by two well-known Valley guys opining on the subject: Kevin Rose, CEO of Digg, tweeted that Google was working on a “Google Me” social network (he has since been “asked to take down his tweet” by someone…), and then a former Facebook employee answered a related question on his own Q&A service, Quora.
Let’s not get ahead of ourselves here, folks. I certainly don’t find it the least bit surprising that Google is continuing its push into social – let’s not forget, the company recently launched Buzz, which qualifies as a major social network, already owns Orkut, which also qualifies, and has added social features to its core search service – including Google Profiles and social search functionalities.
Pulling these together into a seamless, useful, and coordinated product just makes sense. It’s to be expected. And it’s badly needed, because none of these disparate features or products have found their own footing.
The real question is whether Google has the corporate will to call a spade a spade, and acknowledge publicly that it’s game on with Facebook. Often companies attempt to pretend they’re not really in a competitor’s business. It’s rather hard to defend such a position now. I say, go for it, Google. If the product is good, the traction will be there.
Google today announced another step in its protracted divorce from China – to satisfy regulatory and license requirements, it’s no longer directly serving results from its Hong Kong based (and uncensored) engine onto its Google.cn site. Instead, it’s directing users to the Hong Kong site, in essence, creating one more click for users to go through before accessing its service.
And there’s no certainty that service will be allowed inside China, as the regime is clearly not pleased with Google’s failure to roll over. Google’s license to do business inside the country apparently expires tomorrow. This move was clearly intended to convince China that Google is living by the letter of Chinese law. I’m not sure that matters, and it may affect Google’s other businesses – Maps, for example.
Meanwhile, Google’s main competition, Baidu, which as a homegrown company has no such issues, has gained market share at Google’s expense. CEO Robin Li will be at Web 2 this Fall, a rare appearance and one certain to be newsworthy.
Here’s Google’s blog post.
Thanks to Andy at Beet for asking. My post earlier here goes into far more detail. I do look rather querulous, do I not? It must have been the sun.
…you just have to rethink what “search” really means. Last night Jobs said he had no interest in search. I am quite certain what he meant is he has no interest in HTML, “traditional” search. But think about what search really is, and I am certain, Apple will be in the search business.
Why? Well, as I said in the last post on the iPad (and rather hurriedly, and entirely my fault, poorly communicated to many of those who left comments), it’s all about the link. Perhaps I should have said, it’s all about the signal.
Let’s think about the analogies between search and the web as we knew it, and apps and the app platform that Apple controls, as we know it. Last night Jobs said that we’ve never before seen such an explosion of apps as we’ve witnessed on the iPhone platform – 200,000 and counting, up to 20K new ones a week.
That’s true, never before have so many developers created mobile phone apps in such abundance. But think back to the last great platform where hundreds of thousands created value by making new services, content, and places where consumers might interact: yep, that’d be the web. A website is an app. And the platform of the web – it’s open. Anyone can build on it. And anyone can create signals from their “app” to another “app” – a link from one site to another. And anyone can share any data from any site to another site, or mash up those data streams to create entirely new kinds of sites. Yep, it was rather a free-for-all, but over the past 15 or so years business rules have emerged, social norms have developed, an ecosystem has flourished.
Take yourself back to the early days of the web – just as now we are in the early days of what I’ve called before, and will call here, AppWorld.
Remember what a mess it was? How much noise there was, and how precious little signal? And what application emerged that found that precious signal, made sense of it, and helped us find our way? Yep, it was search, and the signal was the link, interpreted, of course, through PageRank and ultimately hundreds of other sub signals (click through, freshness, decay, etc.)
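For readers who want the mechanics behind that claim: PageRank treats each link as a vote, propagated iteratively across the graph. Here’s a minimal power-iteration sketch – a toy, not Google’s production algorithm, which layers on the hundreds of sub-signals mentioned above.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal power-iteration PageRank.
    `links` maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # every page keeps a base share (the "teleport" term) ...
        new = {p: (1 - damping) / len(pages) for p in pages}
        # ... plus a share of the rank of every page linking to it
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Three tiny "sites": a and b both link to c, c links back to a.
ranks = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
print(max(ranks, key=ranks.get))  # c, the most-linked-to page, wins
```

The signal is nothing more than the link structure itself; the iteration just turns raw links into a stable ordering.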
Now, think of AppWorld. Where’s the signal? Short answer is, we don’t have one. Yet.
The beauty of the link was that it became a proxy for engagement. It was where consumers were declaring their intent – signaling what they wanted from the web. That signal became the basis for a massive marketing economy. Google ascended. And content models were turned upside down (much to my delight at FM, I will admit).
So then, what is the proxy for engagement in AppWorld? Before you argue that “we don’t need one,” let’s not forget Jobs’ stated goal of getting into advertising so as to give his legions of developers a business model, to reward them for creating value on Apple’s platform. That’s the whole reason he’s creating iAds, he declared last night. To get his developers paid. “We won’t be making very much money on advertising,” he said. (Let’s watch and see…)
Well, if marketers are going to find value in AppWorld, they’re going to need a proxy for engagement, a trail of breadcrumbs, some signal(s) that show where consumers are, what they are doing, and ideally, predict what they might do next. And we as consumers also need this trail – we need smart navigation tools to figure out which apps to use, which apps our friends recommend, and how best to navigate the apps we are using. It was easy when there were just a few apps. Now there are hundreds of thousands. Soon there will be millions. Don’t tell me a Google-like metadata play isn’t going to evolve inside such an ecosystem. After all, search did all those things for the web. But so far, we don’t have a similar signal for AppWorld.
But we will. The data is already there. It’s the data we all create when we interact with apps – when we jump from one to another, when we navigate within pages, when we execute a command in an app and then ask that app to store that execution “up in the cloud” also known as the web. And as far as I can tell (Apple won’t answer questions on this) it’s that data which, if shared with others besides the developer and Apple, Apple then labels “third party” and forbids (based on a smokescreen of privacy issues, which I believe can and must be addressed).
I believe such a policy cannot stand, because it will create a fragile ecosystem devoid of feedback loops and external innovation. No matter, whether or not Apple allows third parties to consume AppWorld data, Apple will do search. It won’t be search as we understand it on the web, but it’ll be search for AppWorld, and if done right, it will be extremely profitable.
**Dashed off as I am running to lunch at D….will update soon…**
When Bing launched, I framed the new service from Microsoft as an important step in the evolution of search:
I actually don’t think Microsoft is trying to out-Google Google with Bing. I think it’s trying to build a different kind of search application, one that sits on top of commodity search and helps people make decisions in a new way. Done right, this totally breaks the AdWords model that has driven search so far. To me, that is a very big step in a new direction, and one that Google cannot afford to make.
Today Google has decided it can’t afford NOT to make this step, at least somewhat. The company has decided to create a left hand nav bar that pushes the service toward search as an app.
Now, when I mentioned that idea in a briefing yesterday, the Google rep I spoke to wasn’t eager to confirm the concept, but to my mind, this is exactly what’s going on. Bing (and Ask before it) has built a service on top of commodity search results, one that does not require you to go back and forth, back and forth, but rather lets you instrument your search session using intelligent, persistent navigation. This is exactly what Google’s new UI lets you do.
The real question, of course, comes down to money. Does this mean fewer clicks on paid ads for Google? I asked that question, and the response was telling: I’m paraphrasing, but in essence Google told me “we’ve found that this new approach increases the chance that users will find the information they are looking for.” And in Google’s parlance, ads are information.
Of course Google would never roll out such a significant UI update without rigorously testing the impact on AdWords clicks, and indeed Google confirmed to me that this is the most tested UI change Google’s ever made. In fact, the left nav bar has been seen in the wild for several years.
What’s on the bar is worth grokking as well. First, “Web” has been replaced with “Everything.” That’s pretty meta – maybe we should change the name of the Web 2.0 Summit to the Everything 2.0 Summit – but I digress. Second, what is on the bar changes based on your search in real time. And one of the options is “Updates” – Google’s way of incorporating Facebook, Twitter and other real-time data. A “Something Different” link gives you related searches, among many other new or consolidated features on the left nav. A full overview can be found at SEL.
Google told me that the actual underlying results – both organic SERPs as well as the ads that accompany them – have not changed. This is a new skin over Google’s results, not a shift in how those results are determined. That’s important, but not entirely the story.
The story is that this shift will change how we interact with Google, what our search query stream looks like, and therefore, what kind of SERPs and ads will be produced. I am certain Google has modeled this shift, and equally certain the company believes this change will impact their bottom line in a positive way. Of course, the company could be mistaken. Only future quarterly results will prove whether or not Google got it right.
What do you all make of the changes?
From the real time search service’s blog post:
Until today, we’ve been indexing the links shared on Twitter, MySpace, Digg, Delicious and by our own OneRiot panel to help determine our search results. Now, with the addition of Facebook data, OneRiot delivers search results that reflect the pulse of a much, much wider social web.
Also, the service seems a bit wary of what might come of all this:
Now, of course, we’re only showing (indeed, only have access to) data that has been shared publicly by Facebook users. A user can restrict the visibility of these Likes on their Facebook profile. However, we’d be sidestepping the issue if we didn’t recognize that some users might be concerned that stuff they have shared on Facebook can now pop up on services like ours. Given that, we are rolling out this feature as a very limited bucket test today to assess users’ reactions and gather feedback. We love the new feature. And if users do too then we’ll roll it out to everyone at an appropriate speed.
As well they should. The service can be found here.
This is very interesting news, but not unexpected if you’ve been paying attention. Note in the past I’ve predicted that Apple will not do web search, but will do “app search,” because app search is essentially broken, if you can even call it search to begin with. It’s more like directory navigation at this point.
What Apple needs is a search engine that “crawls” apps, app content, and app usage data, then surfaces recommendations as well as content. To do this, mobile apps will need to make their content available for Apple to crawl. And why wouldn’t you if you’re Yelp, for example? Or Facebook, for that matter? An index of apps+social signal+app content would be quite compelling. What Apple will NOT do is crawl the entire web.
Look at the valuable information that you can extract from how any one of us interacts with a well-designed application, then create a dataset from that. Say I use the New York Transit application to navigate my way through New York for three or four days… all of the questions and back-and-forth that I use that app for, which is essentially a structured search session – right? Now, match that against a set of data which is the transit map. I say, “I need to go over here. I want to go over there. I prefer this route over that route” – that becomes a dataset that should inform other searches that I’m making on things that seemingly are unrelated but may not be. That should be available as metadata for future searches. And figuring out how to inform that is as important as parsing the line or the spoken phrase that I’m making in the moment.
Now, if I take that spoken phrase and go and search for “Chicago rental car” four months after interacting with that New York Transit map application, how can we take the metadata from that interaction with New York and use it to inform the appropriate response in Chicago? Perhaps the best suggestion would be: “Hey, you know what? You don’t need to rent a car. You can use the Chicago Transit. Here’s an app for it. You can get from the airport to everywhere you want to go without having to rent a car. Plus, you’ll save $150, which we know is a goal of yours because you’ve been interacting with the Mint application and it said that a goal of yours is that you want to save $200 a month, and here’s a way that you do that.”
Tying all that together, that’s the Holy Grail because then it starts to understand you. If you only parse just the query, even if you get the natural language right and the intent right, you’re missing the whole person.
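The transit-to-rental-car scenario above can be sketched in a few lines: record facts gleaned from earlier app sessions, then consult them when a later, seemingly unrelated query arrives. The app names, fact strings, and suggestion text are all made up for illustration; nothing here reflects a real Apple or app-vendor API.

```python
# Hypothetical cross-app metadata store, per the transit/rental-car scenario above.
history = []  # (app, fact) pairs gleaned from prior structured sessions

def record(app, fact):
    """Store one fact learned from an app interaction session."""
    history.append((app, fact))

def respond(query):
    """Answer a query, consulting stored metadata for cross-app suggestions."""
    suggestions = []
    if "rental car" in query and any(app == "transit" for app, _ in history):
        suggestions.append("You've navigated by transit before - try the local transit app.")
        # The savings nudge only makes sense alongside the skip-the-rental suggestion.
        if any(fact == "wants_to_save_money" for _, fact in history):
            suggestions.append("Skipping the rental also serves your stated savings goal.")
    return suggestions

record("transit", "prefers_transit_routes")  # from the New York Transit sessions
record("mint", "wants_to_save_money")        # from the budgeting app
print(respond("Chicago rental car"))         # both suggestions fire
```

The “Holy Grail” part is that the second suggestion depends on metadata from an app the current query never mentions.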
It’s now clear to me that Apple is very serious about being the Google of the post-HTML, app-driven Internet. But so is Google, so is Microsoft, and there are certainly going to be other players to boot. (Er…Like HP, which just bought Palm and plans on “doubling down” on the Web OS.) Game on.
TC broke the news today that Tynt, a search interception and user behavior data company, got a big round of funding from Panorama Capital, which is also an investor in FM. I’ve installed the Tynt service on Searchblog and I’d like to get your response. I think what the service does is quite clever and useful both to publishers and users. However, it does create a new user experience for those of us who cut and paste on sites, and I’m interested in whether folks find the new approach worthy.
The service works like this: when you copy a snippet of text from a site with Tynt, you’ll see that Tynt appends a unique URL to the pasted text (for example, see the graphic below, where I’ve copied and pasted a snippet from a Searchblog post into an email).
This URL both redirects readers back to the location from which the snippet was copied, and notifies the Tynt service of the action taken. This gives Tynt a database of user behavior – a signal of intention – that could become quite valuable. At scale, this means Tynt can, for example, build a Digg-like view of the web – without ever having to create a Digg. It all works based on behavior most readers engage in all the time anyway.
This data is also surfaced to publishers, which can help them improve their editorial and user experience, among other things.
Tynt also has a pop-up service (see graphic below from TC – I have not implemented this yet, and until recently the company did not disclose this service publicly) that identifies when certain cut-and-pasted text is likely to be a signal of search intent. This is based on examining the string of words that is copied. Short phrases – of a few words, for example – usually mean the reader is doing a search: they are cutting and pasting the text into a search bar or another search browser window.
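In the spirit of that description, the short-phrase-equals-search-intent heuristic can be sketched in a few lines. The word-count threshold here is my assumption for illustration, not Tynt’s actual rule.

```python
def looks_like_search_intent(copied_text, max_words=4):
    """Guess whether copied text signals a lookup rather than a quotation.
    Short copied phrases tend to be search queries; long passages tend to
    be material being quoted. The 4-word threshold is an assumption."""
    words = copied_text.split()
    return 0 < len(words) <= max_words

print(looks_like_search_intent("PageRank"))                          # True: likely a search
print(looks_like_search_intent("a long passage copied to quote it")) # False: likely a quote
```

A real implementation would presumably blend in more signals than length alone, but length is the one the post describes.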
Think about that behavior – probably something you do a lot (I certainly do). What happens? Well, you are reading a story, and come across a term or word you don’t understand, or want to research more. You highlight it, copy the text, go to the search bar (or open another tab with Google in it), paste it in, and find yourself on another page (the Google search page).
Who wins in this scenario? Well, usually Google does (or whichever search engine is used). They get the search, and the probable revenue from that search (as we know, many folks click on paid search links!).
Who loses? Well, the publisher, because some number of folks who execute this behavior will leave the publisher’s page and never return. And the publisher never sees that search revenue either, even though it was the publisher which sparked the search intention in the first place. One could argue that the user loses as well, because they often run off into a back and forth search game that distracts them from their initial focus on the article they were reading.
Tynt changes this game, in that it both keeps the reader on the page, and intercepts the search behavior (and potential revenue). This I find quite interesting (as does Google, I am sure, and Bing, which I bet would love to power those Tynt searches which otherwise might go to Google…). For its major partners, Tynt splits revenues with publishers, bypassing the search engines. The company already has deals in place with scores of major publishers representing billions of page views a month. It claims to be doing 100 million searches. That makes it a player, one the major engines will have to deal with.
One to watch.