Future of Search Archives | John Battelle's Search Blog

Thinking Out Loud About Voice Search: What’s the Business Model?

By - February 10, 2012

I don’t have Siri yet – I’m still using my “old” iPhone 4. But I do have my hands on a new (unboxed) Nexus, which has Google Voice Actions on it, and I’m sure at some point I’ll get an iPhone 4S. So this post isn’t written from experience as much as it’s pure speculation, or as I like to call it, Thinking Out Loud.

But driving into work yesterday I realized how useful voice search is going to be to me, once I’ve got it installed. Stuck in traffic, I tried searching for alternate routes, and it struck me how much easier it’d be to just say “give me alternate routes.” That got me thinking about all manner of things – many of which are now possible – “Text my wife I’ll be late,” “Email my assistant and ask her to print the files for my 11 am meeting,” “Find me a good liquor store within a mile of here,” (I’ve actually done that one using Siri on my way to a friend’s house last weekend).

I’ve written about this before, of course (see Texting Is Stupid, for one example from over three years ago), and I predicted in 2011 that voice was going to be a game changer. It clearly is, but now my question is this: What’s the business model?

I hate to pick on Google, but it’s worth asking the question, given how it dominates mobile search: What happens to the AdWords business model when a large percentage of mobile searches are done using voice? Given we don’t look at our screens while using voice commands (pretty much the whole point, no?), how will Google make money from voice search?

It’s an interesting question, but not for Apple – Apple doesn’t make money through search ads, so it can give voice search away for free, and use it as a benefit of buying and using the hardware device (which is where Apple makes its coin, after all). And from what I can tell, Apple uses Yahoo, Wolfram, Yelp and others to populate Siri’s search answers, not Google. I’m sure there’s a direct reason for that: Google probably wanted some kind of fee from Apple, and I’m guessing Apple had little interest in paying. (I also don’t know whether Apple is paying Yahoo, Wolfram or Yelp – if any of you do, please let me know…)

Now, Google does have one model in market that could translate to money in voice search – what it calls “Click to Call.” This is the ability for businesses to integrate direct phone calling into their mobile ads. I don’t know if that model is integrated into Voice Actions, but I’d be surprised if it didn’t show up soon (I can imagine Google’s version of Siri asking “Would you like to call this business now?”). And while that should prove a decent revenue stream, it won’t cover the majority of voice searches. And Google isn’t a company that likes to give away search without a monetization strategy.

What do you think such a strategy might be? Could we even imagine the return of “paid inclusion” – where voice search results are returned based on who pays to be part of the results? Sounds far-fetched, but at the right scale, it could work.

I’ve not done much thinking about this, but I bet some of you have. What do you say?


Larry Page’s “Tidal Wave Moment”?

By - February 07, 2012

Who remembers the moment, back in 1995, when Bill Gates wrote his famous Internet Tidal Wave Memo? In it he rallied his entire organization to the cause of the Internet, calling the new platform an existential threat/opportunity for Microsoft’s entire business. In the memo Gates wrote:

“I assign the Internet the highest level of importance. In this memo I want to make clear that our focus on the Internet is crucial to every part of our business. The Internet is the most important single development to come along since the IBM PC was introduced in 1981.”

The memo runs more than 5300 words and includes highly detailed product plans across all of Microsoft. In retrospect, it probably wasn’t a genius move to be so transparent – the memo became public during the US Dept. of Justice action against Microsoft in the late 1990s.

It strikes me that Larry Page at Google could have written such a memo to all Googlers last year. Of course, Page and his advisors must have learned from Microsoft’s mistakes, and certainly don’t want a declarative memo floating around the vast clouds of Internet eternity. Bad things can happen from direct mandates such as those made by Gates – in the memo he mentions that Microsoft must “match and beat” Netscape, for example, words that came back to haunt him during the DOJ action.

Here’s what Page might have written to his staff in 2011, with just a few words shifted:

“I assign social networking the highest level of importance. In this memo I want to make clear that our focus on social networking is crucial to every part of our business. Social networking is the most important single development to come along since Google was introduced in 1998.”

I very much doubt Page wrote anywhere that Google must “match and beat” Facebook. And unlike Gates, he probably did not pen detailed memos about integrating Google+ into all of Google’s products (as Gates did – for pages – declaring that Microsoft must integrate the Internet into all of its core products.)

But it’s certainly not lost on any Googler how important “social” is to the company: all of their bonuses were tied to social last year.

So why am I bringing this up now? Well, I’ve got no news hook. I’m just doing research for the book, and came across the memo, and its tone and urgency struck a familiar note. The furor around Search Plus Your World has died down, but it left a bad taste in a lot of folks’ mouths. But put in the context of “existential threat,” it’s easier to understand why Google did what it did.

Unlike the Internet, which was a freely accessible resource that any company could incorporate into its products and services, to date “social” has been dominated by one company, a company that Google has been unable to work with. Imagine if, when Gates wrote his Tidal Wave memo, the “Internet” he spoke of was controlled entirely by, say, MCI, and that Microsoft was unable to secure a deal to get all that Internet goodness into its future products.

That seems to be where Google finds itself, at least by its own reckoning. To continue being a great search engine, it needs the identity and relationship data found, for the most part, behind Facebook’s walls.

I’ve written elsewhere about the breakdown of the open web, the move toward more “walled gardens of data,” and what that does to Google’s ability to execute its core business of search. And it’s not just social – readers have sent me tons of information that predict how mobile, in particular, will escape the traditional reaches of Google’s spidering business model. I hope to pore through that information and post more here, but for now, it’s worth reading a bit of history to put Google’s moves into broader context.

Our Google+ Conundrum

By - January 14, 2012

I’m going to add another Saturday morning sketch to this site, and offer a caveat to you all: I’ve not bounced this idea off many folks, and the seed of it comes from a source who is unreservedly biased about all this. But I thought this worth airing out, so here you have it.

Given that Google+ results are dominating so many SERPs these days, Google is clearly leveraging its power in search to build up Google+. Unless a majority of people start turning SPYW (Search Plus Your World) off, or decide to search in a logged out way, Google has positioned Google+ as a sort of “mini Internet,” a place where you can find results for a large percentage of your queries. (My source is pretty direct about this: “Google has decided that beating Facebook is worth selling their soul.”)

But to my point: consider the search I did this morning for that Hitler video I posted. Here’s a screenshot of my results:

 

As you can see, the Universal search feature kicked in, and put News results at the top. I know that news results won’t get me straight to the video – I want the YouTube or Vimeo page, not a story about the video. So I look to the results below. The next four results are from Google+. Right below the fold is the actual YouTube video. I didn’t see it at first blush.

So I found that video by clicking on someone’s Google+ post about it (see how the first one is purple, and not blue? That’s the one I clicked on). Some dude I don’t know posted it to Google+, I clicked through to his post (gaining Google another pageview), then clicked through the video to YouTube. That’s lame. That’s not a Googley search experience.

But if that’s how the world of Google works now, that means it’s very important that you tend your Google+ pages, so that you rank well in Google search. Google has pretty much gamed its own search engine to ensure Google+ will succeed.

This is what happens when you tell your entire staff that your salary depends on winning in social. 

Now, this presents us all a conundrum. If a large percentage of people are logged into Google and/or Google+ when they are searching for stuff, that means Google+ pages are going to rank well for those people. Hence, I really have no choice but to play Google’s game, and tend to my Google+ page, be I a brand, a person, a small business…. are you getting the picture here? If you decide to NOT play on Google+, you will, in essence, be devalued in Google search, at least for the percentage of people who are logged in whilst using Google.

I dunno. This strikes me as wrong. I’ve spent nearly ten years building this site, Searchblog, and it has tens of thousands of inbound links, six thousand posts, nearly 30,000 comments, etc., etc. But if you are logged into Google+ and search for me, you’re going to get my Google+ profile first.

Seems a bit off. Seems like Google is taking the first click away from me and directing it to a Google service.

Now, if I decide to protest this, and delete my Google+ account, I better pray no one else named John Battelle creates a Google+ account, or they will rank ahead of me. And while Battelle is a pretty unique name, there are actually quite a few of us out there. Imagine if my name was John Kelly? Or Joe Smith?

Yikes. Quite a conundrum.

Again, just sketching on a Saturday morning. It’s a beautiful day, so I think I’ll stop, take a ride, and think a bit more about it before I write anymore.

Related:

It’s Not About Search Anymore, It’s About Deals

Hitler Is Pissed About Google+

Google Responds: No, That’s Not How Facebook Deal Went Down (Oh, And I Say: The Search Paradigm Is Broken)

Compete To Death, or Cooperate to Compete?

Twitter Statement on Google+ Integration with Google Search

Search, Plus Your World, As Long As It’s Our World

 

Google Responds: No, That’s Not How Facebook Deal Went Down (Oh, And I Say: The Search Paradigm Is Broken)

By - January 13, 2012

I’ve just been sent an official response from Google to the updated version of my story posted yesterday (Compete To Death, or Cooperate to Compete?). In that story, I reported about 2009 negotiations over incorporation of Facebook data into Google search. I quoted a source familiar with the negotiations on the Facebook side, who told me “Senior executives at Google insisted that for technical reasons all information would need to be public and available to all,” and “The only reason Facebook has a Bing integration and not a Google integration is that Bing agreed to terms for protecting user privacy that Google would not.”

I’ve now had conversations with a source familiar with Google’s side of the story, and to say the company disagrees with how Facebook characterized the negotiations is to put it mildly. I’ve also spoken to my Facebook source, who has clarified some nuance as well. To get started, here’s the official, on the record statement, from Rachel Whetstone, SVP Global Communications and Public Affairs:

“We want to set the record straight. In 2009, we were negotiating with Facebook over access to its data, as has been reported. To claim that we couldn’t reach an agreement because Google wanted to make private data publicly available is simply untrue.”

My source familiar with Google’s side of the story goes further, and gave me more detail on why the deal went south, at least from Google’s point of view. According to this source, as part of the deal terms Facebook insisted that Google agree to not use publicly available Facebook information to build out a “social service.” The two sides had already agreed that Google would not use Facebook’s firehose (or private) data to build such a service, my source says.

So what does “publicly available” mean? Well, that’d be Facebook pages that any search engine can crawl – information on Facebook that people *want* search engines to know about. This is compared to the firehose data that was the core asset being discussed between the parties. This firehose data is what Google would need in order to surface personal Facebook pages relevant to you in the context of a search query. (So, for example, if you were my friend on Facebook, and you searched for “Battelle soccer” on Google, then with the proposed deal, you’d see pictures of my kids’ soccer games that I had posted to Facebook).

Apparently, Google believed that Facebook’s demand around public information could be interpreted as applying to how Google’s own search service was delivered, not to mention how it (or other products) might evolve. Interpretation is always where the devil is in these deals. Who’s to say, after all, that Google’s “social search” is not a “social service”? And Google Pages, Maps, etc. – those are arguably social in nature, or will be in the future.

Google balked at this language, and the deal fell apart. My Google source also disputes the claim that Google balked at being able to technically separate public from private data. Conversely, my Facebook source counters that the real issue of public vs. private had to do with Google’s refusal to honor changes in privacy settings over time – for example, if I deleted those soccer pictures, they should also be deleted from Google’s index. There’s a point where this all devolves to she said/he said, because the deal never happened, and to be honest, there are larger points to make.

So let’s start with this: If Facebook indeed demanded that Google not use publicly available Facebook data, it’s certainly understandable why Google wouldn’t agree to the deal. It may not seem obvious, but there are an awful lot of publicly available Facebook pages and data out there. Starbucks, for example, is more than happy to let anyone see its Facebook page, whether or not you’re logged in. And then there’s all that Facebook open graph data out on the public web – tons of sites show Facebook status updates, like counts and so on in a public fashion. In short, asking Google not to leverage that data in anything that might constitute a “social service” is anathema to a company whose stated mission is to crawl all publicly available information, organize it, and make it available.

It’s one thing to ask that Google not use Facebook’s own social graph and private data to build new social services – after all, the social graph is Facebook’s crown jewels. But it’s quite another thing to ask Google to ignore other public information completely.

From Google’s point of view, Facebook was crippling future products and services that Google might create – tantamount to an insurance policy of sorts that Google wouldn’t become a strong competitor, at least not one that leverages public information from Facebook. Google balked. If Facebook’s demand could have been interpreted as also applying to Google’s search results, well, that’s a stone cold deal killer.

I certainly understand why Facebook might ask for what they did; it’s not crazy. Google might well have responded by narrowing the deal, saying “Fine, you don’t build a search engine, and we won’t build a social network. But we should have the right to create other kinds of social services.” As far as I know, Google didn’t choose to say that. (Microsoft apparently did). And I think I know why: The two companies realized they were dancing on the head of a pin. Search = social, social = search. They couldn’t figure out a way to tease the two apart. Microsoft has cast its lot with Facebook; Google, not so much.

When high stakes deals fall apart, both sides usually claim the other is at fault, and that certainly seems to be the case here. It’s also the case with the Twitter deal, which I’ve gotten a fair amount of new information about as well. I hope to dig into that in another post. For now, I want to pull back a second and comment on what I think is really going on here, at least from the perspective of a longer view.

Our Cherished Search Paradigm Is Broken (But We Will Fix It….Eventually)

I think what we have here is a clear indication that the search paradigm we’ve operated under for a decade or so is broken. That paradigm stems from Google’s original letter to shareholders in 2004. Remember this line? “Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating.”

In many cases, it’s simply naive to claim Google is unbiased or objective. Google often favors its own properties over others, as Danny points out in Real-Life Examples Of How Google’s “Search Plus” Pushes Google+ Over Relevancy and others have also detailed. But there is a reason: if you’re going to show results from all other possible contenders, replete with their associated UI and functional bells and whistles (as Google does with its own Maps, Pages, Plus etc.), well, it’s nearly impossible now to determine which service is the right answer to a particular person’s query. Not to mention, you need to put a deal in place to get all the functionality of the service. Instead, Google has opted, in many cases, to go with their own stuff.

This is not a new idea, by the way. Yahoo’s been doing it this way from the beginning. The contentious issue is that biasing some results toward Google’s own products runs counter to Google’s founding philosophy.

I have a theory as to why all this is happening, and I don’t entirely blame Google. Back when search wasn’t personalized, Google could defensibly say that one service was better than another because it got more traffic, was linked to more (better PageRank), and so on. Back when everyone got the same results and the web was one homogenous glob of HTML, well, you could claim “this is the best result for the general population.” But personalized search has broken that framework – I lamented this back in 2008 with this post: Search Was Our Social Glue. But That Is Dissolving (more here).

With the rise of Facebook and the app economy, the problem of search has become terribly complicated. If you want to have results from Facebook in your search, well, that search service has to do a deal with Facebook. But what if you want results from your running app (I have hundreds of rides and runs logged on AllSportGPS, for example)? Or Instagram? Or Path, for that matter? Do they all have to do deals with Google and Bing? There are so many unconnected pieces of the Internet now (millions of apps, most of our own Facebook experiences, etc. etc.) that what’s a good personal result for one person is not necessarily good for another. If Google is to stay true to its original mission, it needs a new framework and a massive number of new signals – new glue – to put the pieces back together.

There are several ways to resolve this, and in another post, I hope to explore them (one of them, of course, is simply that everyone should just go through Facebook. That’s the vision of Open Graph). But for now, I’m just going to say this: The issues raised by this kerfuffle are far larger than Google vs. Facebook, or Google vs. Twitter. We are in the midst of a major search paradigm shift, and there will be far more tears before it gets resolved. But resolve it must, and resolve it will.

Whisperings of the Future Surround Us

By - November 17, 2011

Yesterday I met with Christopher Ahlberg, the PhD co-founder of Recorded Future, a company I noted in these pages back in mid-2010. Ahlberg is one of those rare birds you just know is making stuff that matters – a scientist, an entrepreneur, a tinkerer, and an enthusiast all wrapped into one.

He ran me through Recorded Future’s technology and business model, and I found it impressive. In fact, I’m hoping I can employ it somehow in my book research. And that conditional “hoping” is the main problem I have with Ahlberg’s creation – it’s a rather complicated system to use. Then again, what of worth isn’t, I suppose?

Recorded Future is, at its core, a semantic search engine that consumes tens of thousands of structured information feeds as its “crawl.” It then parses this corpus for several core assets: Entities, Events, and Time (or Dates). Recorded Future’s algorithms are particularly adept at identifying and isolating these items, then correlating them at scale. If that sounds simple, it ain’t.

The service then employs a relatively complicated query structure that allows you to project the road ahead for your question. For example, you might choose “Amazon” as your entity, and then set your timeframe for events involving Amazon over the past two months and into the next two months. Recorded Future will analyze its sources (SEC filings, blogs, news sites, etc) and create a timeline-like “map” of things that have happened and are predicted to happen with regard to Amazon over the next eight weeks. You can further refine a search by adding other entities or events (“earnings” or “CEO”, for example).
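To make that query structure concrete, here’s a rough sketch in Python of how the Amazon example might look if you expressed it yourself over a parsed feed of entities, events, and dates. To be clear, this is my own illustration under assumed field names – it is not Recorded Future’s actual API, and the sample events are invented.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Event:
    entity: str       # e.g. "Amazon"
    event_type: str   # e.g. "earnings", "CEO"
    when: date        # date the event happened, or is predicted to happen
    source: str       # e.g. "SEC filing", "blog", "news site"

def timeline(events, entity, event_types=None, window_days=60):
    """Return past and predicted events for an entity within +/- window_days of today."""
    today = date.today()
    lo, hi = today - timedelta(days=window_days), today + timedelta(days=window_days)
    hits = [e for e in events
            if e.entity == entity
            and lo <= e.when <= hi
            and (event_types is None or e.event_type in event_types)]
    return sorted(hits, key=lambda e: e.when)   # timeline-style ordering

# Invented sample data – in practice these records would come from parsed source feeds.
events = [
    Event("Amazon", "CEO", date.today() - timedelta(days=30), "blog"),
    Event("Amazon", "earnings", date.today() + timedelta(days=14), "news site"),
]
for e in timeline(events, "Amazon", event_types={"earnings", "CEO"}):
    print(e.when, e.event_type, e.source)
```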

How does it work? Well, turns out the Internet is rife with whisperings of the future – you just need to learn how to listen. That’s Recorded Future’s specialty. As you might imagine, Wall Street quants and government spooks just love this stuff. I’d imagine journalists would as well, but most of us are too strapped to afford the company’s services. Embedded below is a new feature of the site, a weekly overview of a news-related entity.

Recorded Future’s engine is not limited to the sources it currently consumes. Not only is Ahlberg adding more every month, his customers can add their own corpuses. Imagine throwing Wikileaks into Recorded Future, for example.

Perhaps the coolest aspect of the service is a visualization of how entities relate to each other over time. Ahlberg showed me a search for mobile patents, then toggled into a relationship graph. Guess what entity broke into the center of the graph, connected to nearly everything else? Yup – Motorola.

Did I mention that Google is an investor in Recorded Future?

As I said, I hope to start using the service soon, and perhaps posting my findings here.

On Location, Brand, and Enterprise

By - September 11, 2011

From time to time I have the honor of contributing to a content series underwritten by one of FM’s marketing partners. It’s been a while since I’ve done it, but I was pleased to be asked by HP to contribute to their Input Output site. I wrote on the impact of location – you know I’ve been on about this topic for nearly two years now. Here’s my piece. From it:

Given the public face of location services as seemingly lightweight consumer applications, it’s easy to dismiss their usefulness to business, in particular large enterprises. Don’t make that mistake. …

Location isn’t just about offering a deal when a customer is near a retail outlet. It’s about understanding the tapestry of data that customers create over time, as they move through space, ask questions of their environment, and engage in any number of ways with your stores, your channel, and your competitors. Thanks to those smartphones in their pockets, your customers are telling you what they want – explicitly and implicitly – and what they expect from you as a brand. Fail to listen (and respond) at your own peril.

More on the Input Output site.

More on Twitter's Great Opportunity/Problem

By - August 10, 2011

In the comments on this previous post, I promised I’d respond with another post, as my commenting system is archaic (something I’m fixing soon). The comments were varied and interesting, and fell into a few buckets. I also have a few more of my own thoughts to toss out there, given what I’ve heard from you all, as well as some thinking I’ve done in the past day or so.

First, a few of my own thoughts. I wrote the post quickly, but have been thinking about the signal to noise problem, and how solving it addresses Twitter’s advertising scale issues, for a long, long time. More than a year, in fact. I’m not sure why I finally got around to writing that piece on Friday, but I’m glad I did.

What I didn’t get into is some details about how massive the solving of this problem really is. Twitter is more than the sum of its 200 million tweets, it’s also a massive consumer of the web itself. Many of those tweets have within them URLs pointing to the “rest of the web” (an old figure put the percent at 25, I’d wager it’s higher now). Even if it were just 25%, that’s 50 million URLs a day to process, and growing. It’s a very important signal, but it means that Twitter is, in essence, also a web search engine, a directory, and a massive discovery engine. It’s not trivial to unpack, dedupe, analyze, contextualize, crawl, and digest 50 million URLs a day. But if Twitter is going to really exploit its potential, that’s exactly what it has to do.
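Even the first step of that pipeline – pulling links out of the tweet stream and collapsing duplicates – hints at the scale of the job. Here’s a minimal sketch; the tweet text and the normalization rules are my own assumptions, and a real system would also have to resolve shorteners like t.co before deduping.

```python
import re
from urllib.parse import urlsplit, urlunsplit

URL_RE = re.compile(r"https?://\S+")

def normalize(url: str) -> str:
    """Crude canonicalization: lowercase the host, drop query strings, fragments, trailing slashes."""
    parts = urlsplit(url.rstrip(").,"))
    return urlunsplit((parts.scheme, parts.netloc.lower(), parts.path.rstrip("/"), "", ""))

def unique_urls(tweets):
    """Extract and dedupe URLs across a batch of tweets."""
    seen = set()
    for text in tweets:
        for raw in URL_RE.findall(text):
            seen.add(normalize(raw))
    return seen

# Illustrative only – Twitter would be doing this across tens of millions of URLs a day.
tweets = [
    "Great read: http://Example.com/story?utm_source=tw",
    "RT great read http://example.com/story/",
]
print(unique_urls(tweets))   # both tweets collapse to one canonical URL
```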

The same is true of Twitter’s semantic challenge/opportunity. As I said in my last post, tweets express meaning. It’s not enough to “crawl” tweets for keywords and associate them with other related tweets. The point is to associate them based on meaning, intent, semantics, and – this is important – narrative continuity over time. No one that I know of does this at scale, yet. Twitter can and should.

Which gets me to all of your comments. I heard – in the written comments, on Twitter, and in extensive emails offline – from developers who are working on parts of the problems/opportunities I outlined in my initial post. And it’s true, there’s really quite a robust ecosystem out there. Trendspottr, OneRiot, Roundtable, Percolate, Evri, InfiniGraph, The Shared Web, Seesmic, Scoopit, Kosmix, Summify, and many others were mentioned to me. I am sure there are many more. But while I am certain Twitter not only benefits from its ecosystem of developers but actually *needs* them, I am not so sure any of them can or should solve this core issue for the company.

Several commenters noted this, as did Suamil: “Twitter’s firehose is licensed out to at least publicly disclosed 10 companies (my former employer Kosmix being one of them and Google/Bing being the others) and presumably now more people have their hands on it. Of course, those cos don’t see user passwords but have access to just about every other piece of data and can build, from a systems standpoint, just about everything Twitter can/could. No?”

Well, in fact, I don’t know about that. For one, I’m pretty sure Twitter isn’t going to export the growing database around how its advertising system interacts with the rest of Twitter, right? On “everything else,” I’d like to know for certain, but it strikes me that there’s got to be more data that Twitter holds back from the firehose. Data about the data, for example. I’m not sure, and I’d love a clear answer. Anyone have one? I suppose at this point I could ask the company….I’ll let you know if I find out anything. Let me know the same. And thanks for reading.

Book Review: In The Plex

By - April 20, 2011

Last night I had the pleasure of interviewing Steven Levy, an old colleague from Wired, on the subject of his new book: In The Plex: How Google Thinks, Works, and Shapes Our Lives. The venue was the Commonwealth Club in San Francisco, and I think they’ll have the audio link up soon.

Steven’s interview was a lot like his book – full of previously untold anecdotes and stories that rounded out pieces of Google’s history that many of us only dreamt of knowing about. When I was reporting my book, The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture, I had limited access to folks at Google, and *really* limited access to Larry Page and Sergey Brin. Levy had the opposite, spending more than two years inside the company and seeing any number of things that journalists would have killed to see in years past.

The result is a lively and very detailed piece of reporting about the inner workings of Google. But I was a bit disappointed with the book in that Steven didn’t take all that new knowledge and pull back to give us his own analysis of what it all meant. I asked him about this, and he said he made the conscious decision to not editorialize, but rather lay it all out there and let the reader draw his or her own conclusions. I respect that, but I also know Steven has really informed opinions, and I wish he’d give them to us.

What I took away from In the Plex was a renewed respect for the awesome size and scope of Google’s infrastructure, as well as its ambition. Sometimes we forget that Google is more likely than not the largest manufacturer of computers in the world, and runs the largest single instance of computing power in the world. It’s also one of the largest collectors and analyzers of data in the world. All of this has drawn serious scrutiny, but I don’t think even the regulators really grok how significant Google’s assets are. They should all read Steven’s book.

Levy only grazes the surface of Google’s social blindness, unfortunately, and due to timing could only mention Page’s ascendancy to CEO in his epilogue. But his reporting on how the China issue played out is captivating, as are the many details he fills out in Google’s early history. If you’re fascinated by Google, you’ve got to add this one to your library.

Google "Head End" Search Results: Ads as Content, Or…Just Ads?

By - March 30, 2011

(screenshot: a representative Google results page for a “head end” Sony query, showing the ad-to-editorial split above the fold)

Today I spoke at Sony HQ in front of some Pretty Important Folks, so I wanted to be smart about Sony’s offerings lest anything obviously uninformed slip out of my mouth. To prepare I did a bunch of Google searches around Sony and its various products.

Many of these searches are what I call “head end” searches – a lot of folks are searching for the terms I put in, and they are doubly important to Google (and its advertising partners) because they are also very commercial in nature (not in my case, but in general.) Usually folks searching for “Sony Tablets” have some intent to purchase tablets in the near future, or at the very least are somewhere in what’s called the “purchase funnel.”

I was struck by the results, so much so that I took a screen shot of one representative set of results. In traditional print, we used to watch a metric called “Ad Edit Ratio” very closely (as did the government, for reasons of calculating postal rates). Editors at publications lobbied for low ad edit ratios (so they’d get more space to put their content, naturally). Advertising executives lobbied for higher ad edit ratios (so they could sell more ads, of course). We usually settled somewhere around 50-50 – half ads, half editorial.

Google is way lower than that, on any given search. But not for head end searches. In fact, as a percentage of actual “editorial” (organic search results) versus “paid”, it’s pushing towards 35/65 or more, at least when you measure the space “above the fold” on a typical screen.
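If you want to reproduce that kind of back-of-the-envelope estimate, the arithmetic is simple: measure the above-the-fold screen space given to paid units versus organic results and take the ratio. A quick sketch follows – the pixel figures are invented for illustration, not measurements from my screenshot.

```python
def ad_edit_split(ad_px: int, organic_px: int) -> str:
    """Return the above-the-fold editorial/ad split as rounded percentages."""
    total = ad_px + organic_px
    ad_share = round(100 * ad_px / total)
    return f"{100 - ad_share}/{ad_share} (editorial/ads)"

# Invented example: ~390px of organic results vs ~710px of paid units above the fold.
print(ad_edit_split(ad_px=710, organic_px=390))   # prints "35/65 (editorial/ads)"
```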

Then again, in the case of AdWords, one could argue the ads are contextually relevant and useful.

Just felt worth pointing out, if for no other reason than to add a page to the historical record of how the service is evolving. Once “media” adwords start taking over, this picture may well change again, and it might not be a change that folks like much.