Google’s On The Field Now. Is It Being Too Cautious?

Google’s Gemini launch.

As hype escalated around the debut of ChatGPT more than a year ago, I predicted that OpenAI and Microsoft would rapidly develop consumer subscription service models for their nascent businesses. Later that year I wrote a piece speculating that Google would inevitably follow suit. If Google was smart, and careful, it had a chance to become “the world’s largest subscription service.” From that piece:

Google can’t afford to fall behind as its closest competitors throw massive resources at AI-driven products and services. But beyond keeping up, Google finds itself in an even higher-stakes transition: Its core business, search, may be shifting into an entirely new consumer model that threatens the very foundation of the company’s cash flow spigot: Advertising. 


The Question Google Won’t Answer

Reading Ben Thompson’s coverage of Google’s earnings call this week, one thing jumps out and simply can’t be ignored: Google CEO Sundar Pichai was asked a simple question and, as Thompson points out, dodged it completely. A Merrill analyst asked this question:

“Just wondering if you see any changes in query volumes, positive or negative, since you’ve seen the year evolve and more Search innovative experiences.”


Google Will Become the World’s Largest Subscription Service. Discuss.

A Google subscription box via Dall-E

Those of you who’ve been reading for a while may have noticed a break in my regular posts – it’s August, and that means vacation. I’ll be back at it after Labor Day, but an interesting story from The Information today is worth a brief note.

Titled How Google is Planning to Beat OpenAI, the piece details the progress of Google’s Gemini project, formed four months ago when the company merged its UK-based DeepMind unit with its Google Brain research group. Both groups were working on sophisticated AI projects, including LLMs, but with distinct cultures, leadership, and code bases, they had little else in common. Alphabet CEO Sundar Pichai combined the two in a bid to speed his company’s time to market in the face of stiff competition from OpenAI and Microsoft.


Digital Is Killing Serendipity

The buildings are the same, but the information landscape has changed, dramatically.

Today I’m going to write about the college course booklet, an artifact of another time. I hope along the way we might learn something about digital technology, information design, and why we keep getting in our own way when it comes to applying the lessons of the past to the possibilities of the future. But to do that, we have to start with a story.  

Forty years ago this summer I was a rising freshman at UC Berkeley. Like most 17- or 18-year-olds in the pre-digital era, I wasn’t particularly focused on my academic career, and I wasn’t much of a planner either. As befit the era, my parents, while Berkeley alums, were not the type to hover – it wasn’t their job to ensure I read through the registration materials the university had sent in the mail; that was my job. Those materials included a several-hundred-page university catalog laying out majors, required courses, and descriptions of nearly every class offered by each department. But that was all background – what really mattered, I learned from word of mouth, was the course schedule, published as a roughly 100-page booklet a few weeks before classes started.


Come With Me on a Spin Through the Hellscape of AI-Generated News Sites

Welcome to the hellscape of “Made for Advertising” sites

This past Monday NewsGuard, a journalism rating platform that also analyzes and identifies AI-driven misinformation, announced it had identified hundreds of junk news sites powered by generative AI. The focus of NewsGuard’s release was how major brands were funding these spam sites through the indifference of programmatic advertising, but what I found interesting was how low that number was – 250 or so sites. I’d have guessed they’d find tens of thousands of these bottom feeders – but maybe I’m just too cynical about the state of news on the open web. I have a hunch my cynicism will be rewarded in due time, once the costs of AI decline and the inevitable economic incentives that have always driven hucksters kick in.

Given 250 is a manageable number for a mere mortal, I decided to ask the good folks at NewsGuard, where I’m an advisor, for a copy of their listings. Nothing like a tour through the post-apocalyptic hellscape of our AI future, right?

What I found was…disappointing. Most of the sites were beyond shoddy – barely literate, obviously automated, full of errors and content warnings, and utterly devoid of any sense of organizational structure. The most common message, upon clicking on a story link, was some variation of an OpenAI violation:

Not exactly a compelling headline. The next most common experience was this:

This of course is evidence that the scammers are rotating URLs to avoid blacklisting, unburdened by any concern about building audience loyalty. Beyond OpenAI warnings and 404s, there are the browser warnings that the site you’re about to visit is, well, seedy:

When you do get an actual news experience, it becomes clear that these publishers have little interest in passing as “real news sites” – i.e., publications a sane person might intend to visit. They are instead built as SEO chum, in the hopes that Google’s index might favor them with some low-quality traffic – or worse, as destinations for bot traffic destined for arbitrage inside the darker regions of the programmatic ad universe. The editorial decisions on the various home pages I visited were, well, hilariously inchoate:

Perhaps that’s what we should expect with the first phase of this particular genre, but I found their general awfulness depressing: Most reporters will look at these sites and dismiss them. But they shouldn’t.

Traditional “made for advertising” sites already control 21 percent of all programmatic advertising revenues, and these sites tend to dominate Google search results, enshittifying the open web with low-calorie crap that, one would hope, actually good AI might help us avoid. But the relatively low number of AI sites indicates, at least anecdotally, that so far the economics of replacing human-built content with AI-driven drivel have yet to kick in. Put simply, it’s still too expensive to replace sites like Geeky Post or Explore Reference with AI. For now.

But when costs come down, I expect made for advertising sites will pivot to AI almost overnight. And I wonder if that’s a bad thing. Once the web’s worst sites all shift to AI-driven output, perhaps they’ll find themselves in a positive spiral of competition for actual human attention. If these sites start to create reasonably high quality content, and search and social start to reward them with real traffic that converts to revenue, perhaps we can simply automate away the shitshow that the open web has become.

One can dream.

You can follow whatever I’m doing next by signing up for my site newsletter here. Thanks for reading.


Asking The Stupid Questions of GenAI

I recently caught up with a pal who happens to be working at the center of the AI storm. This person is one of the very few folks in this industry whose point of view I explicitly trust: They’ve been working in the space for decades, and possess both a seasoned eye for product as well as the extraordinary gift of interpretation.

This gave me a chance to ask one of my biggest “stupid questions” about how we all might use chatbots. When I first grokked LLM-driven tools like ChatGPT, it struck me that one of its most valuable uses would be to focus its abilities on a bounded data set. For example, I’d love to ask a chatbot like Google Bard to ingest the entire corpus of Searchblog posts, then answer questions I might have about, say, the topics I’ve written about the most. (I’ve been writing here for 20 years, and I’ve forgotten more of it than I care to admit).  This of course only scratches the surface of what I’d want from a tool like Bard when combined with a data set like the Searchblog archives, but it’s a start.
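For what it’s worth, the pattern I’m wishing for – grounding a chatbot in a bounded corpus – is what practitioners call retrieval-augmented generation: first retrieve the relevant posts, then have the LLM answer using only those as context. Here’s a toy sketch of the retrieval half, with simple word-overlap scoring standing in for the embedding search a real system would use (the archive entries are invented, not actual Searchblog posts):

```python
from collections import Counter

def tokenize(text):
    return [w.strip(".,!?").lower() for w in text.split()]

def score(query, doc):
    # Overlap count: how many query words (with multiplicity) appear in the doc.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum(min(q[w], d[w]) for w in q)

# Stand-in archive -- in practice, every Searchblog post (titles invented).
archive = {
    "post-1": "Search advertising and the economics of Google",
    "post-2": "Why media companies should embrace subscriptions",
    "post-3": "Gardening tips for the reluctant suburbanite",
}

def retrieve(query, docs, k=1):
    # Return the k best-matching posts; an LLM would then answer the
    # question using only these retrieved posts as context.
    ranked = sorted(docs, key=lambda name: score(query, docs[name]), reverse=True)
    return ranked[:k]

print(retrieve("what have I written about advertising and Google?", archive))  # → ['post-1']
```

A production version would swap `score` for vector similarity over embeddings, but the shape – retrieve, then generate – is the same.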

My friend explained that my wish is not possible now, despite what Bard confidently told me when I asked it directly:

Well, no. Bard hallucinated all manner of bullshit in its answer. Yes, I write about technology, but not the Internet of things. I guess I write about society, but mainly in the context of policy and consumer data, not “education, healthcare, and the environment.” Culture? When’s the last time you’ve seen me write about movies?! And if I ever start writing about “personal development,” please put one between my eyes.

Bard’s list of supposed articles was even funnier – it reads like an eighth-grade book report culled from poorly constructed LinkedIn clickbait. Bard is a confident simpleton, despite its claim to be able to query specific domains (in this case, battellemedia.com). I responded to Bard with this new prompt: “This is not right. That site does not cover music, movies. Nor does it do motivation, well being, productivity. Why did you answer that way?” Bard’s answer was … pretty much the same, though it did clumsily incorporate my corrections in its response:

Gah. My next prompt was an attempt to clarify where Bard was getting its answers, since it was clearly not using the battellemedia.com domain. “Are you actually referring to content on the site to do these answers?”

Bard’s answer:

Ok, then, at least we’re getting some honesty. I decided to try one last time:

Now this was quite the freshly whipped bullshit: Actual percentages of how the content on my site breaks down! Unbeknownst to me, more than one in ten of my posts are about cybersecurity – a topic I’ve rarely if ever written about here.

Ok, enough beating up on poor Bard. My well-placed friend explained that while it’s currently out of scope for a standard chatbot like Bard or ChatGPT to do what I’m asking of it, “domain-specific” queries are a hot area of development for all LLMs. So when will it happen? My friend didn’t commit to an answer on that, but I did get the sense it’s coming soon. The ability to apply LLM-level intelligence to large data sets is just too big an opportunity – in both B2C and B2B/enterprise markets.

A big reason this is taking more time than I’d like is cost. Noted AI investor Andreessen Horowitz recently posted a long explainer on the state of LLM models, but it all comes down to this money quote: “Today, even linear scaling (the best theoretical outcome) would be cost-prohibitive for many applications. A single GPT-4 query over 10,000 pages would cost hundreds of dollars at current API rates.” By my estimates, this cost would need to come down at least four orders of magnitude – from hundreds of dollars per query to pennies – to unlock the kind of magic I’ve been dreaming about over the past few months. Not to mention all the technological machinations related to prompt handling, vector database management, orchestration frameworks, and other stuff that makes my brain hurt. But the good news, despite my rather pessimistic post from earlier this week, is that the good shit’s coming – we just need to be a bit more patient.
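To make that four-orders-of-magnitude claim concrete, here’s the back-of-envelope version. The tokens-per-page figure and the per-1K-token rate are rough assumptions for illustration, not actual OpenAI pricing:

```python
pages = 10_000
tokens_per_page = 500        # rough assumption for a page of prose
rate_per_1k_tokens = 0.06    # assumed dollars per 1,000 tokens, not actual pricing

cost_now = pages * tokens_per_page / 1_000 * rate_per_1k_tokens
print(f"one query over {pages:,} pages today: ~${cost_now:,.0f}")

# Four orders of magnitude (10,000x) cheaper gets us from hundreds of dollars to pennies:
print(f"after a 10,000x cost decline: ~${cost_now / 10_000:.2f}")
```

Under these assumptions a single query runs about $300; divide by 10,000 and you’re at three cents – the territory where domain-specific magic becomes an everyday product feature.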


The Firefox Test

The Information today reports that Mozilla plans to integrate GPT-like chat technology into its widely used Firefox browser. Mozilla has long partnered with Google for search, a deal that reportedly yields hundreds of millions of dollars in revenue.

The tech press has breathlessly speculated that, freshly invigorated thanks to ChatGPT, Microsoft’s Bing might steal a major distribution partner from Google. First it was Samsung (wrong), then it was Apple (unlikely), and always there was Firefox, with its 200 million monthly users and its tumultuous relationship with its Googley paymaster.

But The Information’s reporting includes a twist: While Google and OpenAI declined comment, Mozilla Chief Product Officer Steve Teixeira is quoted saying his company could add AI-driven chat without violating its current deal with Google – or any future deal, given the Google arrangement is reportedly up for renewal later this year. From today’s story: “Teixeira said his company’s search deal with Google doesn’t pertain to conversational technologies, and the chatbot arrangement would be separate from traditional search.”

Teixeira goes on to make any number of claims about how search and conversational interfaces are essentially different use cases – a useful fantasy that feels increasingly hard to defend. “…the mainstream is really accustomed to getting a search engine results page, with lots of results, and changing that behavior for mainstream people is going to take some time.”

Michelle begs to differ. A huge chunk of search is already poised to be swallowed by chat interfaces, and I’d argue it’s only a matter of time before the ten blue links become a secondary destination – one you go to only after consulting your AI agent. One of my principal frustrations with early chatbots like Pi is that they can’t quickly search something in the context of our chats. As I’ve argued before, search is on its way to commoditization, and more likely than not, we’ll all end up paying for chatbots the way we pay for other valuable information services – as a monthly subscription.

To get there, major platforms like Google, Amazon, Apple, and Facebook will need to roll out chat integrations inside search – and three of the four have no reason not to; only Google has a cannibalization dilemma with its core search model. But for now, Google has the lion’s share of search-related attention, and therefore the most to win – or lose – as consumer behavior shifts toward chat. Which is all a long way of saying this: When Firefox does announce its first chat integration, more likely than not it’ll come from Google.


Google Splits the Web Into “Commercial” and “Non Commercial” Search

Last week I was traveling – and being in four places in six days does not make for a good writing vibe. But today I’m back – and while the pace is picking up for the annual Signal conference I co-produce with P&G, I wanted to take a minute to reflect on last week’s news – no, not that CNN shitshow, but Google’s big I/O conference, where the company finally revealed its plans around search, AI, and a whole lot more.

Leading tech analyst Ben Thompson summarized how most of the pundit-ocracy responded to Google I/O: “the ‘lethargic search monopoly’ has woken up.” He also noted something critical: “AI is in fact a sustaining technology for all of Big Tech, including Google.” Put another way, the bar has been reset and no one company is going to own a moat around AI – at least not yet. Over time, of course, moats can and will be built, just as they were with core technologies like the microprocessor, the Internet itself, and the mobile phone. But for now, it’s a race without clear winners.

Head to The Verge if you want a summary of what went down at I/O – beyond AI, Google doubled down on devices – positioning itself as a serious competitor to Apple (I’ve been a Google Pixel user for years, and all I want is for the two companies to figure out how to deliver a text…).

But we’re all about AI and search here at Searchblog, and damn, there was finally some real talk about how the peanut butter and chocolate would be combined. As The Verge put it, “The single most visited page on the internet is undergoing its most radical change in 25 years.” From the story:

Called the Search Generative Experience (SGE), the new interface makes it so that when you type a query into the search box, the so-called “10 blue links” that we’re all familiar with appear for only a brief moment before being pushed off the page by a colorful new shade with AI-generated information. The shade pushes the rest of Google’s links far down the page you’re looking at — and when I say far, I mean almost entirely off the screen.

That seems radical, but for commercial searches (Google defines that term, not me), ads will still be front and center. And that’s a key distinction. The majority of searches are not commercial, and for those, Google promised an AI-driven summary at the top – an evolution of the “snippets” and one boxes we’ve become accustomed to. It’s a clever hack – for most of our searches, it’ll seem like Google’s familiar ten-blue-link interface has been replaced. But commercial searches will continue to feel, well, commercial. Google’s really pushing what that will look like; for a taste, check out this short video:

Not your father’s advertising experience.

These changes have yet to roll out (they’re coming “soon,” and you can sign up for the sandbox with a personal Gmail account here), but these revelations certainly gave Google’s extended ecosystem a new set of opaque signals to read. Millions of SEO-driven websites will have to recalibrate against Google’s black-box algorithms and hope they can continue to win placement and the revenue that comes along with it. To you all, I say bonne chance.


Google and Commoditization: Anyone Need a BackRub?

The first Google logo, when the project was called BackRub and focused on Internet “backlinks.” Lore has it the hand is co-founder Larry Page’s.

Once upon a time, when search was new, Google came along and put the whole darn Internet in RAM. This was an astonishing (and expensive) feat of engineering at the time – one that gave Google a significant competitive moat. Twenty years ago, very few companies had the know-how or the resources to keep an up-to-date copy of the entire web in expensive, super-fast silicon. Google’s ability to do so allowed it unprecedented flexibility and speed in its product, and that product won the search crown, building a trillion-dollar market cap along the way.
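For the non-nerds: an index “in RAM” is essentially a giant lookup table from every word to the pages that contain it, held in fast memory so queries never touch disk. A toy version of the idea, over an invented two-page “web”:

```python
from collections import defaultdict

# A toy "web" of two pages (contents invented).
docs = {
    "page1": "google put the web in ram",
    "page2": "the web runs on advertising",
}

# The inverted index: word -> set of pages containing it, all in memory.
index = defaultdict(set)
for name, text in docs.items():
    for word in text.split():
        index[word].add(name)

def search(*words):
    # Intersect the posting sets: pages containing every query word.
    results = set(docs)
    for w in words:
        results &= index.get(w, set())
    return sorted(results)

print(search("the", "web"))  # → ['page1', 'page2']
print(search("ram"))         # → ['page1']
```

Google’s feat wasn’t this data structure – it was building and serving it for billions of pages, in RAM, before doing so was cheap.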

Since then, compute, storage, and engineering costs have declined in a kind of reverse Moore’s Law. Pretty much anyone with a bit of funding and some basic Internet crawling skills can stand up a web index – but there’s been no reason to do so. For 15 or so years, one of the biggest clichés in venture circles was “no one will ever fund another search engine.” (A second cliché? “No one’s ever said ‘Just Bing it.’”)

Google had won, and that was that.

All of this came back to me when reading an eye-opening leaked memo over on Dylan Patel’s SemiAnalysis newsletter. Titled “We Have No Moat, and Neither Does OpenAI” and written by a senior researcher at Google, the memo is a devastating critique of Google’s approach to competing in the world of generative AI. SemiAnalysis placed a strong caveat at the top of the text, and I’ve not been able to find confirmation of the memo’s claims (Bloomberg has more here). But the essence of the memo’s argument resonates: Both Google and OpenAI are going to lose out in the race to dominate the AI future because they are refusing to play in the domain of open source software, where legions of developers are racing ahead of companies who are trying to go it on their own.

But while the “open vs. closed” debate is fundamental to the future of the Internet (and our society), that’s not the only thing I’m on about today. What I realized was something more concrete, and more astonishing. With GPT technologies, pretty much every startup can now put a decent facsimile of Google in RAM, and according to the memo, they can do it for pretty much peanuts.

Put another way, Google’s been commoditized.

I know, I know, the whole “Google is screwed” meme has been around ever since ChatGPT’s launch, but I’ve been waiting for Google’s response before joining the chorus (there are rumblings it may come at Google’s I/O event next week; we’ll see). If and when Google launches competitive products, I reasoned, then we’ll see whether the search giant is truly in trouble.

But it might not matter how good Google’s eventual products may be. The bar for what consumers expect from an index of the world’s information has not just been raised, it’s been entirely reset.  This hit home for me as I was playing around with Pi, the new “personal AI assistant” from Reid Hoffman & co’s Inflection AI. In my initial conversation, I asked Pi if it used the Internet as its core source material. Of course it does. With my recent posts about my wife’s ChatGPT usage in mind, I then asked it if it could help me find the best design blogs. It certainly could, and not only that, it suggested I browse some niche sites as well. I then began querying Pi about how it determines which sites to display – does it have a ranking system like Google’s PageRank?

No, Pi responded, it doesn’t employ a ranking system, but it does use various signals to determine which sites to suggest. What are those signals, I asked? Pi replied they include a site’s “popularity” as well as “quality of writing.” Interesting! I asked what Pi used to determine popularity, and its reply kind of blew my mind: Backlinks.

Backlinks! For those who aren’t Internet history nerds: Google began as a project called “BackRub” – an attempt by then-PhD candidate Larry Page to create a database of every backlink on the web. In addition to backlinks, Pi also uses the number of social media shares a particular site garners as a signal, and it analyzes the complexity and grammar of a site’s text to gauge quality. These are among the many signals that Google (and every other search engine) uses – in essence, Pi is replicating Google’s core differentiation and leveraging it for a new use case: the AI personal assistant.
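Part of what’s striking is how simple the core popularity signal is. Counting backlinks takes a few lines of code – here’s a sketch over an invented link graph (PageRank’s later insight was to weight each link by the linker’s own importance, but raw counts were the BackRub starting point):

```python
from collections import Counter

# Hypothetical link graph: each site -> the sites it links out to.
links = {
    "blog-a": ["design-daily", "css-corner"],
    "blog-b": ["design-daily"],
    "blog-c": ["design-daily", "css-corner"],
}

# Popularity as a raw backlink count -- the BackRub idea, pre-PageRank.
backlinks = Counter(target for outbound in links.values() for target in outbound)

print(backlinks.most_common())  # → [('design-daily', 3), ('css-corner', 2)]
```

The hard part was never the counting – it was crawling the web to build the graph in the first place, and that, too, is now cheap.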

Sure, Google can compete by creating its own AI personal assistant, but one has to ask, where does it end? There are limitless use cases to employ the world’s knowledge at scale – and Google can’t possibly compete in every one of them. The author of the aforementioned memo states as much in his manifesto. Like it or not, GPT technology has transmogrified Google’s once-impermeable moat into an open platform, and thousands of entrepreneurs and developers are busy building new applications on top of it. Unfortunately for Google, none of them are paying the company a cent for the pleasure of doing so.

This is likely the subject of many future posts, but it strikes me that if it’s going to compete, Google has a very big pivot in its near future: abandon its closed approach to developing products and services, and finally become a true platform company.


Microsoft Ups the Ante on Both Google and Its Partner OpenAI

Microsoft today announced a cluster of upgrades to its Bing-ChatGPT product, including:

  • Eliminating the Bing chat waitlist, which effectively throttled the product’s growth by adding steps to a consumer’s journey.
  • Integrating more visual search results, which will enliven the consumer experience and potentially engage visitors for longer.
  • Adding chat history and persistence, a major differentiation between Bing chat and OpenAI’s ChatGPT, and for me anyway, the main reason I didn’t use Bing.
  • Adding more long-document summarization, another feature ChatGPT excels at.
  • Adding a platform layer to Bing, so third party developers can integrate in much the same manner as they can with ChatGPT’s plugins, which I’ve both praised and trashed in past posts (praised because of their potential, trashed because the model reminds me of the app store, which is a walled garden nightmare).

Overall, this news strikes me as Microsoft upping the ante not only on Google, which now has even more catching up to do, but also on Microsoft’s own partner OpenAI, which until now had a superior product. I’m on the road and not able to write as much as I’d like on this, but it’s worth noting. I’m sure the product managers in Mountain View aren’t getting much sleep these days – the pressure is mounting for Google to respond. And in OpenAI headquarters, the frustration has to be building as well – they cut that deal with Microsoft, and now have to live with its terms.
