Those of you who’ve been reading for a while may have noticed a break in my regular posts – it’s August, and that means vacation. I’ll be back at it after Labor Day, but an interesting story from The Information today is worth a brief note.
Titled How Google is Planning to Beat OpenAI, the piece details the progress of Google’s Gemini project, formed four months ago when the company merged its UK-based DeepMind unit with its Google Brain research group. Both groups were working on sophisticated AI projects, including LLMs, but with distinct cultures, leadership, and code bases, they had little else in common. Alphabet CEO Sundar Pichai combined their efforts to speed his company’s time to market in the face of stiff competition from OpenAI and Microsoft.
The buildings are the same, but the information landscape has changed, dramatically.
Today I’m going to write about the college course booklet, an artifact of another time. I hope along the way we might learn something about digital technology, information design, and why we keep getting in our own way when it comes to applying the lessons of the past to the possibilities of the future. But to do that, we have to start with a story.
Forty years ago this summer I was a rising freshman at UC Berkeley. Like most 17- or 18-year-olds in the pre-digital era, I wasn’t particularly focused on my academic career, and I wasn’t much of a planner either. As befit the era, my parents, while Berkeley alums, were not the type to hover – it wasn’t their job to ensure I read through the registration materials the university had sent in the mail – that was my job. Those materials included a several-hundred-page university catalog laying out majors, required courses, and descriptions of nearly every class offered by each of the departments. But that was all background – what really mattered, I learned from word of mouth, was the course schedule, which was published as a roughly 100-page booklet a few weeks before classes started.
Welcome to the hellscape of “Made for Advertising” sites
This past Monday NewsGuard, a journalism rating platform that also analyzes and identifies AI-driven misinformation, announced it had identified hundreds of junk news sites powered by generative AI. The focus of NewsGuard’s release was how major brands were funding these spam sites through the indifference of programmatic advertising, but what I found interesting was how low that number was – 250 or so sites. I’d have guessed they’d find tens of thousands of these bottom feeders – but maybe I’m just too cynical about the state of news on the open web. I have a hunch my cynicism will be rewarded in due time, once the costs of AI decline and the inevitable economic incentives that have always driven hucksters kick in.
Given 250 is a manageable number for a mere mortal, I decided to ask the good folks at NewsGuard, where I’m an advisor, for a copy of their listings. Nothing like a tour through the post-apocalyptic hellscape of our AI future, right?
What I found was…disappointing. Most of the sites were beyond shoddy – barely literate, obviously automated, full of errors and content warnings, and utterly devoid of any sense of organizational structure. The most common message, upon clicking on a story link, was some variation of an OpenAI violation:
Not exactly a compelling headline. The next most common experience was this:
This of course is evidence that the scammers are rotating URLs to avoid blacklisting, unburdened of any concern about building audience loyalty. Beyond OpenAI warnings and 404s, there are the browser warnings that the site you’re about to visit is, well, seedy:
When you do get an actual news experience, it becomes clear that these publishers have little interest in passing as “real news sites,” i.e., publications a sane person might intend to visit. They are instead built as SEO chum in the hopes that Google’s indexes might favor them with some low quality traffic, or worse, as destinations for bot traffic destined for arbitrage inside the darker regions of the programmatic ad universe. The editorial decisions on the various home pages I visited were, well, hilariously inchoate:
Perhaps that’s what we should expect with the first phase of this particular genre, but I found their general awfulness depressing: Most reporters will look at these sites and dismiss them. But they shouldn’t.
Traditional “made for advertising” sites already control 21 percent of all programmatic advertising revenues, and these sites tend to dominate Google search results, enshittifying the open web with low-calorie crap that, one would hope, actually good AI might help us avoid. But the relatively low volume of AI sites indicates, at least anecdotally, that so far the economics of replacing human-built content with AI-driven drivel have yet to kick in. Put simply, it’s still too expensive to replace sites like Geeky Post or Explore Reference with AI. For now.
But when costs come down, I expect made for advertising sites will pivot to AI almost overnight. And I wonder if that’s a bad thing. Once the web’s worst sites all shift to AI-driven output, perhaps they’ll find themselves in a positive spiral of competition for actual human attention. If these sites start to create reasonably high quality content, and search and social start to reward them with real traffic that converts to revenue, perhaps we can simply automate away the shitshow that the open web has become.
I recently caught up with a pal who happens to be working at the center of the AI storm. This person is one of the very few folks in this industry whose point of view I explicitly trust: They’ve been working in the space for decades, and possess both a seasoned eye for product as well as the extraordinary gift of interpretation.
This gave me a chance to ask one of my biggest “stupid questions” about how we all might use chatbots. When I first grokked LLM-driven tools like ChatGPT, it struck me that one of its most valuable uses would be to focus its abilities on a bounded data set. For example, I’d love to ask a chatbot like Google Bard to ingest the entire corpus of Searchblog posts, then answer questions I might have about, say, the topics I’ve written about the most. (I’ve been writing here for 20 years, and I’ve forgotten more of it than I care to admit). This of course only scratches the surface of what I’d want from a tool like Bard when combined with a data set like the Searchblog archives, but it’s a start.
My friend explained that my wish is not possible now, despite what Bard confidently told me when I asked it directly:
Well, no. Bard hallucinated all manner of bullshit in its answer. Yes, I write about technology, but not the Internet of things. I guess I write about society, but mainly in the context of policy and consumer data, not “education, healthcare, and the environment.” Culture? When’s the last time you’ve seen me write about movies?! And if I ever start writing about “personal development,” please put one between my eyes.
Bard’s list of supposed articles was even funnier – it reads like an eighth-grade book report culled from poorly constructed LinkedIn clickbait. Bard is a confident simpleton, despite its claim to be able to query specific domains (in this case, battellemedia.com). I responded to Bard with this new prompt: “This is not right. That site does not cover music, movies. Nor does it do motivation, well being, productivity. Why did you answer that way?” Bard’s answer was … pretty much the same, though it did clumsily incorporate my corrections in its response:
Gah. My next prompt was an attempt to clarify where Bard was getting its answers, since it was clearly not using the battellemedia.com domain. “Are you actually referring to content on the site to do these answers?”
Bard’s answer:
Ok, then, at least we’re getting some honesty. I decided to try one last time:
Now this was quite the freshly whipped bullshit: Actual percentages of how the content on my site breaks down! Unbeknownst to me, more than one in ten of my posts are about cybersecurity – a topic I’ve rarely if ever written about here.
Ok, enough beating up on poor Bard. My well-placed friend explained that while it’s currently out of scope for a standard chatbot like Bard or ChatGPT to do what I’m asking of it, “domain-specific” queries were a hot area of development for all LLMs. So when will it happen? My friend didn’t commit to an answer on that, but I did get the sense it’s coming soon. The ability to apply LLM-level intelligence to large data sets is just too big an opportunity – in both B2C as well as B2B/enterprise markets.
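For readers curious about the mechanics: the “bounded data set” idea I keep wishing for is usually built as retrieval – find the handful of posts most relevant to a question, then hand only those to the model. Here’s a minimal sketch of just the retrieval step, using toy bag-of-words similarity in place of real embeddings; the corpus, function names, and scoring are my own illustrations, not any vendor’s actual pipeline:

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': lowercase word counts (stand-in for a real vector model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(corpus, query, k=2):
    """Return the k posts most similar to the query."""
    q = embed(query)
    return sorted(corpus, key=lambda post: cosine(embed(post), q), reverse=True)[:k]

# A pretend archive of post summaries:
posts = [
    "the future of search and google advertising",
    "consumer data privacy and policy",
    "media business models and programmatic advertising",
]
print(retrieve(posts, "search advertising", k=1))
```

A real system would swap in learned embeddings and a vector database, then pass the retrieved posts into the LLM’s prompt – but the shape of the pipeline is the same.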
A big reason this is taking more time than I’d like is cost. Noted AI investor Andreessen Horowitz recently posted a long explainer on the state of LLM models, but it all comes down to this money quote: “Today, even linear scaling (the best theoretical outcome) would be cost-prohibitive for many applications. A single GPT-4 query over 10,000 pages would cost hundreds of dollars at current API rates.” By my estimates, this cost would need to come down at least four orders of magnitude – from hundreds of dollars per query to pennies – to unlock the kind of magic that I’ve been dreaming about over the past few months. Not to mention all the technological machinations related to prompt handling, vector database management, orchestration frameworks, and other stuff that makes my brain hurt. But the good news, despite my rather pessimistic post from earlier this week, is that the good shit’s coming – we just need to be a bit more patient.
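The “four orders of magnitude” figure above is easy to check; the specific dollar amounts below are my own stand-ins for “hundreds of dollars” and “pennies”:

```python
import math

cost_today = 200.00   # dollars per 10,000-page query ("hundreds of dollars")
cost_target = 0.02    # dollars per query ("pennies")

# How many powers of ten separate today's cost from the target?
orders_of_magnitude = math.log10(cost_today / cost_target)
print(round(orders_of_magnitude))  # → 4
```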
The tech press has breathlessly speculated that, freshly invigorated thanks to ChatGPT, Microsoft’s Bing might steal a major distribution partner from Google. First it was Samsung (wrong), then it was Apple (unlikely), and always there was Firefox, with its 200 million monthly users and its tumultuous relationship with its Googley paymaster.
But The Information’s reporting includes a twist: While Google and OpenAI declined comment, Firefox Chief Product Officer Steve Teixeira is quoted saying his company could add AI-driven chat without violating its current deal with Google – or any future deal, given the Google deal is reportedly up later this year. From today’s story: “Teixeira said his company’s search deal with Google doesn’t pertain to conversational technologies, and the chatbot arrangement would be separate from traditional search.”
Teixeira goes on to make any number of claims about how search and conversational interfaces are essentially different use cases – a useful fantasy that feels increasingly hard to defend. “…the mainstream is really accustomed to getting a search engine results page, with lots of results, and changing that behavior for mainstream people is going to take some time.”
Michelle begs to differ. A huge chunk of search is already poised to be swallowed by chat interfaces, and I’d argue it’s only a matter of time before the ten blue links become a secondary destination – one you go to only after consulting your AI agent. One of my principal frustrations with early chatbots like Pi is that they don’t have the ability to quickly search something in the context of our chats. As I’ve argued before, search is on its way to commoditization, and more likely than not, we’ll all end up paying for chat bots the way we pay for other valuable information services – as a monthly subscription.
To get there, major platforms like Google, Amazon, Apple, and Facebook will need to roll out chat integrations inside search – and three of those four have no reason not to. Only Google has a cannibalization dilemma with its core search model. But for now, Google has the lion’s share of search-related attention, and therefore the most to win – and to lose – as consumer behavior shifts toward chat. Which is all a long way of saying this: When Firefox does announce its first chat integration, more likely than not it’ll come from Google.
Last week I was traveling – and being in four places in six days does not make for a good writing vibe. But today I’m back – and while the pace is picking up for the annual Signal conference I co-produce with P&G, I wanted to take a minute to reflect on last week’s news – no, not that CNN shitshow, but Google’s big I/O conference, where the company finally revealed its plans around search, AI, and a whole lot more.
Leading tech analyst Ben Thompson summarized how most of the pundit-ocracy responded to Google I/O: “the ‘lethargic search monopoly’ has woken up.” He also noted something critical: “AI is in fact a sustaining technology for all of Big Tech, including Google.” Put another way, the bar has been reset and no one company is going to own a moat around AI – at least not yet. Over time, of course, moats can and will be built, just as they were with core technologies like the microprocessor, the Internet itself, and the mobile phone. But for now, it’s a race without clear winners.
Head to The Verge if you want a summary of what went down at I/O – beyond AI, Google doubled down on devices – positioning itself as a serious competitor to Apple (I’ve been a Google Pixel user for years, and all I want is for the two companies to figure out how to deliver a text…).
But we’re all about AI and search here at Searchblog, and damn, there was finally some real talk about how the peanut butter and chocolate would be combined. As The Verge put it, “The single most visited page on the internet is undergoing its most radical change in 25 years.” From the story:
Called the Search Generative Experience (SGE), the new interface makes it so that when you type a query into the search box, the so-called “10 blue links” that we’re all familiar with appear for only a brief moment before being pushed off the page by a colorful new shade with AI-generated information. The shade pushes the rest of Google’s links far down the page you’re looking at — and when I say far, I mean almost entirely off the screen.
That seems radical, but for commercial searches (Google defines that term, not me), ads will still be front and center. And that’s a key distinction. The majority of searches are not commercial, and for those, Google promised an AI-driven summary at the top – an evolution of the “snippets” and one boxes that we’ve become accustomed to. It’s a clever hack – for most of our searches, it’ll seem like Google’s familiar ten blue link interface has been replaced. But commercial searches will continue to feel, well, commercial. Google’s really pushing what that will look like; for a taste, check out this short video:
Not your father’s advertising experience.
These changes have yet to be rolled out (they’re coming “soon” and you can sign up for the sandbox with a personal Gmail account here), but these revelations certainly gave Google’s extended ecosystem a new set of opaque signals to read. Millions of SEO-driven websites will have to recalibrate against Google’s black box algorithms and hope they can continue to win placement and the revenue that comes along with it. To you all, I say bonne chance.
The first Google logo, when the project was called BackRub and focused on Internet “backlinks.” Lore has it the hand is co-founder Larry Page’s.
Once upon a time when search was new, Google came along and put the whole darn Internet in RAM. This was an astonishing (and expensive) feat of engineering at the time – one that gave Google a significant competitive moat. Twenty years ago, very few companies had the know-how or the resources to keep an up-to-date copy of the entire web in expensive, super fast silicon. Google’s ability to do so allowed it unprecedented flexibility and speed in its product, and that product won the search crown, building a trillion-dollar market cap along the way.
Since then, compute, storage, and engineering costs have declined in a kind of reverse version of Moore’s Law. Pretty much anyone with a bit of funding and some basic Internet crawling skills can stand up a web index – but there’s been no reason to do so. For 15 or so years one of the biggest clichés in venture circles was “no one will ever fund another search engine.” (A second cliché? “No one’s ever said ‘Just Bing it.’”)
Google had won, and that was that.
All of this came back to me when reading an eye-opening leaked memo over on Dylan Patel’s SemiAnalysis newsletter. Titled “We Have No Moat, and Neither Does OpenAI” and written by a senior researcher at Google, the memo is a devastating critique of Google’s approach to competing in the world of generative AI. SemiAnalysis placed a strong caveat at the top of the text, and I’ve not been able to find confirmation of the memo’s claims (Bloomberg has more here). But the essence of the memo’s argument resonates: Both Google and OpenAI are going to lose out in the race to dominate the AI future because they are refusing to play in the domain of open source software, where legions of developers are racing ahead of companies who are trying to go it on their own.
But while the “open vs. closed” debate is fundamental to the future of the Internet (and our society), that’s not the only thing I’m on about today. What I realized was something more concrete and astonishing. With GPT technologies, pretty much every startup can now put a decent facsimile of Google in RAM, and according to the memo, they can do it for pretty much peanuts.
Put another way, Google’s been commoditized.
I know, I know, the whole “Google is screwed” meme has been around ever since ChatGPT’s launch, but before joining the chorus, I’ve been waiting for Google’s response (there are rumblings it may come at Google’s I/O event next week, we’ll see). If and when Google launches competitive products, I reasoned, then we’ll see whether the search giant is truly in trouble.
But it might not matter how good Google’s eventual products may be. The bar for what consumers expect from an index of the world’s information has not just been raised, it’s been entirely reset. This hit home for me as I was playing around with Pi, the new “personal AI assistant” from Reid Hoffman & co’s Inflection AI. In my initial conversation, I asked Pi if it used the Internet as its core source material. Of course it does. With my recent posts about my wife’s ChatGPT usage in mind, I then asked it if it could help me find the best design blogs. It certainly could, and not only that, it suggested I browse some niche sites as well. I then began querying Pi about how it determines which sites to display – does it have a ranking system like Google’s PageRank?
No, Pi responded, it doesn’t employ a ranking system, but it does use various signals to determine which sites to suggest. What are those signals, I asked? Pi replied they include a site’s “popularity” as well as “quality of writing.” Interesting! I asked what Pi used to determine popularity, and its reply kind of blew my mind: Backlinks.
Backlinks! For those who aren’t Internet history nerds, Google began as a project called “BackRub” – an attempt by then-PhD candidate Larry Page to create a database of every backlink on the web. In addition to backlinks, Pi also uses the number of social media shares a particular site garners as a signal, as well as analyzing the complexity and grammar of the site’s text to determine quality. These are among the many signals that Google (and every other search engine) uses – in essence, Pi is replicating Google’s core differentiation and leveraging it for a new use case: an AI personal assistant.
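The signals Pi described – backlinks, social shares, writing quality – amount to a weighted scoring function. Here’s a toy sketch of what such a score might look like; the weights, the log-dampening, and the example sites are entirely my own inventions, not Pi’s (or Google’s) actual formula:

```python
import math

def popularity_score(site, weights=None):
    """Toy ranking signal loosely modeled on what Pi described:
    backlink count, social shares, and a 0-1 writing-quality estimate.
    The weights are arbitrary illustrative choices."""
    weights = weights or {"backlinks": 0.5, "shares": 0.3, "quality": 0.2}
    # Log-dampen raw counts so one enormous site doesn't swamp everything else.
    return (weights["backlinks"] * math.log1p(site["backlinks"])
            + weights["shares"] * math.log1p(site["shares"])
            + weights["quality"] * site["quality"])

sites = [
    {"name": "big-but-sloppy", "backlinks": 100_000, "shares": 5_000, "quality": 0.3},
    {"name": "small-but-sharp", "backlinks": 800, "shares": 300, "quality": 0.95},
]
ranked = sorted(sites, key=popularity_score, reverse=True)
print([s["name"] for s in ranked])
```

Even in this crude form you can see the tension every engine wrestles with: raw popularity (backlinks, shares) tends to drown out quality unless the weights deliberately push back.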
Sure, Google can compete by creating its own AI personal assistant, but one has to ask, where does it end? There are limitless use cases to employ the world’s knowledge at scale – and Google can’t possibly compete in every one of them. The author of the aforementioned memo states as much in his manifesto. Like it or not, GPT technology has transmogrified Google’s once-impermeable moat into an open platform, and thousands of entrepreneurs and developers are busy building new applications on top of it. Unfortunately for Google, none of them are paying the company a cent for the pleasure of doing so.
This is likely the subject of many future posts, but it strikes me that if it’s going to compete, Google has a very big pivot in its near future: abandon its closed approach to developing products and services, and finally become a true platform company.
Microsoft today announced a cluster of upgrades to its Bing-ChatGPT product, including:
Eliminating the Bing chat waitlist, which effectively throttled the product’s growth by adding steps to a consumer’s journey.
Integrating more visual search results, which will enliven the consumer experience and potentially engage visitors for longer.
Adding chat history and persistence, a major differentiation between Bing chat and OpenAI’s ChatGPT, and for me anyway, the main reason I didn’t use Bing.
Adding more long document summarization, which is another feature that ChatGPT excels at.
Adding a platform layer to Bing, so third party developers can integrate in much the same manner as they can with ChatGPT’s plugins, which I’ve both praised and trashed in past posts (praised because of their potential, trashed because the model reminds me of the app store, which is a walled garden nightmare).
Overall, this news strikes me as Microsoft upping the ante not only on Google, which now has even more catching up to do, but also on Microsoft’s own partner OpenAI, which until now had a superior product. I’m on the road and not able to write as much as I’d like on this, but it’s worth noting. I’m sure the product managers in Mountain View aren’t getting much sleep these days – the pressure is mounting for Google to respond. And in OpenAI headquarters, the frustration has to be building as well – they cut that deal with Microsoft, and now have to live with its terms.
Last week I wrote a piece noting how my wife Michelle’s Google usage was down by nearly two thirds, thanks to her discovery of ChatGPT. I noted that Michelle isn’t exactly an early adopter – but that’s not entirely true. Michelle is more of a harbinger – if an early tech product “fits” her, she’ll adopt it early and often – and it’s usually a winner once it goes mainstream. The early Tivo DVRs come to mind – and they remain a better product than anything that’s come since in the television world (another example of how entrenched business models kill innovation).
But few early versions of any new product get to “Michelle market fit” on first attempt. For it to happen with an AI chatbot – well before I developed the habit – is rarer still. I mean, I’m supposed to be the early adopter around here!
So once I noticed Michelle was hooked, I asked her how she wrangles ChatGPT. As I noted in my last post, the majority of her usage focuses on information-intensive projects that tend to get messy when attempted with Google. For example, Michelle’s managing a real estate project with a complex set of inputs. The property requires extensive renovations but must conform to certain community regulations and standards. With the summer rental season approaching, she needs to make scores of decisions on everything from paint colors to septic system contractors. Prior to ChatGPT, Michelle would have started her searches inside Google, but quickly been frustrated by “content cruft” – reams of crappy results and, in her words, “way too many ads.” For commercial searches about household appliances, construction projects, landscaping ideas, and the like, Google’s index favors the kind of content created by “advertising mills” – cheap, low calorie stuff that crowds out the kind of trusted advice Michelle is seeking. She knows she’s being steered into choices that create profit for Google’s ecosystem.
Faced with those kinds of results, Michelle is more likely to call a friend who has already done what she’s looking to do, get one simple answer (even if it’s not exactly right), and go with it. And this is the first insight that springs to mind when I talk to Michelle about ChatGPT. For her, the service has become the equivalent of a “first phone call” for a competent point of view, minus the cruft. It’s as if OpenAI has culled most of the crap from the content mills that have larded up Google, and delivers only the good stuff.
Continuing along those lines, Michelle tells me she’s found a hack of sorts – she’ll ask ChatGPT for that competent first answer, then ask it to name top design blogs that focus on her particular query. Then she’ll dig into those sites, and more often than not, she’ll find confirmation of ChatGPT’s advice, or sometimes even better ideas. This is how, for example, she figured out which shade of paint would work for the property she’s working on. Here’s ChatGPT answering her query about two very specific paint choices:
In a darker home on the beach that faces east in New England, I would recommend using Benjamin Moore’s White Dove (OC-17) over Simply White (OC-117). The subtle gray undertone in White Dove can help to balance the natural light coming in from the east-facing windows, while the higher LRV of 85 can reflect more light and create a brighter and more spacious feel in the darker home.
Using a purer white like Simply White in a darker home can sometimes make the space feel stark and cold, especially if there’s not a lot of natural light coming in.
I mean, the confidence!
When I asked Michelle if she was worried that ChatGPT might sometimes be full of shit, she responded that in the end, she got what she wanted, a confident response that she could then put to the test in other ways – talking to folks at the paint store, friends who are designers, the aforementioned design blogs. And she got it in a few minutes – with no Google sludge to wade through. “The complexity of the questions that I can ask,” Michelle told me, “and the nuance in the answers that my questions can provoke, if asked the right way, is what hits right and keeps me asking even more. Plus, it’s such a relief to not have to scroll through all the ads and sponsored links.”
The implications here are pretty significant, given that these kinds of commercial searches drive a significant share of Google’s search margins. As long as OpenAI avoids an advertising model, its results will convincingly outshine Google’s in this category. This argues for refining the subscription model OpenAI is already pursuing, and will likely inform whatever Google’s been working on feverishly these past few months. Were I Google, I’d quickly launch a ChatGPT competitor that was free of ads, matched ChatGPT’s subscription offering feature for feature, and, for the first year or two, was free to use. I’d market it as far superior to OpenAI because it’s updated by Google’s core search index – a differentiator against Bing Chat as well. Then, as the service gains millions of users, I’d slowly introduce premium upgrades – just as the company has done with Gmail and its office suite. Were that to happen, I’m pretty sure Michelle would become a paying customer – and so would I.
One last note: Michelle is keenly aware that OpenAI is building a formidable dataset on her queries, including information she said she’d usually not share with Google or any online service. This concerns her, and should concern us all – but that’s fodder for another post.
On Sunday The New York Times reported that Google is furiously working to incorporate conversational AI into its core search products – not exactly news, but there was a larger takeaway: Google has got to get some killer AI products out the door, and fast, or it risks losing its core users for good. And if my own family is any indication, the company is already imperiled. More on that below, but first, a bit more on the Times piece.
The article led with big news: Samsung may decamp from Google and partner with Microsoft’s Bing instead. This would be a major blow both financially as well as optically – Samsung’s commitment to Android is a key reason Google’s mobile platform towers over Apple’s iOS in terms of worldwide market share.
But the real partnership to watch is Google’s deal with Apple itself. Estimated at $20 billion annually, this deal ensures that Google’s core search engine is the default on more than 1 billion iOS devices. If Google loses that deal to Microsoft, the entire tech world will be re-ordered. For now, Wall Street seems to think the deal isn’t in jeopardy (the stock price is a handy gauge), but even the speculation that Google might lose Apple leaves Apple with an extraordinary amount of leverage for the balance of this year (details are thin as to when the deal actually renews, but analysts think it’s late 2023).
The short of it is this: Google’s got to respond, and soon – or it risks losing its most important distribution deals, and by extension, its most profitable customers.
Then again, if my wife Michelle is representative of a larger swath of Internet users, Google’s got a fight on its hands today – not sometime in the future should Samsung or Apple decide to bolt. And it’s not Bing that’s winning – it’s ChatGPT. Yes, ChatGPT had “only” 1.6 billion visits in March, roughly 1% of Google’s totals. But that’s up from 1 billion in February, and with compounding growth like that, it won’t be long before Google’s facade of immutability starts to crack.
Given all this, the Times piece reads as obvious – of course Google is rushing to “incorporate AI into search” – but what will those products really look like? For answers, it makes sense to look at how regular folks are using GPT-driven products. And while my own habits haven’t really changed yet, I can’t say the same for others in my orbit. Perhaps the most interesting of the bunch is – caveat alert – the aforementioned Michelle.*
Unlike me and probably most of you, my wife is not an early adopter of tech products and services. We were a decade into our marriage before she started regularly checking her personal email, and she pretty much skipped the Facebook and Twitter phases of early Web 2. Like all of us she quickly picked up the smart phone habit, but she has something like 1000 unread emails and texts, which is incomprehensible to me – I can’t sleep until my inbox has fewer than ten messages, and I respond to (or delete) texts almost instantaneously.
Michelle does use Instagram pretty regularly – more to browse than to post, and she’s a sophisticated user of the “rest of the Internet” – which means she’s a pretty seasoned Google user. Until recently, Google has been her main window to the Web – the glue that held together hours of weekly research into whatever she was working on at a given time.
That’s all changed in the past few months – and if Michelle represents the average Google user, it’s no wonder folks in Mountain View are “freaking out,” to quote a source in the Times‘ recent report.
Here’s why. A few months ago Michelle started playing around with ChatGPT, first just to see what the fuss was about, but more recently in a focused and highly utilitarian way. Put in crass, commercial terms, ChatGPT converted her. Michelle has several information-intensive projects going at any given time in areas ranging from real estate to documentaries, food to finance. Before ChatGPT, she’d start her work inside Google – asking the search engine to answer a simple query, then refining and re-searching – over and over – until her browser was crowded with dozens of opened and often unread tabs.
This process of culling insight and knowledge from an infinite and maddening list of blue links is familiar to anyone who uses Google, in particular on a desktop machine where there’s plenty of real estate for multiple browser windows and tabs. In essence, Google acts as a kind of real time Memex – a temporary** and fragile thread holding our research efforts together. At some point it becomes our job to make sense of all those open tabs and windows, a process I’ve come to call internet bricolage.
Then again, sometimes we’ll just give up in frustration and leave the whole mess behind. Whenever I happen to be using Michelle’s laptop I’ve noticed windows with 20 or more tabs open, and I’ll ask if I should close them out. “Oh no,” she’ll say. “I need those, I might get back to that…”
But when Michelle uses ChatGPT, the service essentially inverts all that bricolage, confidently offering a summary based on orders of magnitude more input, a crisp response that feels magical compared to the sludge of a typical Google search session. It’s addictive, and it’s changed Michelle’s habits completely.
Before ChatGPT, Michelle used Google for many hours each week. But after, her use of Google has plummeted more than 50 percent. Interestingly, her engagement with certain trusted publications and websites has increased.
I asked Michelle how her usage had shifted in these previously Google-dominated kinds of searches, and we worked up this chart:
In short, Google’s down dramatically, ChatGPT now takes more than a third of her time, random web sites have receded, and “trusted” sites – she often will ask ChatGPT which sites to trust – have skyrocketed.
All of this has some pretty interesting implications for the Internet (and for the advertising that supports it), if Michelle’s usage is anything like the rest of the world’s. In my next post, I’ll go into specifics about how Michelle uses ChatGPT, and what some of those implications might be.
—
*I know, I know, never write a story using your family, or a cabbie, as your source. Fortunately, I don’t have editors and this is a blog post.
**It’s crazy that Google doesn’t remember search streams over time, a presumptive feature of ChatGPT.