I’ve written a long-ish post attempting to answer that question over at P&G’s Signal360 publication; please head there (and sign up for their newsletter!) if you’d like to read the whole thing. Below is a teaser for those of you who aren’t sure you want to click the link (so few of us do these days!).
Last week, while working on a post about what the ads might look like inside chat-based search, I got a surprising note from the communications team at Google. I had emailed them asking for comment on ads inside Bard, which Google had announced earlier in the month. To be honest, I was expecting the polite “no comment” I ultimately did receive, but I also got this clarification:
[We] wouldn’t have anything additional to share from the Search POV, as Bard is a standalone AI interface and doesn’t sit within Search.
Last week I asked if Google was f*cked, and since then quite a few of you have reached out asking what I think the company could do to … un-f*ck itself. “Easy enough to declare the company is too big, too stuck in the mud, too cautious, too dependent on its cash cow,” you told me. “Much harder to advise them on what to do about it.” One of you just sighed to me on the phone, then said “it’s always been this way. No large company can escape the innovator’s dilemma.”
Well, maybe so, but wouldn’t it be fun to try? I’ve been in touch with various Googlers over the past few weeks, as I’m still working on a “What should the ads look like” piece around ChatGPT and AI-driven search (promise, it’ll be done soon). While folks at Google are polite and engaged, they’ve also given me the extended play version of “No comment” – stating it’s too early to declare the business model for conversational search. In short, they’re waiting for the market to reveal itself a bit more before making any public statements or declaring themselves all in on tech’s next big trend.
This question is pulsing through most of the conversations I’ve been having with tech and media industry folk these past few weeks. The company’s narrative has shifted dramatically in the wake of Microsoft’s partnership with OpenAI. Nearly everyone I’ve spoken with is convinced the company is in serious trouble – and Wall Street has validated those concerns by trimming $200 billion from the company’s market cap over the past two weeks.
Given the news around AI’s impact on the tech industry, search, and jobs in general, I thought it made sense to re-up a piece I wrote back in 2018, triggered at the time by the launch of Amazon Go (which, not surprisingly, did not exactly go as Amazon might have wished). I re-read it recently and thought it held up pretty well (and I’ve been on the road for over a week, so fresh pieces will have to wait for a few more days!).
Do generative AI innovations like OpenAI’s ChatGPT and Google’s LaMDA represent a new and foundational technology platform like Microsoft Windows, Apple iOS or the Internet? Or are they just fun and/or useful new products that millions will eventually use, like Google Docs or Instagram? I think the answer can and should be “both” – but to get there, the Valley is going to have to forgo the walled garden destination model it’s employed these past 15 or so years.
The question of OpenAI’s ultimate business model has dominated nearly every conversation I’ve had this week, whether it’s with reporters from the Economist and the Journal, senior executives at large-scale public companies, or CEOs of ad-tech and data startups. Everyone wants to know: What’s the impact of generative AI on the technology industry? Will OpenAI be the next Google or Apple? Who wins, and who will lose?
How long have I been staring at a blank screen, this accusing white box, struggling to compose the first sentence of a post I know will be difficult to write? About two minutes, actually, but that’s at least ten times longer than ChatGPT takes to compose a full page. And it’s those two minutes – and the several days I struggled with this post afterwards – that convince me that ChatGPT will not destroy writing. In fact, I think it may encourage more of us to write, and more still to consume the imperfect, raw, and resonant product of our efforts.
I’m a pretty fast writer, but I’m a deliberate and vicious editor – I’ll happily kill several paragraphs of my own text just minutes after I’ve composed them. I know that the best writing happens in the editing, and the most important part of composition is to simply get some decent clay on the wheel. ChatGPT seems to be really good at that clay part. But it’s in the second part – the editing – that the pot gets thrown*.
Watching the hype cycle build around OpenAI’s ChatGPT, I can’t help but wonder when the first New York Times or Atlantic story comes out calling the top – declaring the whole thing just another busted Silicon Valley fantasy, this year’s version of crypto or the metaverse. Anything tagged as “the talk of Davos” is destined for a ritual media takedown, after all. We’re already seeing the hype start to fade, with stories reframing ChatGPT as a “co-pilot” that helps everyone from musicians to coders to regular folk create better work.
But I think there’s far more to the story. There’s something about ChatGPT that feels like a seminal moment in the history of tech – the launch of the Mac in 1984, for example, or the launch of the browser one decade later. Is this a fundamental, platform-level innovation that could unleash a new era in digital?
What’s the hardest thing you could do as a tech-driven startup? I’ve been asked that question a few times over the years, and my immediate answer is always the same: Trying to beat Google in search. A few have tried – DuckDuckGo has built itself a sizable niche business, and there’s always Bing, though it’s stuck at less than ten percent of Google’s market (and Microsoft isn’t exactly a startup). But it’s damn hard to find venture money for a company whose mission is to disrupt the multi-hundred-billion-dollar search market – and for good reason. Google is just too damn well positioned, and if Microsoft can’t unseat them, how the hell could a small team of upstarts?
Today I’d like to ponder something Kevin Kelly – a fellow co-founding editor of Wired – said to me roughly 30 years ago. During one editorial conversation or another, Kevin said – and I’m paraphrasing here – “The most creative act a human can engage in is forming a good question.”
That idea has stuck with me ever since, and it has informed a lot of my career. I’m likely guilty of turning Kevin into a Yoda-like figure – he was a mentor to me in the early years of the digital revolution. But the idea rings true – and it lies at the heart of the debate around artificial intelligence and its purported impact on our commonly held beliefs around literacy.