Well that was something. Yesterday the Center for AI Safety, which didn’t exist last year, released a powerful 22-word statement that sent the world’s journalists into a predictable paroxysm of hand-wringing:
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
The tech press has breathlessly speculated that, freshly invigorated thanks to ChatGPT, Microsoft’s Bing might steal a major distribution partner from Google. First it was Samsung (wrong), then it was Apple (unlikely), and always there was Firefox, with its 200 million monthly users and its tumultuous relationship with its Googley paymaster.
Last week I was traveling – and being in four places in six days does not make for a good writing vibe. But today I’m back – and while the pace is picking up for the annual Signal conference I co-produce with P&G, I wanted to take a minute to reflect on last week’s news – no, not that CNN shitshow, but Google’s big I/O conference, where the company finally revealed its plans around search, AI, and a whole lot more.
Leading tech analyst Ben Thompson summarized how most of the punditocracy responded to Google I/O: “the ‘lethargic search monopoly’ has woken up.” He also noted something critical: “AI is in fact a sustaining technology for all of Big Tech, including Google.” Put another way, the bar has been reset and no one company is going to own a moat around AI – at least not yet. Over time, of course, moats can and will be built, just as they were with core technologies like the microprocessor, the Internet itself, and the mobile phone. But for now, it’s a race without clear winners.
Head to The Verge if you want a summary of what went down at I/O – beyond AI, Google doubled down on devices – positioning itself as a serious competitor to Apple (I’ve been a Google Pixel user for years, and all I want is for the two companies to figure out how to deliver a text…).
Once upon a time, when search was new, Google came along and put the whole darn Internet in RAM. This was an astonishing (and expensive) feat of engineering at the time – one that gave Google a significant competitive moat. Twenty years ago, very few companies had the know-how or the resources to keep an up-to-date copy of the entire web in expensive, super-fast silicon. Google’s ability to do so allowed it unprecedented flexibility and speed in its product, and that product won the search crown, building a trillion-dollar market cap along the way.
Since then, compute, storage, and engineering costs have declined in a kind of reverse version of Moore’s Law. Pretty much anyone with a bit of funding and some basic Internet crawling skills can stand up a web index – but there’s been no reason to do so. For 15 or so years, one of the biggest clichés in venture circles was “no one will ever fund another search engine.” (A second cliché? “No one’s ever said ‘Just Bing it.’”)
Microsoft today announced a cluster of upgrades to its Bing-ChatGPT product, including:
- Eliminating the Bing chat waitlist, which effectively throttled the product’s growth by adding steps to the consumer’s journey.
- Integrating more visual search results, which will enliven the consumer experience and potentially keep visitors engaged for longer.
- Adding chat history and persistence – until now a major point of differentiation between Bing chat and OpenAI’s ChatGPT, and, for me anyway, the main reason I didn’t use Bing.
- Adding more long-document summarization, another feature ChatGPT excels at.
- Adding a platform layer to Bing, so third-party developers can integrate in much the same manner as they can with ChatGPT’s plugins – which I’ve both praised and trashed in past posts (praised for their potential, trashed because the model reminds me of the app store, a walled-garden nightmare).
Overall, this news strikes me as Microsoft upping the ante not only on Google, which now has even more catching up to do, but also on Microsoft’s own partner OpenAI, which until now had a superior product. I’m on the road and not able to write as much as I’d like on this, but it’s worth noting. I’m sure the product managers in Mountain View aren’t getting much sleep these days – the pressure is mounting for Google to respond. And in OpenAI headquarters, the frustration has to be building as well – they cut that deal with Microsoft, and now have to live with its terms.
Last week I wrote a piece noting how my wife Michelle’s Google usage was down by nearly two-thirds, thanks to her discovery of ChatGPT. I noted that Michelle isn’t exactly an early adopter – but that’s not entirely true. Michelle is more of a harbinger – if an early tech product “fits” her, she’ll adopt it early and often, and it’s usually a winner once it goes mainstream. The early TiVo DVRs come to mind – and they remain a better product than anything that’s come since in the television world (another example of how entrenched business models kill innovation).
But few early versions of any new product get to “Michelle market fit” on first attempt. For it to happen with an AI chatbot – well before I developed the habit – is rarer still. I mean, I’m supposed to be the early adopter around here!
On Sunday The New York Times reported that Google is furiously working to incorporate conversational AI into its core search products – not exactly news, but there was a larger takeaway: Google has got to get some killer AI products out the door, and fast, or it risks losing its core users for good. And if my own family is any indication, the company is already imperiled. More on that below, but first, a bit more on the Times piece.
The article led with big news: Samsung may decamp from Google and partner with Microsoft’s Bing instead. This would be a major blow, both financially and optically – Samsung’s commitment to Android is a key reason Google’s mobile platform towers over Apple’s iOS in terms of worldwide market share.
Would you pay $200 a month for generative AI services? It may sound crazy, but I think it’s entirely possible, particularly if the tech and media industries don’t repeat the mistakes of the past.
Think back to the last time you decided to fork over a substantial monthly fee for a new technology or media service. For most of us, it was probably the recent shift to streaming services. If you use more than a few, that bill can add up to nearly $100 a month. But streaming is a (not particularly good) replacement for cable – it’s not a technological marvel that changes how we live, work, and play. To find a new service that rises to that level, we have to go back to the introduction of the smartphone – a device we were willing to spend hundreds of dollars to obtain and an average of $127 a month to keep.
Of all the structural problems “Web 2” has brought into the world – and there are too many to list – one of the most vexing is what I call the “meta-services” problem. Today’s commercial internet encourages businesses and services to create silos of our data – silos that cannot and will not connect to each other. Because of business model constraints (most big services are “free”; revenues come from advertising and/or data sales), it’s next to impossible for anyone – from an individual consumer to a Fortune 50 enterprise – to create lasting value across all those silos. Want to compare your Amazon purchase history to prices for the same goods at Walmart? Good luck! Want to compare the marketing performance of your million-dollar campaigns between Facebook and Netflix? LOL!
For the past 15 or so years, I’ve written about a new class of “meta-services” that would work across individual sites, apps, and platforms. Working on our behalf, these meta-services would collect, condition, protect, and share our information, allowing a new ecosystem of services and value to be unlocked. OpenAI’s recent announcement of plugins, along with its already robust APIs, has brought the meta-service fantasy tantalizingly close to reality. But it’s more likely that, just as with the “open internet,” the fantasy will remain just that. Internet business models have been built to collect short-term rent. Truly open systems rarely win over time – regardless of whether the company uses the word “open” in its name.
Google continues to be extremely cautious in its approach to generative AI, but it seems to have realized it has to at least mention the subject once in a while – and today’s limited release of Bard is one of those moments. The company is obsessively calling Bard “an experiment” – but it’s managed to orchestrate a slew of press outlets to simultaneously cover Bard’s launch today. Reading through the coverage, my initial response is … underwhelmed – and I think that’s what Google wanted.
From the almost stultifying blog post announcing Bard’s limited release to the sanitized examples offered to the press, this announcement has been calculated to make exactly zero waves. As I wrote earlier, Google seems terrified that Bard might upstage its core business in search.