Once upon a time when search was new, Google came along and put the whole darn Internet in RAM. This was an astonishing (and expensive) feat of engineering at the time – one that gave Google a significant competitive moat. Twenty years ago, very few companies had the know-how or the resources to keep an up-to-date copy of the entire web in expensive, super-fast silicon. Google’s ability to do so allowed it unprecedented flexibility and speed in its product, and that product won the search crown, building a trillion-dollar market cap along the way.
Since then, compute, storage, and engineering costs have declined in a kind of reverse version of Moore’s Law. Pretty much anyone with a bit of funding and some basic Internet crawling skills can stand up a web index – but there’s been no reason to do so. For 15 or so years one of the biggest clichés in venture circles was “no one will ever fund another search engine.” (A second cliché? “No one’s ever said ‘Just Bing it.’”)
Microsoft today announced a cluster of upgrades to its Bing-ChatGPT product, including:
Eliminating the Bing chat waitlist, which effectively throttled the product’s growth by adding steps to a consumer’s journey.
Integrating more visual search results, which will enliven the consumer experience and potentially engage visitors for longer.
Adding chat history and persistence, a major point of differentiation between Bing chat and OpenAI’s ChatGPT – the lack of which was, for me anyway, the main reason I didn’t use Bing.
Adding more long-document summarization, another feature at which ChatGPT excels.
Adding a platform layer to Bing, so third-party developers can integrate in much the same manner as they can with ChatGPT’s plugins – which I’ve both praised and trashed in past posts (praised because of their potential, trashed because the model reminds me of the app store, a walled-garden nightmare).
Overall, this news strikes me as Microsoft upping the ante not only on Google, which now has even more catching up to do, but also on Microsoft’s own partner OpenAI, which until now had a superior product. I’m on the road and not able to write as much as I’d like on this, but it’s worth noting. I’m sure the product managers in Mountain View aren’t getting much sleep these days – the pressure is mounting for Google to respond. And in OpenAI headquarters, the frustration has to be building as well – they cut that deal with Microsoft, and now have to live with its terms.
Last week I wrote a piece noting how my wife Michelle’s Google usage was down by nearly two thirds, thanks to her discovery of ChatGPT. I noted that Michelle isn’t exactly an early adopter – but that’s not entirely true. Michelle is more of a harbinger – if an early tech product “fits” her, she’ll adopt it early and often – and it’s usually a winner once it goes mainstream. The early TiVo DVRs come to mind – and they remain a better product than anything that’s come since in the television world (another example of how entrenched business models kill innovation).
But few early versions of any new product get to “Michelle market fit” on first attempt. For it to happen with an AI chatbot – well before I developed the habit – is rarer still. I mean, I’m supposed to be the early adopter around here!
On Sunday The New York Times reported that Google is furiously working to incorporate conversational AI into its core search products – not exactly news, but there was a larger takeaway: Google has got to get some killer AI products out the door, and fast, or it risks losing its core users for good. And if my own family is any indication, the company is already imperiled. More on that below, but first, a bit more on the Times piece.
The article led with big news: Samsung may decamp from Google and partner with Microsoft’s Bing instead. This would be a major blow both financially and optically – Samsung’s commitment to Android is a key reason Google’s mobile platform towers over Apple’s iOS in terms of worldwide market share.
Would you pay $200 a month for generative AI services? It may sound crazy, but I think it’s entirely possible, particularly if the tech and media industries don’t repeat the mistakes of the past.
Think back to the last time you decided to fork over a substantial monthly fee for a new technology or media service. For most of us, it was probably the recent shift to streaming services. If you use more than a few, that bill can add up to nearly $100 a month. But streaming is a (not particularly good) replacement for cable – it’s not a technological marvel that changes how we live, work, and play. To find a new service that rises to that level, we have to go back to the introduction of the smartphone – a device we were willing to spend hundreds of dollars to obtain and an average of $127 a month to keep.
Of all the structural problems “Web 2” has brought into the world – and there are too many to list – one of the most vexing is what I call the “meta-services” problem. Today’s commercial internet encourages businesses and services to create silos of our data – silos that cannot and will not connect to each other. Because of business model constraints (most big services are “free”; revenues come from advertising and/or data sales), it’s next to impossible for anyone – from an individual consumer to a Fortune 50 enterprise – to create lasting value across all those silos. Want to compare your Amazon purchase history to prices for the same goods at Walmart? Good luck! Want to compare the marketing performance of your million-dollar campaigns between Facebook and Netflix? LOL!
For the past 15 or so years, I’ve written about a new class of “meta-services” that would work across individual sites, apps, and platforms. Working on our behalf, these meta-services would collect, condition, protect, and share our information, allowing a new ecosystem of services and value to be unlocked. OpenAI’s recent announcement of plugins, along with their already robust APIs, has brought the meta-service fantasy tantalizingly close to reality. But it’s more likely that, just as with the “open internet,” the fantasy will remain just that. Internet business models have been built to collect short term rent. Truly open systems rarely win over time – regardless of whether the company uses the word “open” in its name.
Google continues to be extremely cautious in its approach to generative AI, but it seems to have realized it has to at least mention the subject once in a while – and today’s release of Bard, albeit in limited fashion, is one of those moments. The company is obsessively calling Bard “an experiment” – but it’s managed to orchestrate a slew of press outlets to simultaneously cover Bard’s launch today. Reading through the coverage, my initial response is … underwhelmed – and I think that’s what Google wanted.
From the almost stultifying blog post announcing Bard’s limited release to the sanitized examples offered to the press, this announcement has been calculated to make exactly zero waves. As I wrote earlier, Google seems terrified that Bard might upstage its core business in search.
Once again, Google and Microsoft are battling for the AI spotlight – this time with news around their offerings for developers and the enterprise*. These are decidedly less sexy markets – you won’t find breathless reports about the death of Google search this time around – but they’re far more consequential, given their potential reach across the entire technology ecosystem.
Highlighting that consequence is Casey Newton’s recent scoop detailing layoffs impacting Microsoft’s “entire ethics and society team within the artificial intelligence organization.” This team was responsible for thinking independently about how Microsoft’s use of AI might create unintended negative consequences in the world. While the company continues to tout its investment in responsible AI** (as does every firm looking to make a profit in the field), Casey’s reporting raises serious questions, particularly given the Valley’s history of ignoring inconvenient truths.
I’ve written a long-ish post attempting to answer that question over at P&G’s Signal360 publication; please head there (and sign up for their newsletter!) if you’d like to read the whole thing. Below is a teaser for those of you who aren’t sure you want to click the link (so few of us do these days!).
Last week, while working on a post about what the ads might look like inside chat-based search, I got a surprising note from the communications team at Google. I had emailed them asking for comment on ads inside Bard, which Google had announced earlier in the month. To be honest, I was expecting the polite “no comment” I ultimately did receive, but I also got this clarification:
[We] wouldn’t have anything additional to share from the Search POV, as Bard is a standalone AI interface and doesn’t sit within Search.