Is OpenAI Today’s Netscape? Or Is It AOL?

As is his wont, last week Fred Wilson wrote a provocative post I’ve been thinking about for the past few days. Titled “Netscape and Microsoft Redux?”, Fred notes the parallels between the browser wars of the late 1990s and the present-day battle for dominance in the consumer AI market. And he asks a prescient question: What new, world-defining product might we be missing by focusing on AI chatbots?

In the early days of the Web, everyone thought the most important new product to emerge from the Internet was the browser. Netscape, a startup with just a few months of operating history, defined the market for those browsers in 1994, then dominated it for several years thereafter. But by the late 1990s, the lumbering incumbent Microsoft had stolen Netscape’s lead by leveraging distribution and pricing advantages inherent to its massive Windows monopoly.

But here’s the rub, as Fred points out: “Ironically, that battle for Internet dominance missed that the most important piece of software was the search engine, not the browser. And so the winner ended up being an entirely different company – Google.”

Fred notes that today’s version of the browser wars is playing out in chatbots, with OpenAI playing the role of the upstart (Netscape), and Google the incumbent (Microsoft). Sure, OpenAI has the lead today, but Google has woken up, and is using its dominance in search, infrastructure, and browsing to take share from its upstart competitor.

But if that metaphor holds, Fred wonders, are we once again missing “the most important new piece of software,” just as we did around search in the late ’90s? And if so, what is it?

I keep turning these questions over in my head, and it feels like an answer is tantalizingly close, but still out of reach. So I’m doing what I always do when faced with these kinds of puzzles: I’m thinking out loud through writing.

Let’s start by identifying what made search the breakout winner of the early Internet era. By the late 1990s, the dominant tool for accessing the Internet was the browser. Those browsers let you surf an endless wave of sites, but they didn’t help you find the sites that mattered to you.

The early Web had a navigation problem, and its first solution was the directory – Yahoo!, the first directory built for everyday consumers, became the most visited site on the Web. But directories were soon swamped by waves of new sites coming online each month.

Google solved the Web’s navigation problem by continually crawling and indexing every site on the Internet, then delivering just the right answer based on a novel signal that had been overlooked by everyone: The link.

Google’s PageRank algorithm refined links – human-built connections between pages on the web – into the most successful business in the history of the Internet, changing consumer behavior in the process. By the early 2000s, consumers began using Google as their first stop on the Web. Google became a verb, search became a habit, and portals like Yahoo! languished.
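The core idea behind PageRank is easy to sketch: treat each link as a vote, and score a page by the odds that a “random surfer” – someone who mostly follows links, occasionally jumping to a random page – ends up there. Here is a toy version of that computation; the graph and the damping factor are illustrative, not Google’s actual implementation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal scores
    for _ in range(iterations):
        # Everyone keeps a small "random jump" share of the score...
        new_rank = {p: (1.0 - damping) / n for p in pages}
        # ...and each page passes the rest of its score along its links.
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A toy web: most pages point at "hub", so it earns the highest score.
web = {
    "hub": ["a"],
    "a": ["hub"],
    "b": ["hub"],
    "c": ["hub", "a"],
}
scores = pagerank(web)
```

The point of the exercise: no page declares its own importance. The scores emerge entirely from the link structure other people built – exactly the freely given signal the surrounding paragraphs describe.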

Which leads me to wonder: If consumer AI really is in its “early 1990s” phase, what’s the equivalent of the navigation problem we all encountered prior to the emergence of Google search? And is there a novel signal – a new class of data emerging from our use of AI that corresponds to the link?

Exploration of these questions is complicated by how the Web works today. Twenty-five years ago the Web was essentially a massive public commons. The vast majority of sites were open and free, and very few sites opted out of Google’s approach to crawling the Web. When people made new pages and links – essentially new data for Google to ingest – Google simply indexed that data, then used PageRank to figure out a way to make sense of it all. It didn’t have to ask permission – people gave it freely as a matter of course.*

Today’s Internet is markedly different. Most sites – in particular large platforms like Amazon, OpenAI, TikTok or Meta’s Instagram – operate as walled gardens that refuse to share data. And while people are still building websites, the majority of valuable data is locked behind walls of personalization and corporate terms of service. That means there’s no equivalent to the link in today’s AI-driven Internet – no novel public resource waiting to be exploited for a breakthrough service, the way the link was exploited by search back in the day.

While we’ve not found an obvious analog to the link, perhaps I’ve been putting the cart before the horse. Let’s think for a spell about the problem. By the late 1990s, the problem of navigation on the Internet was widely understood. Is there a similar problem now for users of AI chat services?

The first thing that comes to mind is this: You can’t get anything done with services like ChatGPT or Gemini. You can’t ask them to take action on your behalf. Sure, you can ask them to write you an essay, act as a friend or therapist (or more), summarize a white paper, etc., etc., but as soon as you want to do something, chatbots grind to a halt. Consumer AI has a “getting shit done” problem – these services exist in rarefied silos, incapable of anything that requires engagement beyond the confines of their chatbot interfaces.

Certainly the tech industry knows about this problem – and it has devised a solution: Agents. The next wave of AI innovation centers on “the agentic web,” with personalized agents that will do our bidding in every imaginable way. Every major AI company has announced agentic products, but unfortunately, they don’t work, because the ecosystem in which they operate is hostile to their success. Want an agent that compares prices across commerce sites like Amazon or Walmart, then makes a purchase? Sorry, “user agents” are blocked by Amazon’s terms of service. In fact, nearly all commercial Internet services block non-human user agents from engaging with their sites. It’s not hard to understand why: Non-human agents are, well, not human, and most sites depend on advertising revenue that’s targeted to humans, after all. Plus, user agents threaten to undermine the information asymmetry that underpins most of capitalism these days – once you’ve tasted the profits driven by dynamic pricing, it’s hard to go back.

What we have here is an architectural problem: The Internet as currently built simply cannot support an agentic future, no matter how many well-manicured hands might wave at it. Realizing that future will require a fundamental redesign of the Internet – a process as radical as the invention of the Web itself.

This leads me to an unexpected conclusion when it comes to pondering Fred’s timeline comparing the late 1990s to now. Perhaps, in fact, he’s off by a decade. Could it be that consumer AI is comparable not to the early Web, but rather to the era of nearly forgotten online services that preceded the early Web in the late 1980s? When the history of consumer AI is written, might it treat OpenAI, Claude and Gemini as the equivalent of pre-Web services like CompuServe, AOL, and MSN? These fascinating but frustrating services attempted to build online worlds, but they were built on brittle architectures that couldn’t connect easily and reliably to the “rest of the world.” The original Web offered a better model. Maybe it’s time to abandon the Web as we now know it, and dream up something entirely new once again.

*Plus, there was an explicit deal that quickly developed: Letting Google crawl your site meant inclusion in its index, resulting in visitors and potential business opportunities via advertising and/or commerce. 

You can follow whatever I’m doing next by signing up for my site newsletter here. Thanks for reading.
