The Web We Want Vs. The Web We Have

The Warrant for the Town of Oak Bluffs, MA.

Are you frustrated with how the internet works?

Me too. Today I’m going to think out loud about why.

I’ve been writing for decades about what I’ve been calling “conversational media.” The basic idea was this: Our interactions with media would soon shift from a model of one-to-many (think broadcast) to a personalized, interactive model (think chatbots). Soon, I posited, the Internet would become “conversational” in nature – its core interface would shift from the awkwardness of point-and-click to the fluency of natural language.

This was – ahem – 20 years ago. I may have been a bit optimistic about when all this would go down.

Over the past few years, the rise of generative AI has shown us what it feels like to interact with conversational interfaces. You ask a question, you get a reasonably smart and helpful answer. You refine your prompt and magic happens. It feels effortless.

It works great on Claude.ai or ChatGPT, but when you go back to the rest of the web? Not so much.


I live in a small town, and every year we have a town meeting where residents debate and vote on a thick slate of proposals. Last night we approved roughly 40 items, but we left just as many unresolved. Anyone in town can raise objections or make a point – we spent nearly 20 minutes debating whether we should work with the next town over to coordinate dredging activities on our local waterways. These gatherings rarely end early – last night’s meeting was gaveled to a close around 11pm. The unresolved items were tabled to a second session to be held tonight.

These meetings aren’t exactly stimulating, but they are the backbone of local self-governance, and even for a small town, millions of taxpayer dollars are at stake at each meeting. A large portion of time is devoted to answering questions that could have easily been addressed if the voter had simply read the 120-page “warrant” describing each voting item. I mean, RTFM*, amiright?

But it’s not easy to access that information. For example, I was particularly interested in two of the proposals up for a vote – one had to do with short-term rentals, another with the use of eminent domain to reclaim blighted properties. The language around each of these items was buried in the warrant, which could be found on the town website as a downloadable PDF. To get smart on the issues, I had to navigate to the right place on the site, download the PDF, then scroll through pages of text until I found the right section. Frustrating!

Wouldn’t it be cool if I could ask the town website a simple question instead? I’d just prompt it “Show me the language relating to the short-term rental and eminent domain issues” and presto, there’s the information I wanted. I could then ask the AI to give me a bulleted list of pro and con arguments. Cool, right?!

Instead, I downloaded all 120 pages, tossed the PDF into Claude, and asked my personal AI to help me make sense of it. While that was certainly an upgrade, I doubt most of my town’s (mostly retired) residents have a Claude Max subscription, much less the instinct to use AI the way I do.
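For the curious (or an adventurous town IT department), that same “toss the PDF into Claude” workflow can be scripted. Here’s a minimal sketch using the Anthropic Python SDK and its PDF document support – the file name, the question, and the model string are placeholders I made up for illustration, not anything Oak Bluffs actually runs:

```python
# Rough sketch: ask Claude questions about a town warrant PDF.
# Assumes the Anthropic Python SDK and an ANTHROPIC_API_KEY in the environment.
import base64
import anthropic

client = anthropic.Anthropic()

# Load the warrant PDF and base64-encode it for the API (filename is a placeholder).
with open("oak_bluffs_warrant.pdf", "rb") as f:
    warrant_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    # Attach the PDF as a document content block.
                    "type": "document",
                    "source": {
                        "type": "base64",
                        "media_type": "application/pdf",
                        "data": warrant_b64,
                    },
                },
                {
                    "type": "text",
                    "text": (
                        "Find the articles covering short-term rentals and "
                        "eminent domain, quote the relevant language, and give "
                        "me a bulleted list of pro and con arguments for each."
                    ),
                },
            ],
        }
    ],
)

print(response.content[0].text)
```

Which is all well and good for people like me – but it’s exactly the kind of thing a typical resident will never do.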


This is all a long way of saying that today’s web is not built for AI. And while it would be awesome to bolt a chatbot interface on top of my town’s municipal website, it’s impractical to think that change is coming anytime soon. Here are a few reasons why:

  • The web is an ecosystem built largely on the presumption that human beings will navigate it through a latticework of links, modal dialog boxes, and search boxes. For the most part, AI agents cannot navigate these human interfaces, in no small part because …
  • The web was built to deny machines the ability to pretend they are humans. This is considered a feature, not a bug: Spammers, hackers, and other bad actors are experts at writing automated scripts that pretend to be human so they can either overwhelm a site or manipulate various outcomes (think of automated comment spammers on public comment sites, for example). And large sites like Amazon and Facebook have policies forbidding automated retrieval of data – as much to protect their bottom lines as to protect their customers.
  • AI is (currently) expensive. While it sounds wonderful to imagine an all-knowing AI sitting on top of oakbluffsma.gov, I’m pretty sure I don’t want to be arguing with my fellow townspeople about the new budget item allocating tens of thousands of dollars for Claude API calls. At some point we’ll have elegant, small-scale models that might do that work for a fraction of current costs, but we’re not there yet.
  • Most sites don’t have the technical chops or budget to make a transition to AI interfaces. Traditional sites are complicated enough. How are overstretched webmasters and IT staff supposed to implement an AI interface, particularly if they’re among the first to do it? Short answer: They won’t.
  • The web’s broader ecosystem is based on converting traffic into actions taken on site – and is therefore optimized to direct us toward some kind of commercial action. The web was not built to answer questions posed by automated AI agents trained to ignore honeypots and spammy SEO links. If humans are no longer the primary consumers of the commercial web, that web will collapse. Ecosystems work really hard to avoid that kind of collapse.

Regardless of these obstacles, I’ve no doubt that a new kind of web will begin to emerge, one built around the presumption of a conversational, AI-driven interface. And what might that web look and feel like? I’d love to hear your thoughts – I’ll be writing out loud about just that subject over the next few months.

In the meantime, I’ll be back at the town meeting tonight. Wish me luck.

*Read the Fucking Manual, or in this case, the Warrant.

You can follow whatever I’m doing next by signing up for my site newsletter here. Thanks for reading.
