As is his wont, last week Fred Wilson wrote a provocative post I’ve been thinking about for the past few days. Titled “Netscape and Microsoft Redux?”, Fred notes the parallels between the browser wars of the late 1990s and the present-day battle for dominance in the consumer AI market. And he asks a prescient question: What new, world-defining product might we be missing by focusing on AI chatbots?
In the early days of the Web, everyone thought the most important new product to emerge from the Internet was the browser. Netscape, a startup with just a few months of operating history, defined the market for those browsers in 1994, then dominated it for several years thereafter. But by the late 1990s, the lumbering incumbent Microsoft had stolen Netscape’s lead by leveraging distribution and pricing advantages inherent to its massive Windows monopoly.
Finance has always leveraged technology – at Wired in the early 1990s, we were fond of saying that technology’s twin engines of innovation were money and sex – but the most interesting story was always money. Care to understand the future of internet infrastructure? Bone up on how hedge funds optimize network latency. Want to peer into the future of online consumer services back when the Web was a glint in Marc Andreessen’s eye? Start with online banking.
The Oak Grove cemetery in Tisbury, Massachusetts encompasses roughly ten acres of rolling woodlands and narrow dirt roads. Its 1,800 or so headstones date back two centuries, making Oak Grove a relative newcomer as New England graveyards go. I’ve been visiting this sacred, spectral spot on the island of Martha’s Vineyard for nearly five decades. Half my family is buried there.
As generative AI reaches a fever pitch of investment, product releases, and hype, most of us have ignored a profound flaw as we march relentlessly toward The Next Big Thing. Our most dominant AI products and services (those from OpenAI, Google, and Microsoft, for example) are deployed in the cloud via a “client-server” architecture – “a computing model where resources, such as applications, data, and services, are provided by a central server, and clients request access to these resources from the server.”
Now, what’s wrong with that? Technically, nothing. A client-server approach isn’t controversial; in fact, it’s an efficient and productive approach for a company offering data-processing products and services. The client – that’s you and your device – provides input (a prompt, for example) which is relayed to the server. The server takes that input, processes it, and delivers an output back to the client.
Non-controversial, right? Well, sure, if the “server” in question is a neutral platform that’s only in the business of processing your data so you can use the services it offers. Banks, for example, use neutral client-server architectures to provide online financial services, as do most health care providers. The data you share with them isn’t used for anything other than the provision of services.
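The client-server round trip described above can be sketched in a few lines of code. This is a minimal illustration, not any real AI provider’s API: the endpoint, the `prompt`/`completion` field names, and the echo-style “processing” are all hypothetical stand-ins for whatever a real model server would do.

```python
# A minimal sketch of the client-server flow: the client sends input (a
# prompt), the server processes it and returns an output. All names here
# are illustrative assumptions, not a real service's API.
import http.server
import json
import threading
import urllib.request

class PromptHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # Server side: read the client's input (the prompt)...
        length = int(self.headers["Content-Length"])
        prompt = json.loads(self.rfile.read(length))["prompt"]
        # ...process it (a real service would run a model here)...
        body = json.dumps({"completion": f"You said: {prompt}"}).encode()
        # ...and deliver the output back to the client.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Start the "server" on an ephemeral local port, in a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), PromptHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: your device sends a prompt and waits for the server's answer.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/",
    data=json.dumps({"prompt": "hello"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())["completion"]
server.shutdown()
```

The point of the sketch: everything interesting happens on the server side, which is exactly why the question of what the server operator does with your data matters so much.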
I’m going to try to write something difficult. I don’t know if I’m going to pull it off, but that’s kind of the point. This is how writers improve: We tackle something we’re not sure we can do. Along the way, I am committing a minor sin in the world of writing – I am writing about writing.
But wait, don’t bail, here’s a topical tidbit to keep you engaged: I’m also going to write about AI, and who doesn’t want to hear more about that?! My prompt, as it were, is “Audience of One,” a post by Mario Gabriele, who writes the interesting and hyperbolic newsletter The Generalist. Gabriele’s optimistic prose focuses on venture, startups, tech, and tech culture. I find his work thought-provoking and sometimes infuriating. “Audience of One” falls into the latter category.
Yes, I have no patience for perfecting image prompts using AI.
Listen up, tech oligarchs; lend an ear, simpering brohanions. We’re doing this generative AI thing all wrong, and if you continue down your current path, your house of cards will fall, leaving all of us wanting, but most importantly, leaving you out of power. And given that you value power over all else, it strikes me it might be in your own self-interest to consider an alternate path.
Here’s the problem: you’ve managed to convince nearly all of us that sometime real soon, generative AI will deliver us powerful services that will automate nearly every difficult and/or deadly boring task we currently have to perform. From booking complex yet perfectly priced itineraries to delivering personalized health diagnoses that vastly outperform even the most cogent physician, your AI agents have us starstruck, bedazzled, and breath-baited.*
If you want to understand where the zeitgeist is headed in Silicon Valley, you have to study The Information, the clubby, well-sourced favorite read of Valley oligarchs. The publication made its reputation by commanding lofty subscription prices back when nearly all tech news was free; it now enjoys multiple revenue streams, including advertising, events, and a “pro” version for $750-$999 a year. I’ve been a subscriber (of the “regular” variety) for years, and I probably always will be.
That said, every so often The Information runs a story that is so clearly aligned with the interests of the plutocracy it begs to be called out. “Advertisers Retreat From Social Media Policing” is its latest entry in this category. The piece opens with a stupendous straw man: “For several years, a favorite tactic of progressives agitating against social media and conservative news outlets has been pressuring marketers to pull their ads.”
The original MusicPlasma interface. Author’s musical preferences not included…
No Longer Mine
When I write, I like to listen to music. Most of my first book was written to a series of CDs I purchased from Amazon and ripped to my Mac – early turn-of-the-century electronica, for the most part – Prodigy, Moby, Fatboy Slim and the like. But as I write these words, I’m listening to an unfamiliar playlist on Spotify called “Brain Food” – and while the general vibe is close to what I want, something is missing.
This got me thinking about my music collection – or, more accurately, the fact that I no longer have a music collection. I once considered myself pretty connected to a certain part of the scene – I’d buy 10 or 15 albums a month, and I’d spend hours each day consuming and considering new music, usually while working or writing. Digital technologies were actually pretty useful in this pursuit – when Spotify launched in 2008, I used it to curate playlists of the music I had purchased – it’s hard to believe, but back then, you could organize Spotify around your collection, tracks that lived on your computer, tracks that, for all intents and purposes, you owned. Spotify was like having a magic digital assistant that made my ownership that much more powerful.
The buildings are the same, but the information landscape has changed, dramatically.
Today I’m going to write about the college course booklet, an artifact of another time. I hope along the way we might learn something about digital technology, information design, and why we keep getting in our own way when it comes to applying the lessons of the past to the possibilities of the future. But to do that, we have to start with a story.
Forty years ago this summer I was a rising freshman at UC Berkeley. Like most 17- or 18-year-olds in the pre-digital era, I wasn’t particularly focused on my academic career, and I wasn’t much of a planner either. As befit the era, my parents, while Berkeley alums, were not the type to hover – it wasn’t their job to ensure I read through the registration materials the university had sent in the mail – that was my job. Those materials included a several-hundred-page university catalog laying out majors, required courses, and descriptions of nearly every class offered by each of the departments. But that was all background – what really mattered, I learned from word of mouth, was the course schedule, which was published as a roughly 100-page booklet a few weeks before classes started.
How long have I been staring at a blank screen, this accusing white box, struggling to compose the first sentence of a post I know will be difficult to write? About two minutes, actually, but that’s at least ten times longer than ChatGPT takes to compose a full page. And it’s those two minutes – and the several days I struggled with this post afterwards – that convince me that ChatGPT will not destroy writing. In fact, I think it may encourage more of us to write, and more still to consume the imperfect, raw, and resonant product of our efforts.
I’m a pretty fast writer, but I’m a deliberate and vicious editor – I’ll happily kill several paragraphs of my own text just minutes after I’ve composed them. I know that the best writing happens in the editing, and the most important part of composition is to simply get some decent clay on the wheel. ChatGPT seems to be really good at that clay part. But it’s in the second part – the editing – that the pot gets thrown**.
Everyone from educators to legislators seems to be asking how we can distinguish between writing done by AIs and writing done by actual humans. But if the age of the centaur is truly upon us, perhaps we don’t have to. Authorship is already a complicated process of bricolage and outright theft. I don’t see why adding a tool like ChatGPT should be anything but welcomed.
Some argue that ChatGPT already is writing like humans – which implies it will replace writing, instead of merely complementing it. Indeed, ChatGPT can string sentences together in often extremely useful or humorous ways. And sure, it will likely replace structured text like sports summaries or earnings reports. But I don’t think tools like ChatGPT will ever be able to write like Sam Kriss, or Zeynep Tufekci, or Anil Dash.
When I write, I have no idea how the work is going to end, much less what ideas or points I’ll make as I pursue its composition. For a reader, the beauty in a piece of writing is its wholeness. It’s a complete thing – it starts, it blathers on for some period of time, it ends. But for a writer, an essay is a process, a living thing. You compose, you reflect, you edit, reject, reshape, and repeat. Once it’s finished, the piece quickly settles into an afterlife, a fossilized artifact of a process now complete. The principal joy* of writing for the writer isn’t in admiring what you’ve made (though there’s a bit of that as well), it’s in its creation.
And that process of creation – the struggle, the chuckles, the bloodied revisioning; the sense that a piece is starting to come together, the constant editing – all of it works together to make something that is distinctly human. Intelligences like ChatGPT can parrot that output, but by definition they cannot actually create it.*** What they can do is aid in its creation, by providing a muse-like response to the questions and hypotheses that naturally arise while one struggles to write.
About halfway through this piece I had the notion of illustrating this concept by asking ChatGPT for help in this essay’s composition, but alas, the service is not currently available – it’s overwhelmed by demand. That’s a huge opportunity for OpenAI, the service’s owner, and I doubt ChatGPT is going to end up a fad like Clubhouse or 99 percent of crypto. I think it’s got legs, because all of us, whether we’re professionals or not, could use someone smart whispering in our ear as we compose. We may even find new kinds of writing through the relationships we cultivate with services like ChatGPT, just as we will find new ways of coding, making art, or making music.
There is something special about how we humans create works like essays or a piece of music. It’s uniquely a product of how we think, and no other species – including machines – thinks quite like we do. That thought process is infinitely plastic, and will certainly incorporate tools like ChatGPT, remaining distinctly human as it does. By extension, a piece wrought from a human intelligence has a particular effect on humans, one that can’t be recreated by any other intelligence, artificial or not. In short, we’re people, and we like stuff made by people. If ChatGPT can help us make more people-made stuff, well then, I say bring it on.
—
*Of course most writers would argue with my choice of the word “joy” – perhaps “ecstasy” is more appropriate, in the second Oxford sense: “an emotional or religious frenzy or trance-like state, originally one involving an experience of mystic self-transcendence.” If the writing’s going well, I certainly do lose myself inside of it. Time is suspended, flow is found.
** I mean “thrown” as in “shaping a pot on the wheel,” not “thrown” as in “one throws a pot against the wall in frustration.” I chose the word intentionally, considered it, and then decided to keep it. “Thrown” is a word choice that, if made by an algorithm, could prove its inhumanity – an error made by a computer. Yet when a person pushes through the obvious and into an odd timbre or slightly discordant usage, well, that can make for good writing.
***I’ve used that word “intelligences” intentionally. I’m reading James Bridle’s Ways of Being. Part of its core argument is that there are many kinds of intelligences, and more likely than not, machines already have their own. Bridle also argues – pretty persuasively – that there’s nothing particularly special about ours.