I spend a lot of time engaged in the craft of writing – I’ve penned more than 1.5 million words on Searchblog alone. Writing anchors nearly all my projects, from teaching at universities to my board and investing work, not to mention the hundreds of pieces I’ve either authored or edited at places like P&G Signal and DOC. I write a few pages nearly every day in a longhand journal (I’ve filled nearly 30 of them over the past four decades), and I recently embarked on a long-form writing project that may (or may not) produce another book over the coming months.
So writing matters to me, and I’ll admit I’m uncomfortable with how generative AI is changing my chosen field. I recoil from the idea of AI-written articles, blog posts, or academic assignments. And I support the various efforts by authors, journalism institutions, and creative groups who are pushing back against what feels like wholesale theft of our work to train LLMs.
That said, I’m not precious about this craft, despite the fact that it’s been the foundation of my career. I know most people aren’t professional writers, and for them, AI has addressed a very real problem – I may be able to bang out a post like this in less than an hour, but grinding out 750 words of readable text can be an all-day chore for someone whose skills lie elsewhere. Plus, there’s a fair amount of writing that is pedestrian in both its purpose and its prose – press releases, short announcements, and summaries of meetings, for example. For those use cases, AI does a perfectly adequate job of first drafts – as long as a human reviews, edits, and fact-checks them, I have no problem with the idea of releasing that work into the world. In fact, over at DOC, we partnered with Anthropic to create AI summaries of many of our sessions from last year’s gathering.
Increasingly, writers, organizations, and even publications are working in concert with AI tools to produce finished works, and it’s in this space that I feel our language is failing us.
Journalism has a long-established practice of publishing pieces under “the byline” – a sacred concept in our profession. I remember my first byline in the Los Angeles Times, back in 1992. It was both a thrill and a heavy responsibility. That byline meant the Times and I were responsible for everything in the piece – its accuracy, its tone, and its effect on the world. Seeing it in print made it real, connecting my work directly to its potential impact. (OK, that first piece was about the police shooting a dog, but still.)
But what do we call a piece that was co-authored with AI? I’m not talking about using AI to help with research or sourcing, but rather about a piece that contains writing authored, in part or in whole, by an AI tool. We lack a phrase that contextualizes this kind of writing, so I’d like to suggest a neologism: The withline.
A withline is a formal acknowledgement that the associated work has been co-created with an AI tool. It might take the form of “By John Battelle with Claude AI,” for example. It should carry an understood context of responsibility and human accountability. In brief, a withline would:
- Signal that a work has been co-created using AI tools.
- Indicate that human intelligence has reviewed and validated the claims in the work.
- Connect, ideally, to a policy of AI usage that has been established and transparently communicated by the organization or author conjoined in the withline.
I don’t expect that the withline will become a journalistic standard anytime soon, but we do lack a term to express the evolving nature of creative work in an era dominated by generative AI. So I propose we consider “the withline” as a way to move forward with a practice that is already well underway.
And no, this piece was not created with AI ;-). But I did ask Claude to come up with a definition, which I posted as the art above.