Today I’d like to ponder something Kevin Kelly – a fellow co-founding editor of Wired – said to me roughly 30 years ago. During one editorial conversation or another, Kevin said – and I’m paraphrasing here – “The most creative act a human can engage in is forming a good question.”
That idea has stuck with me ever since, and informed a lot of my career. I’m likely guilty of turning Kevin into a Yoda-like figure – he was a mentor to me in the early years of the digital revolution. But the idea rings true – and it lies at the heart of the debate around artificial intelligence and its purported impact on our commonly held beliefs around literacy.
I’ve spent a lot of the last few decades as an interlocutor on stage or as a reporter on the ground, and I find that preparing for interviews requires not just a ton of research, but a rather formal process of interrogating the facts prior to any actual dialog. It starts with naive, even ignorant queries, and each response yields fresh questions, each of which becomes more subtle, specific, and pointed. The question is the tool: it can be wielded like a spade in sand, a pickaxe against stone, a paintbrush, a hex key, a hammer, a pen. It may well be the most human expression we have – our core differentiator from the stochastic parrots we can’t help but create.
All of this came rushing back to me when I read Jeff Jarvis’ post on the impact of ChatGPT on literacy. In “Writing and Exclusion,” penned before New York City’s school district banned ChatGPT, Jarvis writes: “It occurs to me that we will probably soon be teaching the skill of prompt writing: how to get what you want out of a machine.” Indeed. I’d argue we’ve already been in dialog with a semi-intelligent machine for decades – ever since the dawn of search, and certainly since the rise of Google, where every interaction is considered a “query” and every response a “result.”
Back in 2005 I suggested that our schools start teaching what I then called “search literacy” – formalized coursework to help kids understand how to ask intelligent questions of what was at the time a novel technology:
In an age where the knowledge of humankind is increasingly at our fingertips through the services of Internet search, we must teach our children critical thinking. One can never have all the answers, but if prepared, one can always ask the right question, and from that creative act, learn to find his or her own answer.
Instead, we have leaders that believe that questions have one answer, and they already know what it is. Their mission, then, is to evangelize that answer. That, to me, is a dangerous course. Reversing it by teaching our children to learn, rather than to answer, seems to me to be a noble cause.
I’m not sure any academic institution ever took me up on that call (preoccupied, as I recall, by the Bush administration’s fixation on test scores), but Jarvis has issued an updated version of it:
…writing a prompt for the machine — being able to exactly and clearly communicate one’s desires for the text, image, or code to be produced — is itself a new way to teach self-expression.
In an age of DALL-E, ChatGPT, and large language models augmenting and/or becoming our lawyers, our lobbyists, and our programmers, perhaps it’s time to once again demand our schools teach our children how to ask interesting questions. That’s something I doubt AI will ever get right.
PS – I wanted to ask ChatGPT the question in this post’s title, but it’s clear the service is overwhelmed at the moment….