How long have I been staring at a blank screen, this accusing white box, struggling to compose the first sentence of a post I know will be difficult to write? About two minutes, actually, but that’s at least ten times longer than ChatGPT takes to compose a full page. And it’s those two minutes – and the several days I struggled with this post afterwards – that convince me that ChatGPT will not destroy writing. In fact, I think it may encourage more of us to write, and more still to consume the imperfect, raw, and resonant product of our efforts.
I’m a pretty fast writer, but I’m a deliberate and vicious editor – I’ll happily kill several paragraphs of my own text just minutes after I’ve composed them. I know that the best writing happens in the editing, and the most important part of composition is to simply get some decent clay on the wheel. ChatGPT seems to be really good at that clay part. But it’s in the second part – the editing – that the pot gets thrown.
Watching the hype cycle build around OpenAI’s ChatGPT, I can’t help but wonder when the first New York Times or Atlantic story will come out calling the top – declaring the whole thing just another busted Silicon Valley fantasy, this year’s version of crypto or the metaverse. Anything tagged as “the talk of Davos” is destined for a ritual media takedown, after all. We’re already seeing the hype start to fade, with stories reframing ChatGPT as a “co-pilot” that helps everyone from musicians to coders to regular folk create better work.
But I think there’s far more to the story. There’s something about ChatGPT that feels like a seminal moment in the history of tech – the launch of the Mac in 1984, for example, or the launch of the browser one decade later. Is this a fundamental, platform-level innovation that could unleash a new era in digital?
What’s the hardest thing you could do as a tech-driven startup? I’ve been asked that question a few times over the years, and my immediate answer is always the same: Trying to beat Google in search. A few have tried – DuckDuckGo has built itself a sizable niche business, and there’s always Bing, though it’s stuck at less than ten percent of Google’s market (and Microsoft isn’t exactly a startup). But it’s damn hard to find venture money for a company whose mission is to disrupt the multi-hundred billion dollar search market – and for good reason. Google is just too damn well positioned, and if Microsoft can’t unseat them, how the hell could a small team of upstarts?
Today I’d like to ponder something Kevin Kelly – a fellow co-founding editor of Wired – said to me roughly 30 years ago. During one editorial conversation or another, Kevin said – and I’m paraphrasing here – “The most creative act a human can engage in is forming a good question.”
That idea has stuck with me ever since, and informed a lot of my career. I’m likely guilty of turning Kevin into a Yoda-like figure – he was a mentor to me in the early years of the digital revolution. But the idea rings true – and it lies at the heart of the debate around artificial intelligence and its purported impact on our commonly held beliefs around literacy.
Just last week I predicted that Google would leverage ChatGPT to create a conversational interface to its search business, and that Microsoft would do the same in the enterprise data market. I briefly considered that I might have gotten it exactly backwards – Google has a robust enterprise data business in its cloud business (known as GCP), and of course Microsoft has Bing. But I quickly dismissed that notion – figuring that each behemoth would play the GPT card toward their strengths.
While I may have been right about ChatGPT getting a business model this year, it looks like I could be wrong on the details. Here’s The Information with a scoop:
I’ve used the image above for many years, mainly because I love how surprised the guy looks as he gazes into the crystal ball. Or maybe he’s just sat on something unpleasant. In any case, it pretty much sums up my approach to this, my 20th edition of annual predictions. I sit down, I might have an adult beverage on hand, and I just write until I feel like I’m done.
While reviewing my ’22 predictions (I did pretty well!) I promised to do something new: One post per prediction, ten posts total. But as I began that promised work, I realized it would test the limits of even my most dedicated readers (I see you, kids). So instead I wrote three long-form posts, each with three or four predictions apiece. The first focused on AI, the second on advertising, and the third on markets, with a bonus call related to the ’24 election. Having now written all of them, I’m going to summarize them briefly in this “master post.” Grab your own favorite beverage, have a wonderful New Year, and read on!
Let’s start our 2023 predictions off with some thoughts on artificial intelligence. With ChatGPT, Silicon Valley seems to have gotten a bit of its mojo back. After two decades spent simmering the magic of Apple, Google, Amazon and Facebook into a sticky lucre of corporate profit, here was the kind of technological marvel the industry seemed to have forgotten how to make – a magical tour de force that surprised, mystified, and delighted millions.
Even better, ChatGPT didn’t come from any of those corporate titans – not directly, anyway. Instead it came from a non-profit artificial intelligence research laboratory called OpenAI. Founded in 2015 with a mission of furthering “responsible AI,” OpenAI is backed by some of the most celebrated names in Valley technology – LinkedIn’s Reid Hoffman, PayPal’s Peter Thiel, and Tesla’s Elon Musk among them. Now this was more like it!
As has been my practice for nearly two decades, I penned a post full of prognostications at the end of last year. As 2021 subsequently rolled by, I stashed away news items that might prove (or disprove) those predictions – knowing that this week, I’d take a look at how I did. How’d things turn out? Let’s roll the tape…
My first prediction: Disinformation becomes the most important story of the year. At the time I wrote those words, Trump’s Big Lie was only two months old, and January 6th was just another day on the calendar. A year later, that Big Lie has spawned countless others, culminating in one of the most damaging shifts in our nation’s politics since the Civil War. The Republican party is now fully captured by bullshit, and countless local, state, and national politicians are busy undermining democracy thanks to the Big Lie’s power. A significant percentage of the US population has become unmoored from truth – and an equally significant group of us have simply thrown our hands up about it. Trust is at an all-time low. This Barton Gellman piece in The Atlantic served as a wake-up call late in the year – and its conclusions are terrifying: “We face a serious risk that American democracy as we know it will come to an end in 2024,” Gellman quotes an observer stating. “But urgent action is not happening.” I’m not happy about getting this one right, but as far as I’m concerned, this is still the most important story of the year – and the most terrifying.
Never in my five-plus decades has a year been so eagerly anticipated, which makes this business of prediction particularly daunting. I’m generally inclined to be optimistic, but rose-colored glasses stretch time – good things always take longer to emerge than any of us would wish. Over 18 years of doing this I’ve learned that it’s best not to predict what I wish would happen; instead, it’s wise to go with what feels most likely in the worlds I find fascinating (for me, that’s media, technology, and business, with a dash of politics given my last two years at The Recount). As I do each year, I avoid reading other folks’ year-end predictions (though I plan on getting to them as soon as I hit publish!). Instead, I just sit down at my desk, and in one rather long session, I think out loud and see where things land.
Google’s (and now Alphabet’s) CEO opines in the FT (sub required) on why AI needs to be regulated, joining the chorus of tech leaders who have taken the apparent high road when it comes to regulation, even as governments around the world have shown next to no ability to actually regulate anything (well, I guess the Chinese have certainly regulated tech…in a not so great way). Astute readers will note that an op-ed in a paywalled publication, on a holiday no less, is not exactly placed to go viral. However, look a bit deeper, and you’ll realize that the Financial Times is very well read by Wall St., number one, and number two, it ain’t a holiday in Europe, where the most powerful people on the planet are gathering for Davos this week. Indeed.
While most of the op-ed is pretty weak sauce, a predictable call for governments to “work together” to “harness this technology for good,” I found this quote the most interesting: “Companies such as ours cannot just build promising new technology and let market forces decide how it will be used.” I wish Google, Facebook, Amazon and Apple had that point of view before they built the AI-driven system we now all live with known as surveillance capitalism.