This essay from Anthropic CEO Dario Amodei is… a lot to parse. But this passage alone makes me take it seriously:
“It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves…the governance of AI companies deserves a lot of scrutiny.”
So much to say about this. But read the essay. Then we’ll talk.
Every so often I am asked to participate in a survey fielded by Elon University’s Center for Imagining the Digital Future. As you might expect, this year’s survey focuses on the impact of AI, and includes this prompt:
If you do think it is likely that AI systems will begin to play a much more significant role in shaping our decisions, work and daily lives: How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?
It’s rare that a survey asks its respondents to actually write something cogent and long form, so I figured I’d publish my response here. I’d be curious to hear your thoughts! If you’d like to participate, the link to the survey is here.
Google launched as a free public beta in the Fall of 1998. It was a revelation – a 10X improvement on Internet navigation and research. But from its launch forward, Google’s founders were hounded with questions as to how their company planned on actually making money. John Doerr, one of Google’s earliest backers, famously answered that question by citing Google’s extraordinary growth: With all that traffic, he said, we’ll figure it out.
Google’s founders were famously suspicious of advertising – in their white paper explaining Google’s PageRank technology, Larry Page and Sergey Brin argued that advertising-funded search engines would be “inherently biased towards the advertisers and away from the needs of consumers.”
It took me two weeks, 6,000 words and nine posts, but I can finally round up my predictions for 2026 in one place. Here’s the complete list in one handy, blissfully shortened post. Thanks for reading, and once more (and for good), I wish you a happy, healthy New Year.
Cecco del Caravaggio, The Conjurer (The Musician), c. 1600–1620
The modern English verb ‘to conjure’ is derived from the Latin conjurare, meaning ‘to band together by an oath, to conspire.’ Its roots con (‘with’) and jur (‘legal right or authority, law’) echo with questions central to our present-day struggle with technology: Who do we trust to determine authority? Why do we believe in them?
Conjuring also evokes magic, sorcery, and wonder, essential elements of the tech industry mythos. My earliest pieces on the impact of generative AI leaned on the metaphor of magical “genies” doing our bidding in a relationship bound by loyalty and trust. Do those genies work for us, or are they the product of conjurers beyond our control? Do they demand faith, or instill it?
If only Nano Banana could spell, AI would be a thing.
I’m not even finished with my predictions this year, and already one of them is coming true. In The Year Tech Gets Even Bigger (Predictions 2026, #7), I wrote:
Google in particular will be building a Death Star of AI distribution featuring all the same players from its days of search monopoly: Apple, Samsung, and other Android partners.
I concluded my post “Magic and Mayhem” with a bit of a tease about the impact of AI on our society:
There will be lots of magic this year. But there will also be plenty of carnage as previously unbreachable moats start to crumble, not only in business, but also in society at large. For more on that, stay tuned for prediction #2.
Do you remember the last time you felt the magic? When you encountered something truly novel, something that was both surprising and at the same time deeply familiar, because you had imagined such a thing, but until that very moment, believed it impossible?
I’ve had only a handful of such moments in my long relationship with digital technology. The first was in 1981, when I programmed a game of tic-tac-toe on an underpowered IBM PC. I compiled the crude lines of code I’d been assigned to write, issued the command “RUN” at the C: prompt, and damned if the thing didn’t actually work.
Yesterday The Information scooped my well-laid plans for today’s health and AI-related predictions. If you’ve been following along this past week, you know I decided to write one prediction post a day for the first working week of the year. Today marks #5, which predicts that health will become a central player in society’s debate around AI, and #4, which predicts OpenEvidence will be acquired. I knew that OpenAI was working on health-related product offerings – the company said as much when it hired Fidji Simo from Instacart. But I didn’t know OpenAI would announce its health product so early in the year. Oh, and by the way, Google is expected to quickly do the same.
That said, I think there’s a lot more room to run in this story. OpenAI’s announcement is just the prelude. Health offers the perfect test case of just about every crucial limitation – and every massive opportunity – that AI represents in society today.
Ten years ago a new and promising technology burst into our homes – the smart speaker. Like many tech-forward families, our household went all in. We got two Alexa speakers and two Google Homes, plugged them in, and they became fixtures in our kitchen and bedrooms for years.
Problem is, we kind of hate them now. At first they were cool – it was novel to talk to a device and have it actually work, at least for simple tasks like “what’s the weather today” or “play Vampire Weekend.” But we quickly grew disaffected with our new purchases, because more often than not, they failed when presented with even moderately complicated queries like “what time is the Giants game tonight” or “what’s on my grocery list.” In short, the first generation of smart home speakers was limited by a rigid approach to “intelligence” that didn’t scale. Only one sad, bedraggled Google Home remains in service in our kitchen, serving as a glorified clock radio (that’s it in the picture above). And it’s not doing Google any favors in the branding department, because whenever we ask it anything even slightly complicated, it fails, earning a string of expletives in the process*.