(This is a preview of a piece I’m working on for Signal360, to be published next week.)
“The US litigates, the EU legislates.” That’s what one confidential source told me when I asked about the Digital Services Act and the Digital Markets Act, the European Union’s twin set of Internet regulations coming into force this year. And indeed, even as the United States government continues an endless parade of lawsuits aimed at big tech, the EU has legislated its way to the front of the line when it comes to impacting how the largest and most powerful companies in technology do business. It may be tempting to dismiss both the DSA and the DMA as limited to Europe, and impacting only Big Tech, but that would be a mistake. It’s still very early – much of the laws’ impact has yet to play out – but there’s no doubt the new legislation will drive deep changes to markets around the world. And even if you aren’t a digital platform, your own business practices may well be in for meaningful change.
I’ve been following the story of generative AI a bit too obsessively over the past nine months, and while the story’s cooled a bit, I don’t think it’s any less important. If you’re like me, you’ll want to check out MIT Tech Review’s interview with Mustafa Suleyman, founder and CEO of Inflection AI (makers of the Pi chatbot). (Suleyman previously co-founded DeepMind, which Google purchased for life-changing money back in 2014.)
Inflection is among a platoon of companies chasing the consumer AI pot of gold known as conversational agents – services like ChatGPT, Google’s Bard, Microsoft’s Bing Chat, Anthropic’s Claude, and so on. Tens of billions have been poured into these upstarts in the past 18 months, and while it’s been less than a year since ChatGPT launched, the mania over genAI’s potential impact has yet to abate. The conversation seems to have moved from “this is going to change everything” to “how should we regulate it” in record time, but what I’ve found frustrating is how little attention has been paid to the fundamental, if perhaps a bit less exciting, question of what form these generative AI agents might take in our lives. Who will they work for – their corporate owners, or…us? Who controls the data they interact with – the consumer, or, as has been the case over the past 20 years, the corporate entity?
Every so often I get an idea for a new website or service. I imagine you do as well. Thinking about new ideas is exciting – all that promise and potential. Some of my favorite conversations open with “Wouldn’t it be cool if….”
Most of my ideas start as digital services that take advantage of the internet’s ubiquity. It’s rare I imagine something bounded in real space – a new restaurant or a retail store. I’m an internet guy, and even after decades of enshittification, I still think the internet is less than one percent developed. But a recent thought experiment made me question that assumption. As I worked through a recent “wouldn’t it be cool” moment, I realized just how moribund the internet ecosystem has become, and how deadening it is toward spontaneous experimentation.
Those of you who’ve been reading for a while may have noticed a break in my regular posts – it’s August, and that means vacation. I’ll be back at it after Labor Day, but an interesting story from The Information today is worth a brief note.
Titled “How Google Is Planning to Beat OpenAI,” the piece details the progress of Google’s Gemini project, formed four months ago when the company merged its UK-based DeepMind unit with its Google Brain research group. Both groups were working on sophisticated AI projects, including LLMs, but with unique cultures, leadership, and code bases, they had little else in common. Alphabet CEO Sundar Pichai combined the two in an effort to speed his company’s time to market in the face of stiff competition from OpenAI and Microsoft.
Today I’m going to write about the college course booklet, an artifact of another time. I hope along the way we might learn something about digital technology, information design, and why we keep getting in our own way when it comes to applying the lessons of the past to the possibilities of the future. But to do that, we have to start with a story.
Forty years ago this summer I was a rising freshman at UC Berkeley. Like most 17- or 18-year-olds in the pre-digital era, I wasn’t particularly focused on my academic career, and I wasn’t much of a planner either. As befit the era, my parents, while Berkeley alums, were not the type to hover – it wasn’t their job to ensure I read through the registration materials the university had sent in the mail – that was my job. Those materials included a several-hundred-page university catalog laying out majors, required courses, and descriptions of nearly every class offered by each of the departments. But that was all background – what really mattered, I learned from word of mouth, was the course schedule, which was published as a roughly 100-page booklet a few weeks before classes started.
Threads is a week old today, and in those short seven days, the service has lapped generative AI as the favorite tech story of the mainstream press. And why not? Threads has managed to scale past 100 million users in just five days — far faster than ChatGPT, which broke TikTok’s record just a few months ago. That’s certainly news — and news is what drives the press, after all.
Threads has re-established Meta as a hero in tech’s endless narrative of good and evil — I can’t count the number of posts I’ve seen from influential public figures joking that, thanks to Threads, they actually like Mark Zuckerberg again. And Meta can certainly relish this win — the company has been the scapegoat for the entire tech industry for the better part of a decade.
But were I an executive at Meta responsible for Threads, I’d not be sleeping that well right about now. As they well know, the relationship between the tech industry and the press can shift in an instant. Glowing stories about breaking app download records can just as quickly become hit pieces about how Meta has leveraged its monopoly position in social media to vanquish yet another market, killing free speech and “real news” along the way. So far that story has been confined to the fringes of Elon’s bitter troll army over on whatever remains of Twitter these days, but should Threads lap Twitter as the largest app focused on creating a “public square” — whatever that means — the worm will quickly turn.
Meta has a tiger by the tail here, and so far, they’ve been working hard to tamp down expectations. Both Zuckerberg and Instagram CEO Adam Mosseri have been active on Threads, posting daily with both practiced humility (“gosh this thing is succeeding well beyond our expectations,” “we’re just at the starting line,” “we know we’re over our skis”) and reminders about how Threads isn’t like Twitter. Mosseri, for example, has downplayed the role of news — Twitter’s main differentiation and its endlessly maddening Achilles heel; Zuckerberg’s first Thread defined his new service as “an open and friendly public space” — prompting Musk to fire back that he’d rather be “attacked by strangers on Twitter” than live in the “hide the pain” world of Instagram.
But The News — with all of its complications — is coming for Threads. I left Twitter more than six months ago, and while I sometimes missed feeling connected to the real-time neural net the app had become for me, I almost instantly felt better about both myself and the world. Living on Twitter means navigating an unceasing firehose of toxicity, and Musk’s interventions only worsened the poisonous atmosphere of the place. I joined Threads a half hour after it launched, and indeed, it was a giddy place, its initial users basking in the app’s surprising lack of toxicity.
Other journalists have noticed the same thing. For now, the narrative around Threads centers on its extraordinary growth, but a close second is how “nice” the place feels compared to Twitter. Meta executives would like to keep it that way — combining “what Instagram does best” with “a friendly place for public conversation,” as Zuck put it in his first post.
To that fantasy, I say good luck to you, Mr. Zuckerberg. Keeping Threads “nice” means controlling the conversation in ways that are sure to antagonize just about everyone. No company — not Facebook, not Instagram, not Reddit, and certainly not Twitter — has figured out content moderation at scale. If, as Zuckerberg claimed, the goal with Threads is to create a “town square with more than 1 billion people,” the center of that square will have to contain news. And news, I can tell you from very personal experience, is the front door to a household full of humans screaming at each other.
“Politics and hard news are inevitably going to show up on Threads,” Mosseri told the Hard Fork podcast last week, “But we’re not going to do anything to encourage those verticals.”
I’ll have more to say about that sentiment in another post, but for now, I’ll leave it at this: When Threads hits 300 million active users — roughly the size of Twitter — the love affair between the press and Threads will more than likely come to an end.
I’ll be talking to Meta’s head of advertising Nicola Mendelsohn at P&G Signal tomorrow. You can register here for free.
Apparently the open web has finally died. This, in the very same week Meta launches Threads, which, if its first day is any indication, seems to be thriving (10 million sign-ups in its first few hours, likely 50 million by the time this publishes…).
But before Threads’ apparent success, most writers covering tech had decided that the era of free, open-to-the-public, at scale services like Twitter, Reddit, and even Facebook/Insta is over. I’ll pick on this recent one from The Verge: So where are we all supposed to go now?
The piece argues that the decline of Twitter (Elon’s killing it), Reddit (it’s killing itself), and Instagram (it’s just entertainment now!) has left “an everybody-sized hole in the internet. For all these years, we all hung out together on the internet. And now that’s just gone.”
Umm…no. And not because of Threads (I’ll get to that in a minute). We never did “hang out together on the internet.” Anyone who knows Twitter knows it’s always been a cliquey echo chamber run by public narcissists. Reddit’s always been where a relatively small group of highly disaffected kids make fun of…everyone. And Instagram? Last I checked, it was still growing – even before Threads. Besides, no one ever “hung out” on Insta, I mean, it started as a photo service, remember? Complaining that it’s become an entertainment service is equivalent to moaning that TikTok is unusable because you’re getting old. Oh wait, Verge’s cousin Vox has already done that too.
Sure, you can “hang out” on some random subreddit, or get into endless flame wars with 12 other idiots on Twitter, or join an Instagram Live with a few hundred other voyeurs, but…that’s certainly not “everyone hanging out together on the Internet.” The very idea is ridiculous. We’re not built to “hang out with everyone,” and we never will be. Many of us, me included, are built to hang out with about six people at a time. And they change depending on context.
Trend pieces noting that the web has changed aren’t annoying because they’re wrong (of course the web is changing), they’re annoying because they miss the core problem: Centralization. We’ve been living in a centralized web world for more than a decade now, one where all the data, graphs (social, commercial, etc), and value are concentrated and managed by large corporations hell bent on protecting their most precious resource – your attention. To make sure you keep paying attention, corporations have made it very, very difficult to do the one thing all of us want to do from time to time: We want to leave.
The problem with the past ten or so years of Internet history is that we couldn’t leave when we wanted to – at least not without severe penalty. When I left Twitter last November, for example, I instantly lost a social graph I had built over 15 years, tens of thousands of my posts, an audience of nearly 300,000, not to mention my primary real-time news and information source. I couldn’t take any of that with me as I decamped to Twitter imitators like BlueSky or Mastodon. Neither of them had the rich networks of people that Twitter once had, and they were much the poorer for it.
But what they did have was compelling: A decentralized model that promised that, if I wanted to leave again, I could bring the value I helped create anywhere I wanted to. Both BlueSky and Mastodon are built on published protocols – essentially technology specs that other developers and entrepreneurs can leverage to build competing (or complementary) services. One of the most popular of these protocols is called ActivityPub – that’s what powers Mastodon. And in one of the smartest moves I’ve seen out of Meta in ages*, Instagram’s Threads will support ActivityPub.
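The portability ActivityPub promises rests on a small stack of open specs. As a minimal sketch of the first step (the handle and server name here are hypothetical, and a real client would do much more), this is how a fediverse handle gets resolved to the actor document that holds an account’s inbox, outbox, and follower collections:

```python
from urllib.parse import quote

def webfinger_url(handle: str) -> str:
    """Build the WebFinger lookup URL for a fediverse handle like '@user@mastodon.social'.

    WebFinger (RFC 7033) is how ActivityPub servers map a human-readable
    handle to the actor document describing that account.
    """
    user, _, domain = handle.lstrip("@").partition("@")
    resource = f"acct:{user}@{domain}"
    return f"https://{domain}/.well-known/webfinger?resource={quote(resource)}"

# A client would GET this URL, find the link entry with rel="self" and
# type="application/activity+json", then fetch that actor document to
# discover the account's social graph endpoints.
print(webfinger_url("@example@mastodon.social"))
```

Because every ActivityPub server answers this same lookup, a follower graph built on one service remains addressable from any other – which is exactly the escape hatch the centralized platforms never offered.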
Threads is built on top of Instagram’s social graph, which means if you’ve created value on that network, you’ll instantly have value on Threads. I have several thousand followers on Insta, an artifact of my early use of the place (I stopped posting regularly years ago). But when I joined Threads last night, I already had thousands of latent connections from Insta, and that network resurfaced almost immediately. People with super active Insta handles saw this effect in a much stronger way – in essence, Meta has created another way to create engagement across its network, so bully for them.
But if Meta keeps its promise to incorporate ActivityPub, that engagement and the social graphs driving it can be exported to any other service that supports the ActivityPub protocol. This means that if Threads turns into a Twitter-like hellscape in coming years, we can all take our attention, and our data, to a competing service like Mastodon. That kind of competitive threat undermines the web’s current business model of centralized, locked-in attention farming. You know, the very model upon which Facebook built an empire. Before yesterday, you couldn’t take your Instagram social graph and its related data to anywhere else on the web. But with Threads, you can. That’s progress.
For more than a decade I’ve been railing about how we’ll never get a truly open, highly innovative Internet until it becomes possible to build services that share data through standardized, easy to use protocols. I called these services “meta services” – services that thrive above the control of any one platform. In one stroke, Meta has capitalized that phrase (in every meaning of the term) and staked out the high ground – declaring itself willing to compete not on its ability to lock your data into a silo, but to provide you a superior service that keeps you engaged regardless of your ability to leave. This will prove extremely valuable for public dialog – a use case that has suffered massively thanks to the terrible incentives created by the attention economy. And for that, I tip my cap to Meta. Never thought that day would come, but here it is.
*Two other smart moves from Meta recently: Open sourcing its LLM, and naming Threads based on Twitter terminology.
This past Monday NewsGuard, a journalism rating platform that also analyzes and identifies AI-driven misinformation, announced it had identified hundreds of junk news sites powered by generative AI. The focus of NewsGuard’s release was how major brands were funding these spam sites through the indifference of programmatic advertising, but what I found interesting was how low that number was – 250 or so sites. I’d have guessed they’d find tens of thousands of these bottom feeders – but maybe I’m just too cynical about the state of news on the open web. I have a hunch my cynicism will be rewarded in due time, once the costs of AI decline and the inevitable economic incentives that have always driven hucksters kick in.
Given 250 is a manageable number for a mere mortal, I decided to ask the good folks at NewsGuard, where I’m an advisor, for a copy of their listings. Nothing like a tour through the post-apocalyptic hellscape of our AI future, right?
What I found was…disappointing. Most of the sites were beyond shoddy – barely literate, obviously automated, full of errors and content warnings, and utterly devoid of any sense of organizational structure. The most common message, upon clicking on a story link, was some variation of an OpenAI violation:
Not exactly a compelling headline. The next most common experience was this:
This of course is evidence that the scammers are rotating URLs to avoid blacklisting, unburdened of any concern about building audience loyalty. Beyond OpenAI warnings and 404s, there are the browser warnings that the site you’re about to visit is, well, seedy:
When you do get an actual news experience, it becomes clear that these publishers have little interest in passing as “real news sites” – i.e., publications a sane person might intend to visit. They are instead built as SEO chum in the hopes that Google’s indexes might favor them with some low quality traffic, or worse, as destinations for bot traffic destined for arbitrage inside the darker regions of the programmatic ad universe. The editorial decisions on the various home pages I visited were, well, hilariously inchoate:
Perhaps that’s what we should expect with the first phase of this particular genre, but I found their general awfulness depressing: Most reporters will look at these sites and dismiss them. But they shouldn’t.
Traditional “made for advertising” sites already control 21 percent of all programmatic advertising revenues, and these sites tend to dominate Google search results, enshittifying the open web with low-calorie crap that, one would hope, actually good AI might help us avoid. But the relatively low volume of AI sites indicates, at least anecdotally, that so far the economics of replacing human-built content with AI-driven drivel have yet to kick in. Put simply, it’s still too expensive to replace sites like Geeky Post or Explore Reference with AI. For now.
But when costs come down, I expect made for advertising sites will pivot to AI almost overnight. And I wonder if that’s a bad thing. Once the web’s worst sites all shift to AI-driven output, perhaps they’ll find themselves in a positive spiral of competition for actual human attention. If these sites start to create reasonably high quality content, and search and social start to reward them with real traffic that converts to revenue, perhaps we can simply automate away the shitshow that the open web has become.
I recently caught up with a pal who happens to be working at the center of the AI storm. This person is one of the very few folks in this industry whose point of view I explicitly trust: They’ve been working in the space for decades, and possess both a seasoned eye for product as well as the extraordinary gift of interpretation.
This gave me a chance to ask one of my biggest “stupid questions” about how we all might use chatbots. When I first grokked LLM-driven tools like ChatGPT, it struck me that one of its most valuable uses would be to focus its abilities on a bounded data set. For example, I’d love to ask a chatbot like Google Bard to ingest the entire corpus of Searchblog posts, then answer questions I might have about, say, the topics I’ve written about the most. (I’ve been writing here for 20 years, and I’ve forgotten more of it than I care to admit). This of course only scratches the surface of what I’d want from a tool like Bard when combined with a data set like the Searchblog archives, but it’s a start.
My friend explained that my wish is not possible now, despite what Bard confidently told me when I asked it directly:
Well, no. Bard hallucinated all manner of bullshit in its answer. Yes, I write about technology, but not the Internet of things. I guess I write about society, but mainly in the context of policy and consumer data, not “education, healthcare, and the environment.” Culture? When’s the last time you’ve seen me write about movies?! And if I ever start writing about “personal development,” please put one between my eyes.
Bard’s list of supposed articles was even funnier – it reads like an eighth-grade book report culled from poorly constructed LinkedIn clickbait. Bard is a confident simpleton, despite its claim to be able to query specific domains (in this case, battellemedia.com). I responded to Bard with this new prompt: “This is not right. That site does not cover music, movies. Nor does it do motivation, well being, productivity. Why did you answer that way?” Bard’s answer was … pretty much the same, though it did clumsily incorporate my corrections in its response:
Gah. My next prompt was an attempt to clarify where Bard was getting its answers, since it was clearly not using the battellemedia.com domain. “Are you actually referring to content on the site to do these answers?”
Ok, then, at least we’re getting some honesty. I decided to try one last time:
Now this was quite the freshly whipped bullshit: Actual percentages of how the content on my site breaks down! Unbeknownst to me, more than one in ten of my posts are about cybersecurity – a topic I’ve rarely if ever written about here.
Ok, enough beating up on poor Bard. My well-placed friend explained that while it’s currently out of scope for a standard chatbot like Bard or ChatGPT to do what I’m asking of it, “domain-specific” querying is a hot area of development for all LLMs. So when will it happen? My friend didn’t commit to an answer on that, but I did get the sense it’s coming soon. The ability to apply LLM-level intelligence to large data sets is just too big an opportunity – in both B2C as well as B2B/enterprise markets.
A big reason this is taking more time than I’d like is cost. Noted AI investor Andreessen Horowitz recently posted a long explainer on the state of LLM models, but it all comes down to this money quote: “Today, even linear scaling (the best theoretical outcome) would be cost-prohibitive for many applications. A single GPT-4 query over 10,000 pages would cost hundreds of dollars at current API rates.” By my estimates, this cost would need to come down at least four orders of magnitude – from hundreds of dollars per query to pennies – to unlock the kind of magic that I’ve been dreaming about over the past few months. Not to mention all the technological machinations related to prompt handling, vector database management, orchestration frameworks, and other stuff that makes my brain hurt. But the good news, despite my rather pessimistic post from earlier this week, is that the good shit’s coming – we just need to be a bit more patient.
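The vector-database machinery mentioned above usually boils down to retrieval augmentation: embed the archive once, retrieve only the chunks nearest a question, and send just those to the model – which is precisely what keeps per-query costs bounded. A toy sketch of the idea (the bag-of-words “embedding” here is a crude stand-in for a real embedding model, and all function names are my own):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words word-count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, archive: list[str], k: int = 2) -> list[str]:
    # Rank archive chunks by similarity to the question; keep only the top k.
    q = embed(question)
    ranked = sorted(archive, key=lambda post: cosine(q, embed(post)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, archive: list[str]) -> str:
    # Only the retrieved chunks reach the LLM, so cost scales with k,
    # not with the size of the full archive.
    context = "\n---\n".join(retrieve(question, archive))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

posts = [
    "Google's search business and the economics of advertising",
    "Why data portability matters for the open web",
    "A review of my favorite hiking trails",
]
print(build_prompt("What have I written about search and advertising?", posts))
```

Swap the toy `embed` for a real embedding model and store the vectors in a proper vector database, and you have the skeleton of the “ask Bard about my archive” tool I’ve been wishing for.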
Not since the iPhone, in the mid aughts. No, not since the rise of the browser and the original web, in the early nineties. No, not since the introduction of the PC, in the 1980s. Ah hell, honestly, not since the Gutenberg printing press in the 15th century – or, fuck it, let’s just go there: Not since the invention of language, which as far as we know marked the moment when homo sapiens first branched from its primate cousins.
That’s how big a deal AI is, according to academics, politicians, and a rapt technology and capital ecosystem starved for The Next Big Thing.
I tend to agree. First we created language, then we created its digital doppelganger with computer code, and with generative AI, we’re melding the two into a shimmering and molten fun house mirror, one that forces us to question our very consciousness. What the hell does it mean to be human when we’ve created machines that seem to transcend humanity?
“…the Digital Revolution is whipping through our lives like a Bengali typhoon…[bringing] social changes so profound their only parallel is probably the discovery of fire.”
Ah, fire. I forgot about fire, which likely preceded language by a good 50,000 years. Those lines introduced the very first issue of Wired magazine 30 years ago. As founders we were convinced every aspect of society would be reshaped – our culture, our economy, our social lives, our faiths, our sense of self. In those early days we were essentially a cult, a non-denominational sect stoned on a buoyant certainty that we were right – that technology offered all of us an offramp from the tired shit-show of the industrial revolution. Of course the Internet was going to rewire everything – it was obvious. If you didn’t see that coming, you just weren’t paying attention. Our job was to slap you into seeing what was right in front of our eyes: The future, coming fast, screaming into our face with possibility and promise.
And now, here we are. The starting gun has been fired once again – this time with the release of ChatGPT. After a decade of trillion-dollar platform consolidation based on surveillance capitalism and trickle-down innovation, tech once again brims with optimism, with that original possibility and promise.
If, that is, we don’t fuck it up by forcing our new tools into the structures of the past.
Yesterday Fred posted about voice input over on AVC, and it reminded me how long it takes for consumers to adopt truly new behaviors, regardless of how enthusiastic we might get about a particular technology’s potential. As Fred points out, voice input has been around for a decade or so, and yet just a fraction of us use it for much more than responding to texts or emails on our phones.
While tens of millions of us have begun to use generative AI in various ways, its “paradigm shifting” impacts are likely years away. That’s because while consumers would love to have AI genies flitting around negotiating complex tasks on our behalf, first an ecosystem of developers and entrepreneurs will have to do the painstaking work of clearing the considerable brush which clogs our current technology landscape – and it’s not even certain they’ll be able to.
Some historical context is worth considering. When the World Wide Web hit in 1993, I was convinced this new platform would change everything about, well, everything. Culture, business, government – all would be revolutionized. 1993 was the year Wired first published, and we took to the technology with abandon. We launched Hotwired, one of the first commercial websites, in 1994 – but quickly realized the limitations of the early Web. There was no way to collect payment, serve advertising, or even identify who was visiting the site. All of those things and more had to be invented from scratch, and it took several years before the entrepreneurial ecosystem ramped up to the challenge. Then, of course, the hype overwhelmed the technology’s ability to deliver, and it all came crashing down in 2001.
Fast forward to the launch of the iPhone in 2007, and once again, everyone was convinced the world was going to change dramatically. But Airbnb launched in late 2008, Uber in 2009, and both didn’t gain widespread traction until 2011 or 2012. It took another seven to nine years for these two stalwarts of the mobile revolution to go public. Along the way tens of thousands of smaller companies were building apps, exploring new opportunities, and generally laying the groundwork for the world as we know it today. But to win, they learned that they had to play by the increasingly rigid policies of the dominant platforms: Apple, Google, Amazon, and Facebook. The dream of “Web 2” – where the Internet would be an open platform allowing innovation to flourish – never truly materialized. The platforms became some of the largest corporations ever to roam the earth, and quite predictably, enshittification followed.
So while many of us are currently enraptured with the rise of generative AI, it’s worth remembering that despite the technology’s huge potential, this will all take time. And unlike 1993, when the Internet was literally a blue ocean opportunity, or 2007, when smartphones were as well, this time everyone’s in on the joke. Yes, billions upon billions of venture capital is now being deployed against what feel like unlimited opportunities in the space, but these new startups will have to battle deeply entrenched incumbents with almost no interest in seeing their moats breached.
Thirty years after the first issue of Wired, it’s still making for one hell of a story.