For the past several years, I’ve led a graduate-level class studying the early history of Internet policy in the United States. It runs just seven weeks – the truth is, there’s not that much actual legislation to review. We spend a lot of the course focused on Internet business models, which, as I hope this post will illuminate, are not well understood even amongst Ivy League grads. But this past week, one topic leapt from my syllabus onto the front pages of every major news outlet: Section 230. At just 26 words, this once-obscure but now-trending Internet policy grants technology platforms like Facebook, Google, Airbnb, Amazon, and countless others the authority to moderate content without incurring the liability of a traditional publisher.
Thanks to the events of January 6th, Section 230 has broken into the mainstream of political dialog. Slowly – and then all of a sudden – the world has woken up to the connection between the disinformation flooding online platforms and what appears to be the rapid decay of our society.
Difficult and scary narratives need a villain, and the world’s found one in Section 230, pretty much the only law on the books that can reasonably be connected to this hot mess. No matter if you’re liberal or conservative, it’s pretty easy to logic your way into blaming 230 for whatever bothers you about the events of the past ten days.
For folks on the left, the narrative goes like this: The insurrectionists were radicalized by online platforms like YouTube and Facebook. These platforms have failed to moderate disinformation-driven conspiracy theories like QAnon, or the blatant lies told by politicians like Trump. (When they finally did – two days after the coup attempt – it was far too little, far too late!). The reason Big Tech can get away with such blatant neglect is Section 230. Clearly, 230 is the problem, so we should repeal it! Unfortunately, our President-elect has endorsed this view.
The conservative view ignores any connection between political violence and 230, focusing instead on seductive but utterly wrong-headed interpretations of First Amendment law: Big Tech platforms are all run by libtards who want to crush conservative viewpoints. They’ve been censoring the speech of all true Patriots, kicking us off their platforms and deleting our posts. They’ve been granted this impunity thanks to Section 230. This is censorship, plain and simple, a violation of our First Amendment rights. We have to repeal 230! Naturally, our outgoing President has adopted this view.
The debate is frustratingly familiar and hopelessly wrong. The problem isn’t whether platforms should moderate what people say. The problem is whether platforms amplify what is said. And to understand that problem, we have to understand the platforms’ animating life force: Their business models.
It’s The F*cking Business Model!
Three years ago I wrote a piece arguing that Facebook could not be fixed because to do so would require abandoning its core business model. So what does that model do? It’s really not that complicated: It drives revenue for nearly every modern corporation on the planet.
Let that settle in. The platforms’ core business model isn’t engagement, enragement, confirmation bias, or trafficking in human attention. Those are outputs of their business model. Again, the model is simple: Drive sales for advertisers. And advertisers are companies – the very places where you, I, and nearly everyone else works. They might be large – Walmart, for example – or they might be small – I got an ad for weighted blankets from “Baloo Living” on Facebook just now (HOW DID THEY KNOW?!).
When advertising is the core business model of a platform, that platform’s job is to drive sales for advertisers. For Facebook, Google, Amazon, and even Apple, that means providing existential revenue to tens of millions of companies large and small. This means that “Big Tech” is fundamentally entangled with our system of modern capitalism.
Killing Section 230 does nothing to address that fact.
Let’s get back to the distinction I drew above – between moderating content (the focus of 230) and amplifying that content, a practice Section 230 never anticipated. To understand amplification, you need to understand a practice that nearly all advertising-driven platforms have adopted in the past ten years: Content feeds driven by algorithms. The Wall St. Journal has woken up to this practice, pointing out in a recent technology column that “Social-Media Algorithms Rule How We See the World. Good Luck Trying to Stop Them.” The piece does a fine job of pointing out what anyone paying attention for the past decade already knows: Our information diet is driven by algorithms we don’t understand, serving not the health of the public dialog, but rather the business model of social media companies and their advertising customers. The conclusion: We’ve lost all agency when it comes to what we consume.
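To make the amplification point concrete, here’s a toy sketch of what an engagement-first feed ranker looks like. Every signal, weight, and field name here is made up for illustration – no platform discloses its actual ranking function – but the shape of the logic is the point: accuracy never enters the equation.

```python
# Toy sketch of an engagement-first feed ranker.
# All signals and weights below are hypothetical illustrations.

def engagement_score(post):
    """Score a post purely on predicted engagement signals."""
    return (post["clicks"] * 1.0
            + post["shares"] * 3.0      # shares spread content furthest
            + post["comments"] * 2.0)   # outrage reliably drives comments

def rank_feed(posts):
    """Order a feed by raw engagement; note accuracy is never an input."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    {"title": "Flaming Zamboni video", "clicks": 900, "shares": 400, "comments": 250},
    {"title": "Sober Economist analysis", "clicks": 300, "shares": 20, "comments": 15},
])
print([p["title"] for p in feed])
# The viral video outranks the journalism every time.
```

Optimize for those signals and the viral, the outrageous, and the false will win the feed by construction – which is the whole amplification problem in miniature.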
About That Agency
But before feeds became our dominant consumption model, we happily outsourced our agency to journalistic media brands – and to the editors and journalists who worked for those media brands. Some of us still curate our news this way – but our ranks are thinning. Back before these platforms became our dominant media channel (all of ten years ago!), anyone who wanted to read the news had to exert a critical, if often fleeting, form of agency. We decided which media outlets we would regularly pay attention to. We chose to read The New York Times or the Post (or both), The Wall St. Journal or The Economist. Media brands stood as proxies for a vastly more complicated and utterly overwhelming corpus of information we might potentially consume. The job of the journalists at those media outlets was to curate that information into a coherent diet that conformed to whatever that media outlet’s brand promised: “All the News That’s Fit to Print” if you’re the Times, aloof neoliberal analysis if you’re The Economist.
But that’s not how the vast majority of Americans get their news these days. If anything, Facebook has given tens of millions of people who otherwise might not seek out the news an illusion of news literacy thanks to whatever happens to show up in their feed. For those who do want to choose a news diet, we might mimic the agency of the pre-feed days by following this or that news brand on Facebook, YouTube, or Twitter. But in the feed-driven environment of those platforms, articles from The Economist, The Times, or The Journal must compete, post for post, with the viral videos of flaming Zambonis and titillating “proofs” of elaborate child pornography rings shared by your friends. Given the platforms’ job is to drive revenue for their advertisers, which group do you think gets more amplification? You already know the answer, of course. Hell, it turns out Facebook has known the answer for years, and has consciously chosen to show us low-quality information over accurate journalism. How do we know? It has a “News Ecosystem Quality” index – a SOMA-like tuning fork for its algorithms that dials up quality information whenever things might turn a bit too ugly. Let THAT sink in.
Given all of this, it’s seductive to conclude that the best way to limit bad information on platforms is to ask the platforms to moderate it away, threatening them with repeal of 230 to get there. But that’s a terrible idea, for so many reasons I won’t burden this essay with a recitation (but please, read Mike Masnick if you want to get smart fast).
A far better idea would be to coax that critical layer of agency – the human choice of trusted media brands – back to the fore of our information diet in one way or another. And if we don’t like our choices of media brands, we should start new ones, smarter ones, more responsive ones that understand how to moderate, curate, and edit information in a way that both serves the public good and understands the information ecosystem in which it operates. (Yes, yes, that’s a self-serving reference.)
As a society we’ve at least come to admire our seemingly intractable problem: We’re not happy with who’s controlling the information we consume. The question then becomes, how can we shift control back to the edge – to the consumer of the information, and away from algorithms designed to engage, outrage, and divide?
I’m of the mind this can be done without sweeping Federal legislation – but legislation might actually be helpful here, if it contemplates the economic incentives driving all of the actors in this narrative, including the businesses that currently pay Facebook and its peers for providing them revenue.
In short, I think it’s time to hack the economic incentives which drive the platforms. Section 230 is a dodge – we’re obsessing over a 26-word law that offers nearly every contestant in the dialog a convenient way to duck a far larger truth: No one wants to threaten the profits of our largest corporations. And given I’ve gone on for a while, I’m going to stop now, and get into how we might think differently in the next installment. Thanks for reading, and see you soon.
This post is one of a series of “thinking out loud” on our current media ecosystem. Here are a few others: