Larry Lessig – Why Can’t We Regulate the Internet Like We Regulate Real Space?

I’ve known Larry Lessig for more than 25 years, and throughout that time, I’ve looked to him for wisdom – and a bit of pique – when it comes to understanding the complex interplay of law, technology, and the future of the Internet. Lessig is currently the Roy L. Furman Professor of Law and Leadership at Harvard Law School. He also taught at Stanford Law School, where he founded the Center for Internet and Society, and at the University of Chicago. He is the author of more than half a dozen books, most of which have deeply impacted my own thinking and writing.

As part of an ongoing speaker series, “The Internet We Deserve,” a collaboration with Northeastern’s Burnes Center for Social Change, I had a chance to sit down with Lessig for a wide-ranging discussion covering his views on the impact of money in politics and on government’s role as a regulator of last resort. Lessig is particularly concerned about today’s AI-driven information environment, which he says has polluted public discourse and threatens our ability to conduct democratic processes like elections. Below is a transcript of our conversation, which, caveat emptor, is an edited version of AI-assisted output. The video can be found here, and is embedded at the bottom of this article.

John Battelle   Hi, everybody, thank you for coming in today. Today we’re welcoming Professor Larry Lessig. Larry is one of my heroes, a key actor in a story I’ve cared very deeply about – the evolution of the Internet. I want to thank the Burnes Center, which is the underwriter and sponsor of this event, as well as the Khoury College of Computer Sciences.

Just this week I saw a full-page ad in The New York Times that said something to the effect of, we need to build the “Internet we deserve.” And given the name of this speaker series, I said to myself – wow, thanks, Burnes Center! I really appreciate the $40,000 full-page ad in The New York Times! But it turns out that the advertisement, which was essentially a short essay on the glory of blockchain as the next version of how we’ll organize ourselves on the Internet, was sponsored by VC firm Andreessen Horowitz. And it was a promotion for a book that one of the partners has written. I guess that’s how you promote a book about the future of the Internet – with an ad in the print edition of the Times.

With that, let’s get started, Professor Lessig. When you look back at the Internet over the past 30 to 35 years, are there any key moments, either heralded or unheralded, where something changed dramatically?

Larry Lessig – Well, there’s a key moment for me. And a key moment for the Internet. Actually, [Burnes Center Director] Beth [Noveck] was in the room at Yale Law School, where we had a conversation about the nature of the Fourth Amendment. And I don’t remember exactly who framed or formulated it, but there was a recognition that the basic architecture of the Internet, the code of the Internet, was as responsible for the liberties or restrictions of that space as the law, or the Constitution, or the norms of the society.

When was that? 

That was 1996? Then my book Code came out in 1999. The idea of Code is to think systematically about the way in which technology, or architecture, is a tool of affordance and constraint – on opportunities or liberties. I think in the history of the Internet, though, the most significant moment was when the Internet figured out advertising as central to its economy. You know, I’m sure there were hundreds of articles in the Industry Standard (a prominent Internet newsmagazine in the late 1990s) talking about the future of micropayments, saying there’d be a future where people would get whatever they want by paying tiny little bits – a fraction of a cent. And that’s how we would run the economy of the Internet. But the problem with micropayments is that the transaction costs were wildly greater than the micropayment itself. That became a non-starter very quickly…until we figured out a micropayment that would have zero transaction costs, which was advertising. Advertising became the business model everybody focused on.

Early on, newspapers and journalists thought that advertising would reward the very best content – the very best content would have the very best ads. The very best would rise to the top. Many partnered with Google to facilitate ads. What we quickly learned was that the Internet really doesn’t care about quality. Engagement is the only measure of success: can you put content up that engages people, keeps them connected, brings them back again, and again, and again? And what’s engaging is a function of our psychology. And unfortunately for us, the stuff that’s most engaging turns out not to be the best stuff for us. Kind of like food. The processed food industry is filled with lots of people who are not actually trying to make Americans sick. That’s not their purpose in life. It just so happens that the very best and most addictive processed food turns out to be the processed food that’s most harmful for us.

There’s a great book – Salt, Sugar, Fat – which tells the story of executives in the processed food industry who have this come-to-Jesus moment where they realize that they’re poisoning generations, and they decide they want to turn Kraft into a healthy food company. So they begin to produce healthy processed food. And of course, the market says, yeah, no, we’re not interested. So their market share crashes. And those executives are given golden parachutes and thrown off the ship. The thing that we didn’t recognize is that this core business model of the Internet has produced an influence on society that is poisonous and polarizing, and for some people pathological, but it turned out to be the most efficient way to fund the technology of the Internet. And, as with processed food, I don’t think that there’s an industry-based solution to it. This is a profound problem we face.

I will acknowledge, we may have started the whole advertising issue with a meeting at Wired magazine in 1993. It started with the question of how we would pay the 20 or 30 journalists we had hired for our new online site called HotWired. So let me ask you, what might be the market intervention to solve this problem?

I had the honor of representing Frances Haugen when she became the Facebook whistleblower. In October, two years ago, she released a whole tranche of documents. She was an internal safety engineer inside of Facebook. And she collected massive amounts of evidence documenting the fact that Facebook engineers were working as hard as they possibly could to do the right thing: make Facebook a safe platform, avoid misinformation being spread, avoid the rabbit-hole cycles of behavior that were devastating to many people, including particular demographics like young women. But time and time again, these recommendations would rise to the top of the organizational chart and they would be vetoed. That should drive us to recognize that to solve this, there has to be a force external to the competitive market, which is government, which is regulation. And of course, we sort of give up, because we all see that we have a government that’s completely incapable of regulation. The business model of the government is to bring tech executives into hearings and ridicule them. At some point, [tech executives have] got to figure out how to turn this around and say “Look, we’re not the ridiculous ones here. This is your job. And you still haven’t done anything about this problem.”

My first contact with AI was social media in the lead-up to 2016, and a bit of 2020. This next generation of AI – which will be present throughout this election – we have done nothing to figure out how we’re supposed to do something about it. That terrifies me. There has to be some government intervention.

So what kind of interventions could there be? 

The Europeans would solve the problem with massive European regulation like GDPR, or the DSA and DMA. And, you know, my reaction to Europe is always: I’m so envious of the fact that you can actually do something, and I’m also outraged at the stupidity of what you actually do. The problem is the business model, and you’ve got to figure out a way to change the business model. So for example, imagine you had something called an engagement tax. An engagement tax would have a schedule where the tax is zero until engagement goes past, say, two hours. So if a customer is on the platform for an hour, there’s no tax; if a customer is on the platform for two hours, there’s a $1 tax, or whatever, and then the tax goes up in a quadratic way. So it gets higher, you know, steeper and steeper. So at some point, Facebook’s saying, hey John, go away! That structure would overnight change the incentives of the platforms – they would not be interested in locking you in for perpetual amounts of time. They would have to figure out another way to make money.
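To make the shape of that schedule concrete, here is a minimal sketch in Python of the kind of quadratic engagement tax Lessig describes. The two-hour threshold echoes his example, but the function name, the rate, and the dollar figures are invented for illustration and are not part of any actual proposal.

```python
# Hypothetical sketch of a quadratic "engagement tax": zero below a threshold
# of daily engagement, then a tax that grows quadratically with each extra
# hour. All parameters here are invented for illustration only.

def engagement_tax(hours: float, free_hours: float = 2.0, rate: float = 1.0) -> float:
    """Per-user tax owed for a given number of engagement hours."""
    if hours <= free_hours:
        return 0.0
    excess = hours - free_hours
    return rate * excess ** 2  # quadratic: each additional hour costs more than the last

if __name__ == "__main__":
    for h in (1, 2, 3, 4, 6, 8):
        print(f"{h} hours -> ${engagement_tax(h):.2f}")
```

The point of the quadratic term is simply that the marginal cost of holding a user grows with every additional hour, which is what would flip a platform’s incentive from maximizing time-on-site to capping it.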

It reminds me of China – maybe not the best example, but one of the things they’ve done is impose an hourly limit in some instances.

This upsets some people, but the ability to say “there’s no gaming after 11 o’clock, and the total amount of gaming that you do during the week is going to be an hour or two, and maybe a couple hours more on the weekend” – all of those are controls that we would intuitively think of as the kinds of parental controls you would want to impose. Of course, we can’t in this country; nobody even thinks about it in this country. But China has done it.

Well, it’s not necessarily a model that anyone in the United States is going to want to do – it feels anti-American.

In real space, we have those regulations all the time. Right? In real space, we say “here are the opening hours for certain kinds of businesses,” and these businesses after this time are not allowed to admit kids. The zoning of real-space life reflects values we acknowledge.

What is it about the Internet that makes it seem we shed all the mores we commonly hold in real space? Why is that?

Part of it is that people don’t notice things in real space as “regulation.” Of course there is a regulation (in real space) that says that if you’re under 21, you can’t go into a bar. But it doesn’t feel like a regulation. We have embedded regulations; we’ve made them invisible in real space. But when you go onto the Internet, and you start talking about how to craft the rules that will construct the space, then it seems like now we’re talking about “regulation,” and the precedent that gives us the policy structure for the Internet – Democrats and Republicans alike – is that this is the golden goose: if you regulate it, you will destroy it. So that, of course, produces Amazon or Google. One of the most important regulatory decisions that the Clinton Administration adopted was to relax the rules against discrimination in pricing.

How so? 

The basic rule in communications and commerce in the world before the Internet – and this came after many years of antitrust insight – was that major providers had to provide their products at prices that were not discriminatory. So my ability to pay for something should be the same as yours. So we’d get the same price. Price discrimination was what built the railroad trust, sugar trust, and all of that, and we learned that we had to eliminate it. The Clinton administration, the SEC expressly, relaxed that rule online, and that drove the business model of price discrimination in access for all sorts of things, including advertising. That’s what drove the whole business model of creating auctions for advertising, because every advertiser, every ad you buy on the Internet is differently priced based on a real time auction. Economists backed by Google money said this would be much more efficient in the sense of extracting more value from the consumer. And that’s true. 

Because my attention might be less or more valuable than your attention on the Internet – and it’s okay to discriminate based on that?

That was the argument. That certainly built a very large industry. Advertising was obviously driven by that discrimination business model. But then think of platforms like Amazon. Amazon gets to discriminate on both sides. Amazon’s in the real economy: you’ve got suppliers who have some great product, and they come to Amazon and say “We want to make the product available on Amazon.” Then Amazon will price discriminate. So basically, Amazon gets to extract all of the surplus from the supplier. And they will also price discriminate with the consumer. So they also get to extract all of the surplus from the consumer. In basic economics, there’s always a demand-supply curve, and there are triangles representing consumer surplus and producer surplus. The perfectly competitive market leaves a certain amount of surplus to the consumers and a certain amount of surplus to the producers. The Amazon structure allows the company to suck all of that out. And so it’s no surprise that the product of this decision, this regulatory decision, was the concentration of a few major companies like Google or Amazon, who have figured out a way to leverage this regulatory change – a change that, when it was made, nobody had any clue would lead to this.
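As a rough, illustrative sketch of that surplus argument, the Python below compares a single uniform price with perfect price discrimination; the willingness-to-pay numbers and the cost are made up, and the code is only meant to show how discrimination lets a seller capture the surplus that a uniform price would leave with consumers.

```python
# Toy illustration of the surplus point: with one uniform price, buyers who
# value the good above that price keep some surplus; with perfect price
# discrimination, each buyer pays exactly their willingness to pay and the
# seller captures everything. All numbers are invented for illustration.

willingness_to_pay = [10, 9, 8, 7, 6, 5]  # six buyers, valuations in dollars
cost = 3                                   # seller's cost per unit

def profit_at(price):
    """Profit if everyone who values the good at or above `price` buys at that price."""
    buyers = [w for w in willingness_to_pay if w >= price]
    return (price - cost) * len(buyers)

# Uniform pricing: the seller picks the single profit-maximizing price.
uniform_price = max(sorted(set(willingness_to_pay)), key=profit_at)
buyers = [w for w in willingness_to_pay if w >= uniform_price]
consumer_surplus = sum(w - uniform_price for w in buyers)
producer_surplus = (uniform_price - cost) * len(buyers)

# Perfect discrimination: every buyer who values the good above cost pays
# exactly what it is worth to them, so consumer surplus goes to zero.
discriminating_surplus = sum(w - cost for w in willingness_to_pay if w >= cost)

print(f"Uniform price ${uniform_price}: consumer surplus ${consumer_surplus}, "
      f"producer surplus ${producer_surplus}")
print(f"Perfect discrimination: consumer surplus $0, producer surplus ${discriminating_surplus}")
```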

You have a very strong point of view about where we are today, in particular as it relates to the role of AI and its impact on democracy and governance. What scares you about AI?

Well, I actually think we should naturalize AI. When we think about AI, we think about digital AI. But long before there was digital AI, I think there was analog AI. By analog I mean institutions that get established with some objective, which they are rationally instrumented to advance. So democracy is an AI in the sense that we build an institution whose objective is to promote the public good. We look at the world, we figure out how to promote the public good, and we do things to achieve that objective. That’s the objective function. Corporations are an AI. Their objective is to maximize profit or maximize shareholder value. And they adjust their behavior to achieve that objective. I think the first thing to recognize is that we have this naive view that when the objectives of democracy conflict with the objectives of corporations, democracy will win, because after all, we have guns, we can lock people up, or shoot them if they misbehave. But the reality is, corporations defeat the objective of democracy all the time. Corporations fund the elections, indirectly, through Super PACs.

One of the most depressing moments in the history of social media was when the question of regulating TikTok came up. AOC does her first TikTok and she declares that she’s not in favor of regulating TikTok because we still don’t have privacy regulations. Until we have privacy regulations for American companies, she said, it’s not fair to regulate foreign companies. And it struck me like, gosh, I can’t possibly think that’s the issue with TikTok. And then I quickly verified that a huge amount of money had flowed into our democratic infrastructure from technology companies who were trying to neutralize the move to regulate technology companies. When corporations conflict with democracy, in fact, most of the time under the American system, corporations win.

Digital AI is just that story, orders of magnitude more efficient. But digital AI also has an objective function – “How do I maximize your engagement on my platform?” AI doesn’t necessarily understand the limits of what it should try to do to get you to spend more time on the platform. AI researchers discovered that their AIs are learning the value of misrepresentation – they’re not programming them to misrepresent; the AIs are discovering that misrepresentation is a more effective way to get people to (engage). So they’re just learning to lie, they’re learning to do whatever they can to produce the result that they’ve been programmed to produce. Maximizing engagement produces lots of bad results; that’s the unintended consequence of AI. What happens when you start imagining the unintended consequences of AI when the Russians or the Chinese or whoever deploy AI during elections for the purpose of producing whatever particular result they want to produce? And the AI becomes increasingly sophisticated as it goes, learning about what’s working and what’s not working to get people angry, or to get them so they’re not going to engage, or whatever result it wants to produce.

Tristan Harris has this wonderful point. He says we spent all of this time focused on when AI will cross the point of human strength – when an AI is more intelligent than humans. But actually, the most important point is when AI crosses the point of human weakness. When it overcomes our weakness, then it controls us. Like, we doom-scroll just because we’re not strong enough to discipline ourselves to stay away. If AI gets over the weakness line, that’s the only line it’s got to get over. It’s the same thing with democracy: all it’s got to do is get over democratic weakness. And then its capacity to cause whatever harm, intentional or unintentional, is unconstrained.

Is there anything about AI that you think is potentially helpful or useful?

Well, look, I think so. This point has been made about technology before: AI is both the best technology we could possibly imagine and the worst technology – it’s both at the same time. And it will do unbelievably great things – it can solve energy problems, cancer, all these sorts of things. I don’t want to give any of that up. But what I’m pointing out is that the bad things it’s going to do, we could in theory regulate against if we had a government; but given we don’t have a government, the anxiety is that these bad things can overwhelm the good. I think that all comes back to the same point, same as it’s ever been, same as it ever was. The whole point is we just don’t have the capacity to govern, and now we face the consequences of not having a capacity to govern. You talk to AGI researchers, and they say “there’s a 20 to 40% chance that we lose control of AGI and it extinguishes humanity.” Twenty percent! I mean, it’s probably not gonna happen. We don’t even have a government able to say, okay, look, let’s say that that’s wrong by an order of magnitude – it’s not 20%, it’s 2%. It’s a 2% chance that it will end humanity. That’s still a sufficient reason to regulate aggressively to account for a catastrophic risk.

And the reason is what you mentioned before – the interplay between capitalism and elected representatives of government.

It’s not that the corporations can flip a switch and get the government to do whatever they want. That’s pretty hard. But what they can do is stop the government from doing almost anything. And, you know, what’s the best evidence? I mean, we’ve had 40 good years of solid data about the danger of climate change. But the Inflation Reduction Act basically buys our way out of a tiny fraction of that problem. So if we can’t manifest effective government responses to a well-documented catastrophic consequence, why does anybody think we’re going to be able to manifest a government response to a complex problem that most people don’t even understand? You say the AIs are going to take over – that sounds like science fiction; you have no way to understand that. It’s an extremely hard problem to motivate people around.

If you had a magic wand, which you wave and it enforces one elegant new regulation, what would it be? 

To deal with these issues of AI? I can’t do it. I don’t think there is such a thing. I think we’re screwed if we can’t imagine democracy running differently from how it works now.

So tell me a little bit about that. When we talked on the phone, you mentioned this idea of citizen assemblies.

We’ve been running democracy in an unprotected space. We have elections, and people are persuaded to participate – or not – based on the input that they get. But that input is (now) the product of these AIs. It’s in an unprotected, polluted, corrupted space. There are a lot of countries around the world experimenting with something like citizens assemblies, which are, in some sense, a deeply American idea. I call them citizen juries. This is something that lots of people have been doing in different contexts – budgeting, for example. You create a random representative sample of a population. Bring them together. Give them a package of information that both sides agree is a fair statement of whatever the issue is, and give them a chance to deliberate, which is a critical part of the story. Make humans be human with each other: sit around a table with people that you don’t agree with, talk to each other about the issue. And then they come up with a resolution of that particular issue through a structure that is protected from the manipulation (of AI-driven environments). I think the only way to imagine democracy surviving this next flood is to imagine important parts of it moving into a more protected space. It’s happening in Europe in a really interesting way. But I’m not sure how it can happen quickly enough.
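For readers curious what the “random representative sample” step might look like mechanically, here is a minimal Python sketch of stratified random selection for a citizen panel; the attribute, quotas, and population are invented for illustration and are not drawn from any actual assembly design.

```python
# Minimal sketch of the "random representative sample" step in a citizen
# assembly: draw panelists at random, stratified so the panel roughly mirrors
# the population on some attribute. Attribute, quotas, and population are
# all invented for illustration.
import random
from collections import defaultdict

def draw_panel(population, stratum_of, panel_size, seed=0):
    """Pick roughly panel_size people so each stratum's share matches its population share."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for person in population:
        by_stratum[stratum_of(person)].append(person)

    panel = []
    for members in by_stratum.values():
        quota = round(panel_size * len(members) / len(population))
        panel.extend(rng.sample(members, min(quota, len(members))))
    return panel

# Toy population: 300 people spread evenly across three age brackets.
population = [{"id": i, "age_bracket": bracket}
              for i, bracket in enumerate(["18-34", "35-54", "55+"] * 100)]
panel = draw_panel(population, stratum_of=lambda p: p["age_bracket"], panel_size=30)
print(len(panel), "panelists drawn")
```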

All right. We have some questions from the audience.

(in room question) – Thank you so much. With AI and platforms, you talk about the business model, but what about leaders who are using it for their own power? Which is somewhat different than for business? 

Larry Lessig – Yeah, I think the business model doesn’t explain that completely. Although, you know, the business model of being Trump was not profitable in 1974. And it is profitable today, in part because of the structure of engagement on the Internet. Right? So I don’t think it’s completely unrelated. But I agree with you, it’s not completely related. It’s not completely the business model. And I’m not sure what you can do about that. I imagine if there is a future that has a department of history that cares to think about this particular period of human history, it will be extremely puzzled by the amount of attention we pay to people who are objectively ridiculous. My view is just: how much time does America spend focused on what Elon Musk thinks about all sorts of issues? I mean, what he thinks about space travel, or how to design electric cars – except for the Cybertruck – that might be worthwhile. But the idea that we care what he thinks about policy in the Middle East or policies on campuses is just kind of astonishing.

Beth Noveck, director of the Burnes Center, I think I saw your hand up.

Beth Noveck – Thank you. This is wonderful. I wanted to ask you about one particular thing. So you wrote a wonderful book called America, Compromised, about institutional corruption and money in American politics. I recommend it to everybody. I wanted to ask you about what we’re doing in universities, because in Europe, at least, you have both a positive speech-rights tradition that asks “What is the informational diet for democracy?” and a different media environment, a different academic environment. So all these things work together in terms of institutions. And I want to ask you to shine a light on us as an institution – that is to say, academic institutions. I think we bear a lot of blame for the situation we’ve gotten ourselves into. What should we be doing differently in terms of how we’ve approached the Internet and social media, and now in this AI moment? What should we be doing differently in our research and in our thinking, in order to get ourselves out of this vicious circle? I have an optimistic and hopeful answer, but I’m here to ask the question!

Larry Lessig – The reality is, universities have become dependent on money in a way that corrupts the underlying purpose of the institution of the university. I came to Stanford because the dean said, “We want you to teach cyber law.” And I said “Okay, you’ll have to set up a center, the equivalent of the Berkman Center,” and she said fine. And she raised the money for it. And I came, and I taught, and we did all sorts of things unrelated to the interest of the funders. But that changed very quickly. I mean, I left to come to Harvard to study institutional corruption, because I could already see the beginnings of this, but very quickly funding of universities by corporations became transactional. Cell phone companies, for example, are the lead funders of research about RF radiation. If you’re an academic studying RF radiation, it’s pretty clear, early on in your career, what kind of answers you (are allowed to) get. Now, you know, you might decide to ask different questions so that those answers don’t come up. We don’t have enough leadership, especially by great institutions like Harvard or Stanford or Northeastern, to make it clear that we are not going to tolerate that kind of compromise. You look at Harvard right now, where a billionaire like (William) Ackman thinks that he’s entitled to come in and set Harvard policy. Why? Because he’s rich. That’s it, because he can threaten to withdraw his money; to which I think Harvard’s response should be “Here’s your money back, we don’t have anything to do with you, if that’s the way you’re gonna behave, you’re not going to be part of our university.” Of all the places in the world that could afford to do that, Harvard can. And I think Harvard – and the next 10 universities – should be really aggressive and clear. It would create enough of a buffer so others could follow relatively easily.

I think about the Andreessens (Marc Andreessen, co-founder of VC firm a16z) of the world. They’re pro-AI and anti-regulation. They have this view of just unleashing (it all), get the government out of the way, and it’ll all be beautiful – which is the most ignorant understanding of the history of humanity that you could possibly assert. The only time in American history where growth happened and everybody rose with the rising tide – the only time that ever really happened in a dramatic way – is basically 1950 to 1985. And what defines 1950 to 1985? Heavy government regulation to make sure that there’s a balance between private market interests and the public good. There were strong unions, limits to what corporations could do. So this is the only period where you see a relationship of growth and rising equality. And it’s because of the countervailing forces that the government creates. And yet you have people like Andreessen imagining history to be a story where whenever government appears, it destroys growth, and whenever government disappears, then we have growth and everybody is better off. I mean, that’s just completely false.

John Battelle – Well, we’re at time. And I want to thank you, Larry, for being here. As I said earlier, you can sign up for updates about everything happening on the Burnes website. Thank you so much, Larry. 

You can follow whatever I’m doing next by signing up for my site newsletter here. Thanks for reading.
