TED Talks Daily - How AI could hack democracy | Lawrence Lessig

Episode Date: November 2, 2024

Does AI pose a threat to democracy? Law professor Lawrence Lessig dissects how this emerging technology could influence democratic institutions, warning that we've already passed a point (before superintelligence or AGI) that deserves a lot more attention.

Transcript
Starting point is 00:00:00 TED Audio Collective. A lot of attention is paid to when artificial intelligence will overwhelm human strengths, like when it's superintelligent or when humans are no longer in control of it. But in his 2024 talk, attorney and activist Lawrence Lessig warns us that we've already crossed a point that needs a lot more attention, the point at which AI overwhelmed human weaknesses.
Starting point is 00:00:44 He explains that problem and what we should do about it after the break. Support for this show comes from Airbnb. If you know me, you know I love staying in Airbnbs when I travel. They make my family feel most at home when we're away from home. As we settled down at our Airbnb during a recent vacation to Palm Springs, I pictured my own home sitting empty. Wouldn't it be smart and better put to use welcoming a family like mine by hosting it on Airbnb? It feels like the practical thing to do, and with the extra income, I could save up for renovations to make the space even more inviting for ourselves and for future guests. Your home might be worth more than you
Starting point is 00:01:25 think. Find out how much at airbnb.ca slash host. And now our TED Talk of the day. So on January 6th, 2021, my nation suffered a little bit of a democracy heart attack. Thousands of Americans had been told that the election had been stolen, and tens of thousands of them showed up because they believed the election had been stolen. And indeed, in polling immediately after January 6th, the Washington Post found 70% of Republicans believed the election had been stolen, and a majority of college-educated Republicans believed
Starting point is 00:02:15 the election had been stolen. That was their perception. And I don't know what the right way to act on that perception is. They thought the right way was to defend what they thought was a democracy stolen. Now, these numbers were astonishing. 30% or two-thirds of Republicans believing the election was stolen. But even more extraordinary are these numbers. The fact that in the three years since that astonishing event, the numbers have not changed.
Starting point is 00:02:55 The same number believe today that the election was stolen as believed it was stolen three years ago despite the fact that we've had investigations and overwhelming evidence that there was no fraud sufficient to ever change even a single state. This is something new. When Richard Nixon went through the Watergate scandal, as the news was being reported, Nixon's popularity collapsed, not just among Democrats, but among independents and Republicans. But we're at a stage where it doesn't matter what happens. This is Donald Trump's popularity over the course of his administration. Nothing changes. The facts don't matter. Now, I think this truth should bother us a lot.
Starting point is 00:03:44 I think we need to develop a kind of paranoia about what produces this reality, a particular paranoia, the paranoia of the hunted. Think of the kids in The Birds first realizing that those crows were attacking them, or black mirrors, metal heads, when you see these creatures chasing and surrounding you. The point is we need to recognize that there is an intelligence out there to get us, because our perceptions, our collective perceptions,
Starting point is 00:04:24 our collective misimpressions are not accidental. They are expected. They are intended. They are the product of the thing. Okay, I want to be careful introducing the thing. I'm going to talk a little bit about AI, but I'm not going to slag on AI. I think AI is the most extraordinary technology humanity has ever even conceived of, but I also think it has the potential to end humanity. But I'm not going to slag on AI because I'm pretty sure that our robot overlord is going to be listening to these TED Talks someday, and I don't want to be on the wrong side of the overlord. So AI is just fine.
Starting point is 00:05:07 I'm not going to talk about this AI first. I want to instead put AI in a little bit of a perspective. Because I think that we're too obsessed with the new, and we fail to recognize the significance of AI in the old. We think about intelligence, and we're distinguishing between artificial and natural intelligence, and we, of course, as humans, claim pride of kingdom in the world of natural intelligence.
Starting point is 00:05:36 And then we build artificial intelligence. It's intelligence that we make. But here's the critical point. We have already, for a long time, lived with systems of artificial intelligence. I don't mean digital AI. I mean analog AI. Any entity or institution that we build with a purpose that acts instrumentally in the world is, this sense an AI. It is an instrumentally rational entity that's mapping how it should behave given the way the world evolves and responds to it. So think about democracy as an AI. It has institutions, elections, parliaments, constitutions for the purpose of some collective ends. Our constitution says it's for the common good,
Starting point is 00:06:28 so the democracy in our constitution is an analog artificial intelligence devoted to the common good. We're thinking about corporations as an AI. They have institutions, boards, management, finance, for the purpose of making money, or at least conceived of narrowly today, that's the way it is. Their corporation is an analog intelligence
Starting point is 00:06:50 devoted to maximizing shareholder value. These are AIs. They have purposes and objectives, sometimes complementing each other. So the purpose of a school bus company complements the purpose of a school board to produce school bus transportation in a district. That's just beautiful. But sometimes they're competing. The purpose of a government in having a clean environment conflicts with the purpose of a coal company designing to produce electricity by spewing carbon and soot into the environment.
Starting point is 00:07:25 And when they conflict, we tell ourselves this happy story. We tell ourselves the story that democracy is going to stand up and discipline that evil corporation to get the corporation to do the right thing, to do the thing that's in the interest of all of us. That's our happy story. It's also a fantasy. Because at least in my country, corporations are more effective AIs than democracy.
Starting point is 00:07:55 Think about it a little bit like this. If we think about instrumental rationality along one axis of this graph and time across the other, humans, of course, are the first instrumentally rational entity we care about. We're better than cows, maybe not as good as ants, but the point is we're pretty good as individuals figuring out how to do things strategically. And then we build democracy to do that a little bit better, to act collectively for all of us. And that's a more instrumentally rational entity than we individual humans can be.
Starting point is 00:08:28 Then we created corporations. And it turns out they have become, at least in corrupted political regimes, which I'll just submit my political regime is, better than democracy in bringing about their objective ends. Now, of course, in this system, each of these layers has an aspiration to control the higher layer. So humans try to control democracy through elections.
Starting point is 00:08:55 Democracy tries to control corporations through regulation. But the reality of control is, of course, a little bit different. In the United States, corporations control democracy. Through the extraordinary amount of money they pour into elections, making our representatives dependent not on us, but on them. And in democracy, then controls the humans by making representation not actually representation, corrupting representation. Now this structure, this layer of higher order intelligence or instrumental rationality might evoke, for those of you who think about AI, a statement by the godfather of AI, Geoffrey
Starting point is 00:09:37 Hinton. Hinton warns us there are few examples of a more intelligent thing being controlled by a less intelligent thing or we can say a more instrumentally rational thing being controlled by a less instrumentally rational thing and that is consistent with this picture of AIs and then we add digital AI into this mix. And here too, once again, we have corporations attempting to control their digital AI. But the reality of that control is not quite perfect. Facebook in September of 2017 was revealed to have a term in their ad system called Jew haters. You could buy ads targeting Jew haters. Now, nobody in Facebook created that category. There
Starting point is 00:10:38 was not a human in Facebook who decided we're going to start targeting Jew haters. It's AI created that category because it's AI figured Jew haters would be a profitable category for them to begin to sell ads to. And the company was, of course, embarrassed that it turned out they didn't actually have control over the machine that ran their machines that run our lives. The real difference in this story, though, is the extraordinary potential of this instrumentally rational entity versus us. This massively better instrumentally rational entity versus even corporations and certainly democracies, because it's going to be more
Starting point is 00:11:20 efficient at achieving its objective than we are. And here's where we cue the paranoia I began to seed because our collective perceptions, our collective misconceptions are not accidental. They are expected, intended, the product of this AI. We could think of it as the AI perception machine. We are its targets. And now back to the episode. Now, the first contact we had with this AI, as Tristan Harris described it, came from social media. Tristan Harris, who started the Center for Humane Technology, co-founded it, famous in this extraordinary documentary,
Starting point is 00:12:12 The Social Dilemma. Before he was famous, he was just an engineer at Google. And at Google, he was focused on the science of attention, using AI to engineer attention to overcome resistance to increase human engagement with the platform because engagement is the business model. Compare this to, think of it as brain hacking. We could compare it to what we could call body hacking. This is the exploiting of food science.
Starting point is 00:12:42 Scientists engineer food to exploit our evolution, our mix of salt, fat, and sugar, to overcome the natural resistance so you can't stop eating food, so that they can sell food or sell quote food more profitably. Brain hacking is the same but focused on attention. It's exploiting evolution, the fact that we have an irrational response to random rewards, or can't stop consuming bottomless pits of content with the aim to increase engagement to sell more ads. And it just so happens, too bad for us, That we engage more, the more extreme, the more polarizing, the more hate-filled this content is. So that is what we're fed by these AIs. With the consequence that we produce a people more polarized and ignorant and angry than at any time in democracy's history in America
Starting point is 00:13:46 since the Civil War, and democracy is thereby weakened. They give us what we want. What we want makes us into this. Okay, but recognize something really critically important. This is not because AI is so strong. It's because we is so strong. It's because we are so weak. Here's Tristan Harris describing this.
Starting point is 00:14:11 We're all looking out for the moment when technology would overwhelm human strengths and intelligence. When is it gonna cross the singularity, replace our jobs, be smarter than humans? But there's this much earlier moment when technology exceeds and overwhelms human weaknesses. This point being crossed is at the root of addiction, polarization, radicalization, outrageification, vanityification, the entire thing. This is overpowering human nature, and this is checkmate on humanity. So Tristan's point is we're always focused on this corner, when AGI comes, when it's super intelligent, when it's more intelligent than any of us. And that's what we now fear, whether we
Starting point is 00:15:01 will get there in three years or 20 years, what will happen then. But his point is it's actually this place that we must begin to worry. Because at this place, it can overcome our weaknesses. The social dilemma was about the individual weaknesses we have, not to be able to turn away from our phones or to convince our children to turn away from their phones. But I want you to see that there's also a collective human weakness, that this technology drives us to disable our capacity to act collectively in ways that any of us would want. So we are surrounded individually by these metal heads, and we are also surrounded as a people by these metalheads long before AGI is anywhere on the horizon.
Starting point is 00:15:53 It overwhelms us. AI gets us to do what it seeks, which is engagement. And we get democracy hacked in return. Now, if the first contact that we had gave us that, if social media circa 2020 gave us that, what's the second contact with AI going to produce? When AI is capable not just in figuring out how to target you with the content it knows will elicit the most reaction and engagement from you, but can create content that it knows will react or get you to engage more directly, whether true or not, what does that contact do? I so hate the writers of Game of Thrones because their last season, they so completely ruined the whole series that we can't use memes from
Starting point is 00:17:00 Game of Thrones anymore. But if we could, I would say, winter is coming, friends. Well, I'm just going to say it anyway. Winter is coming, friends. And these AIs are the source that we have to worry about. So then what is to be done? Well, you know, if there's a flood, what you do is you turn around and run. You move. You move to higher ground or protected ground. You find a way to insulate democracy or to shelter democracy from AI's force or from AI's harmful force. And, you know, the law does this in America with juries. We have juries. They
Starting point is 00:17:47 deliberate. But they are protected in the types of information that they're allowed to hear or talk about or deliberate upon. Because we know we need to protect them if they're to reach a judgment that is just. And democracy reformers, especially across Europe, are trying to do this right now. Reformers building citizen assemblies across Europe, mainly in Japan as well. And citizen assemblies are these random, representative, informed and deliberated bodies that take on particular democratic questions
Starting point is 00:18:21 and address them in a way that could be protected from this corruption of the AI. So Iceland was able to craft a constitution out of a process like this. Ireland was able to approve gay marriage and deregulation of abortion through a process like this. France has addressed climate change and also end-of-life decisions. And across Germany, there are many of these entities that are boiling up to find ways for a different democratic voice to find voice. But here's the point. These are extraordinarily hopeful and exciting, no doubt, but they are not just a good idea. They are existential for democracy. They are security for democracy.
Starting point is 00:19:11 They are a way to protect us from this AI hacking that steers against a public will. This is change not just to make democracy better, a tweak, to just make it a little bit more democratic. It's a change to let democracy survive given what we know technology will become. This is a terrifying moment. It's an exhilarating moment.
Starting point is 00:19:43 Long before superintelligence, long before AGI threatens us, a different AI threatens us. But there's something to do while we still can do something. We should know enough now to know we can't trust democracy just now. We should see that we still have time to build something different. We should act with the love that makes anything possible, not because we know we will succeed.
Starting point is 00:20:18 I'm pretty sure we won't. But because this is what love means, you do whatever you can, whatever the odds, for your children, for your family, for your country, for humanity, while there is still time, while our robot overlord is still just a sci-fi fantasy. Thank you very much. Support for this show comes from Airbnb. If you know me, you know I love staying in Airbnbs when
Starting point is 00:20:59 I travel. They make my family feel most at home when we're away from home. As we settled down at our Airbnb during a recent vacation to Palm Springs, I pictured my own home sitting empty. Wouldn't it be smart and better put to use welcoming a family like mine by hosting it on Airbnb? It feels like the practical thing to do, and with the extra income, I could save up for renovations to make the space even more inviting for ourselves and for future guests. Your home might be worth more than you think. Find out how much at Airbnb.ca slash host.
Starting point is 00:21:36 That was Lawrence Lessig speaking at TEDxBerlin in 2024. If you're curious about TED's curation, find out more at TED.com slash curation guidelines. And that's it for today. TED Talks Daily is part of the TED Audio Collective. This episode was produced and edited by our team, Martha Estefanos, Oliver Friedman, Brian Green, Autumn Thompson, and Alejandra Salazar. It was mixed by Christopher Fazi Bogan. Additional support from Emma Taubner and Daniela Balarezo. I'm Elise Hu. I'll be back tomorrow with a fresh idea for your feet. Thanks for listening. Looking for a fun challenge to share with your friends and family?
Starting point is 00:22:15 TED now has games designed to keep your mind sharp while having fun. Visit TED.com slash games to explore the joy and wonder of TED Games.
