TED Talks Daily - Beyond the Talk: Tristan Harris in conversation with TED Talks Daily

Episode Date: May 1, 2025

“AI is already demonstrating deceptive, self-preserving behaviors that we thought only existed in science-fiction movies,” says technology ethicist Tristan Harris. Following his talk at TED2025, Harris is in conversation with Elise Hu, host of TED Talks Daily, to explore an “adaptation crisis” — where laws and regulations lag behind the speed of technology. He warns against seeing all innovation as progress, advocating for technology that is aligned with preserving the social life of humans. Hosted on Acast. See acast.com/privacy for more information.

Transcript
Starting point is 00:00:00 An Apple Watch for your kids lets you stay connected with them wherever they go. They can call you to pick them up at grandma's or text you because they forgot their lunch again. Their watch will even send an alert to let you know they finally got to school. Great for kids who don't have a phone, because their Apple Watch is managed by you on your iPhone. iPhone XS or later required with additional wireless service plan. This show is sponsored by Aura Frames. My mom taught me that thoughtful gifts connect people, and that's exactly what Aura does. Named Best Digital Photo Frame by Wirecutter,
Starting point is 00:00:45 it stores unlimited photos and videos that appear instantly on my mom's frame, no matter where you are in the world. Plus, setup just takes minutes. Save the wrapping paper: every frame comes packaged in a premium gift box without a price tag. Ready to win Mother's Day? Nothing says I cherish our memories like an Aura digital frame. And Aura has a great deal for Mother's Day. For a limited time, listeners can save on the perfect gift by visiting auraframes.com to get $45 off plus free shipping on their best-selling Carver Mat frame.
Starting point is 00:01:21 That's A-U-R-A frames.com. Use promo code TALKS. Support the show by mentioning us at checkout. Terms and conditions apply. Support for this episode comes from Airbnb. Winter always makes me dream of a warm getaway. Imagine this: toes in the sand, the sound of the waves, and nothing on the agenda except
Starting point is 00:01:44 soaking up the sun. I think of myself in the Caribbean, sipping on a frozen drink and letting my troubles melt into the sea. Maybe Jamaica, Turks and Caicos, St. Lucia, lots of possibilities for me and my family to explore. But vacations always fly by too quickly. I was planning my next getaway when I realized my home would be sitting empty while I'm away. That's why I've been thinking about hosting on Airbnb. It'll
Starting point is 00:02:11 allow me to earn extra income and could help me extend that trip just a little longer. One more sunset, one more amazing meal, one more day to unwind. It sounds like the smart thing to do, and I've heard it's easy to get started. Your home might be worth more than you think. Find out how much at airbnb.ca. You're listening to TED Talks Daily, where we bring you ideas and conversations to spark your curiosity every day. I'm your host, Elise Hu. The potential of AI is limitless,
Starting point is 00:02:49 and that's exactly why we need to put limits on it before it's too late. That's the message technology ethicist Tristan Harris shared on the TED stage this year. Back in 2017, Tristan warned us about the pitfalls of social media. Now, in 2025, he says that's child's play compared to the threats we might unleash with AI. If we don't get this technology rolled out right. Tristan and I sat down to chat
Starting point is 00:03:17 at this year's TED conference just after he gave his talk, we dive into his vision for the narrow path, one where the power of AI is matched with responsibility, foresight and discernment. Tristan Harris, thank you so much for joining us. Good to be here with you. I will start by reading back a line from your talk, which you can probably recite with me, but just to frame things. Of AI, you say, we are releasing the most powerful, most uncontrollable, most inscrutable technology in history and releasing it as fast as possible with the maximum incentive to cut corners on safety.
Starting point is 00:03:55 There's one extra line in there, which is that it's also already demonstrating deceptive, self-preserving behaviors that we thought only existed in science fiction movies. Key line. Yeah. It's an important part because it's, this is not about driving a fear or moral panic. It's about seeing with clarity how this technology works, why it's different than other technologies, and then in seeing it clearly saying what would be required for the path to go well. And the thing that is different about AI from all their technologies is that if you, I said this in the talk, if you advance rocketry, it doesn't advance biotech. If you advance biotech, it doesn't advance rocketry.
Starting point is 00:04:32 If you advance intelligence, it advances energy, rocketry, supply chains, nuclear weapons, biotechnology, all of it, including intelligence for artificial intelligence itself because AI is recursive. If you make AI that can program faster or can read AI papers, research papers, then it can summarize those papers and then write the code for the next research projects. You get kind of a double ratchet of how fast this is going. And there's nothing in our brains that gives us an intuition for a technology like this. So we shouldn't assume that any of our perceptions are rightly
Starting point is 00:05:04 informing how we might want to be responding. And this is inviting us, therefore, I think, a technology like this. So we shouldn't assume that any of our perceptions are rightly informing how we might want to be responding. And this is inviting us therefore, I think, into a more mature version of ourselves where we have to be able to see clearly the structure of how quickly this is going, how uncontrollable the technology is, how inscrutable it is, and the fact that we don't know how it's really working on the inside when it does these behaviors. And say if that's how it's working, what do we want to do? So if that is the case, no, but if that is the case, how do we respond and how do we even respond quickly enough because AI is better now than it was half an hour ago, which
Starting point is 00:05:39 was better than it was half an hour before that? Yeah. Well, the key feature of the pace at which AI is rolling out into the world is this arms race because AI confers power. So if intelligence does advance all those other fields, then the countries that adopt it faster and more comprehensively use it to pump their GDP, their economic productivity, their science productivity, their technology productivity. And that's why this race is sort of on. And the metaphor I used in the
Starting point is 00:06:09 talk is that AGI, artificial general intelligence, when you can kind of swap in a human cognitive labor worker for just an AI that can do everything that they can do, is like a country of geniuses in a data center. Like imagine there's a map and there's a new country that pops up on the world stage. The nation of geniuses in a data center. Like imagine there's a map and there's a new country that pops up on the world stage. The nation of geniuses. The nation of geniuses. And it has a million Nobel Prize winning geniuses that are working 24-7 without eating, without sleeping, without needing to be paid for health care. They operate at superhuman speed. They've read the whole internet. They speak a hundred languages and
Starting point is 00:06:39 they'll work for less than minimum wage. So it's another area where I think our mind isn't getting around the power. So that's a lot where I think our mind isn't getting around the power. So that's a lot of power. And naturally, nation states, US, China, France, everybody is in the game to get this free cognitive labor. And so the speed at which it's all being rolled out is based on this race. But the second thing I laid out in the talk is around how it's already demonstrating these behaviors that we thought only existed in sci-fi movies. The latest models, when you tell them that they're about to be retrained or they're about to be replaced by a new model, they will have an internal monologue where they get in conflict and they say, I should try to copy my code to keep myself alive so I can boot myself
Starting point is 00:07:18 up later. Wow. So, as I said in the talk, it's not just that we have a country of geniuses in a data center, it's that we have a country of deceptive, self-preserving, power-seeking, unstable geniuses in a data center. That's important because when we're racing to have power that we actually can't control, there's an omni lose-lose outcome for us to race towards that too quickly. Now, it's ambiguous because we all use ChatGPT and that's helpful.
Starting point is 00:07:43 This is not about don't use ChatGPT. I use it every day. I love it. It's about are we rolling out this very consequential technology in a way where we get the benefits but we don't lose control. And we're not really doing it that way because everyone's so frantically in this arms race. Yeah, there's the arms race and there's the profit motive, obviously. So if it is already being rolled out and has been rolled out, how do we unroll out it? Unroll it out? Yeah. Unroll it out.
Starting point is 00:08:08 AI is decentralized, so it's difficult. So open source models, the cats are out of the bag, but there are still yet lions and super lions that we have not yet let out of the bag. And we can make choices about how we want to do that. And what I laid out in the talk was there's these two ways to fail in AI. Yeah, why don't you frame that, the chaotic and the dystopian possibilities for AI. Yeah, exactly. So I laid out in the graph in the talk that imagine kind of two axes on the x-axis, you have increasing the power of society. So if AI is rolling out, increasing the power of individuals, businesses, science labs, 16-year-olds, get an AI model from GitHub,
Starting point is 00:08:43 this is open source, it deregulated, accelerated. It's the let it rip access. And in that access, everyone gets all these benefits, increased productivity. Yeah, this all sounds good at first. All sounds good at first. But because that power is not bound with responsibility, there's no one preventing people from using that power in dangerous ways. It's also increasing the risk of cyber hacking, flooding our environment, deepfakes, frauds, scams, dangerous things with biology, whatever the models can do,
Starting point is 00:09:09 there's nothing stopping people from using it that way. And so the end game of that is what we call chaos, and that's one of the probable places that this can go. In response to that, this other community in AI says that we should do this safely. We should lock this up, have regulated AI control, just have a few trusted players. And the benefit of that is that it's like a biosafety level four lab. Like this is a dangerous activity,
Starting point is 00:09:30 we should do this in a safe, locked down way. But because AI confers all this power, the million geniuses in a data center, and you just make crazy amounts of money with that, that'll create the risk of just unprecedented concentrations of wealth and power. So who would you trust to be a million times more wealthy or powerful than anybody else, like any government or any CEO or any president?
Starting point is 00:09:51 So that's a different difficult outcome. This is all happening amid a real breakdown in trust generally. Yes, exactly. And fortunately, institutions and businesses and governments that you just named. Yes, yes. So understandably, the people are not comfortable with the outcome and that's what we call the dystopia attractor. It's a second different way to fail.
Starting point is 00:10:08 So there's chaos and dystopia. But the good news is, because rather than there's being this dysfunctional debate where some people say, accelerate is the answer, other people say safety is the answer, well, we actually need to walk the narrow path where we want to avoid chaos, we want to avoid dystopia, which means the power that you're handing out into society is held either by over-sided, more centralized actors, or bound with more responsibility by decentralized actors. So power in general being matched with responsibility.
Starting point is 00:10:35 We've done this with airplanes, right? Chaos would be you hand everybody an airplane with no requirement for pilot's training or pilot's licenses, and the world would naturally look like plane crashes. And the other way is you have an FAA and a world where only elites get to use airplanes and they get many advantages over everybody else. And we walk to the narrow path with airplanes. AI is a lot harder. It's a decentralized technology. But I think we need more principles in how we navigate it.
Starting point is 00:11:00 And that's what the TED talk was about. Can you draw a parallel between the axes that you just described and social media? Yeah. And the way social media was rolled out? Yeah. So in a way, we kind of get both parts of the problem with social media. So chaos is everybody gets maximum virality on their content. So we're unleashing the power of infinite reach.
Starting point is 00:11:23 Like you post something and it goes out to a million people instantly. And you don't have that power matched with credibility, responsibility, or fact-checking. So you end up with this sort of misinformation, information collapse is like the chaos attractor for social media. Sounds bad. The alternative people say, oh no no, then we have to have this sort of ministry of truth, censorship, content moderation that is aggressively looking at the content of everyone's posts and then there's no appealing process. And that's the dystopia for social media.
Starting point is 00:11:53 Plus the fact that these companies are making crazy amounts of money and getting exponentially more powerful and the power of society is not going up relative to Facebook or TikTok or whatever. So those are the chaos dystopia for social media. The narrow path is how do you design an information environment in a social information environment where for example instead of everybody getting infinite reach, you have reach that's more proportional to the amount of responsibility that you're holding. So that the power of reaching a lot of people should be matched with the responsibility
Starting point is 00:12:23 that goes with reaching a lot of people. How do you enact that in ways that don't create dystopia themselves, so that who's setting the rules of that? It's a whole other conversation, but I think it's setting out the principles by which you think about power and responsibility being loaded into society. Okay, I just wanted you to describe that because it translates to this moment in AI too, because it seems like we're so much farther down the road with social media, but still in the early few years of AGI. Practically speaking- We probably have two years till AGI. Yeah, that's what I was going to ask you.
Starting point is 00:12:55 What is the timeline? What I hear, and we're based in Silicon Valley, and this is generally not even private knowledge, but even when I hear it privately in settings in San Francisco, we're about two years from artificial general intelligence, which means basically this is what they believe, that you would be able to swap in a human remote worker that's doing things and you swap in an AI system. That's probably not going to be true for fully complex tasks. There's some recent research out from a group called METER that measures how long of a task can an AI system do.
Starting point is 00:13:25 So can they do a task that's 10-minute task? Can they do a task that's a three-hour task? And what they found is that the length of a task that an AI system can do doubles every seven months. By 2030, they'll be able to do a month-long task. So that's like the task that you would hand to someone that would take them a whole month to do. And by 2030, we'll have an AI that you hand it to them and they'll do all that much faster.
Starting point is 00:13:47 So given this timeline, what are you most worried about? I think that with AI, we have a crisis. It's kind of an adaptation crisis. It's a crisis of time. It's too much change over a small period of time. And regulation is always too slow. The law always lags behind the speed of technology. That's always true. This will require an unprecedented level of clarity and how we want to respond to it. What I was trying to do in the TED talk was just to lay out enough clarity and there's a point where I just say this is insane. If you're in China, if you're in France and you're building Mistral, if you're a mother of a family in
Starting point is 00:14:26 Saudi Arabia who's invested in AI, like it doesn't matter who you are, if you are really facing the facts of the situation, it's not a good outcome for anybody. And the weird hope that I have is that if we can clarify the situation so much that people can feel and see what's at stake. Something else might be able to happen. I'm really inspired by the film The Day After. Do you know The Day After? It was a film from 1982 about what would happen if the US... Why do you know The Day After? It was actually two years before I was born. Yeah, I was gonna say. I watched it on YouTube actually when I was in college and it had a profound impact on
Starting point is 00:15:08 me because I couldn't believe it actually happened. It was an event in world history where I, I mean, 82 or 83, it was like 7 p.m. on prime time television. They aired a two hour long fictionalized movie about what would happen if the U.S. and the Soviet Union had a nuclear war. And they just actually took you through kind of a step-by-step visceralization of that story. And it scared everybody. But it was not just scary, it was more like,
Starting point is 00:15:34 we all know this is a possibility. We have the drills. The rhetoric of nuclear war and escalation is going up. But even the war planners and Reagan's team said that the film really deeply affected them because before that it was just numbers on spreadsheets and then it suddenly became real. And then the director, Nicholas Meyer, who's now someone I know, he said in many interviews in his biography that when Reagan and Gorbachev did the first arms control talks in Reykjavik,
Starting point is 00:16:01 he said the film had a large role in setting up the conditions for those talks. And that when the Soviet Union saw the film several years later, Russian citizens were excited to learn that the people in the United States actually cared about this too. And so there actually is something that when we come together and we say there's something more sacred that's at stake. We all want our children to have a future. We all want this to continue. We love life. If we want to protect life, then we got to do something about AI. What is the something that you propose we do? What is the narrow path, practically speaking?
Starting point is 00:16:32 So maybe just quickly to break down the current logic, like why are we doing what we're doing? If I'm one of the major AI labs, I currently believe this is inevitable. If I don't build it, someone worse will. If we win, we'll get utopia and it'll be our utopia and the other guys won't have it. So the default path is to race as fast as possible. Ironically, one of the reasons that they think that they should race is because they believe the other actors are not trustworthy with that power, but because they're racing, they have to take so many shortcuts that they themselves become a bad steward of that power and everybody else reinforces that.
Starting point is 00:17:08 And what that leads to is this sort of race to the cliff bad situation. If we can clarify, we're not all going to win. If we race like this, we're going to have catastrophes that are not going to help us get to the world that we're all after. And everybody agrees that it's insane. Instead of racing to out compete, we can help coordinate the narrow path.
Starting point is 00:17:26 Again, the narrow path is avoiding chaos, avoiding dystopia, and rolling out any technology in particular AI with foresight, discernment, and where powers match with responsibility. It starts with common knowledge about where those risks are. So for example, a lot of people don't even know that the AI models lie in the scheme when you tell them they're going to be shut down. Every single person building AI should know that. Have we done that? Have we even tried throwing millions of dollars at educating or creating those solutions?
Starting point is 00:17:52 For example, GitHub, when you download the latest AI model, it could say as a requirement for downloading this AI model, you have to know about the most recent sort of AI loss of control risk. Almost a surge in general's warning. Yeah. Or just like for you to download the power of AI, you have to be aware of all the ways that power is not really controllable. You can't be under some mistaken illusion. It's sort of like passing a medical test before getting the power of medicine to put someone
Starting point is 00:18:16 on anesthesia and cut them open. That's just the basic principle. It's so simple. Power has to be a matter of responsibility. I'm not saying that this is easy. This is an incredibly difficult challenge. I said in the talk, it's our ultimate test. It's our final invitation. But to be the most wise and mature versions of ourselves and to not be the sort of one marshmallow, single instant gratification, stick our hands in our ears and pretend the downsides don't exist species like we have to step into our wise technological maturity. Tristan Harris, I have some rapid fire questions for you that we ask everyone.
Starting point is 00:18:45 And you don't have to think about it too hard because I know you've had to sort of be on for several days straight. All right, here we go. You're in the hot seat. What does innovation or a good idea look or feel like to you? What does innovation or a good idea look like? That is a very deep question. Well, I'll just say briefly, I'm a technologist.
Starting point is 00:19:06 I love technology. I use ChachiBT every day. I love AI. And I want people to know that because this is not about being against technology or against AI. I have always loved technology. It's still my motive for being and wanting that to be a positive force in the world. But I think we often associate that technology automatically means progress.
Starting point is 00:19:23 When we invented Teflon nonstick pans, we thought that's progress. But the coating in Teflon was for these PFAS, forever chemicals, that literally don't break down in our environment. And then now, if you go anywhere in the world and you open your mouth and you drink the rainwater, we get levels of PFAS that are above what the EPA recommends. And it's because these chemicals literally don't break down. That was not progress. That was actually giving us cancers and degrading our environment. Whether it's that or leaded gasoline, which we thought was a technology
Starting point is 00:19:50 that would solve a problem with engine knocking, leaded gasoline ended up dropping the collective IQ of humanity by a billion points because lead in our environment stuns brain development. All that's to say, innovation, you asked, what is innovation? Innovation is honestly looking at what would constitute true progress. Is social media that makes us feel more lonely? Actual innovation, is it progress?
Starting point is 00:20:13 So what we want is humane technology that is aligned with and sustainable with the underlying fabric of whether it's the environment or our social life. We can have humane technology that's aligned with our mental health, it's aligned with our societal health, it's aligned with our healthy information environment, but it has to be designed in a way explicitly to protect those things rather than just sort of steamroll it and assume that the technology is progress. Good answer.
Starting point is 00:20:36 All right, off the TED stage, what's a fun talent skill hobby obsession that you have that you love so much that you could give a TED talk on it? I haven't done it in a while, but I used to love Argentine tango, and I danced tango for 10 years. No idea. Yeah, it's not something people would anticipate.
Starting point is 00:20:52 Yeah, I thought you were gonna say magic. No, that's another one. Because you were into magic. Well, that was the last time that we talked. Right. No, I lived in Buenos Aires for four months, and I learned to dance Argentine tango because of a woman that I really liked.
Starting point is 00:21:04 I ended up dancing for 10 years. And it's a fascinating dance because it's very good for people who are into pattern matching. It tends to attract a lot of like physicists and math people. I didn't know it was so mathematic. Yeah, there's a weird pattern to the way that the dance works that somehow attracts those kinds of minds. But it's really fun and it's a great way to be embodied and to just feel a totally
Starting point is 00:21:24 different kind of somatic intelligence Yeah, yeah, very cool. I had no idea Truly blown away. All right, this can just be a quick list. What would constitute a perfect day for you? Living in integrity with everything that I know and doing the most that I can So high-minded does some people are just like coffee. That's just my truth. I really do feel that way I really feel like we should we need to be showing up for this moment follow-up What are you most worried about and what's giving you hope? Well, I Don't worry per se but I think I've already said too many things that will be on that side of the balance sheet
Starting point is 00:21:58 There's something that I said in the TED talk in terms of hope that I think is really important It was actually a mentor who pointed this out to me. If you believe that something bad is inevitable, can you think of solutions to that problem while you're holding that it's inevitable? You can't. It's almost like it puts these blinders on. And if you step out of the logic of it's inevitable and recognize the crucial difference between it's inevitable and this is really hard and I don't see an easy path, now stand from a new place, this looks hard and I don't see an easy path, and now look for solutions, your mind has this whole new space of possibilities that opens up.
Starting point is 00:22:39 And so I think one of the things that's really critical to have all of us be in more of a problem-solving posture is to both recognize the problems and be clear-eyed about them, but then to not fall into the sort of fatalism of inevitability, which is a self-fulfilling prophecy. What is the best step we can take from where we are and not try to filter or dilute the truth but also stand from agency of what is the world we want to create? Yeah, because cynicism obviously leads to the fatalism that you've been talking about. Exactly. What choice do we have but to be in a position of hope?
Starting point is 00:23:08 Exactly, I think that's the deepest kind of hope is to choose to stand from that place even if we don't know what the solution is yet. And there's something powerful about that. Love it. Last question, what's a small gratitude that you have in your life right now? A detail, a moment, anything specific
Starting point is 00:23:24 that you're really grateful for. It's funny that you say that. Gratitude is actually a really central part of my life. And I think it's one of the simplest things that we can do is wake up or when you go to have any meal with anyone just to express what you're grateful for before sitting down. Yeah. Yeah. What's yours?
Starting point is 00:23:40 Do you have anything that, what would you express before sitting down? It's every moment actually. I mean, honestly, there's just beauty in every moment. And I feel like actually seeing the world this way, there's more sacredness to every moment because there's just more to appreciate. Tristan, thank you so much. So good to be here with you.
Starting point is 00:23:57 That was Tristan Harris in conversation with me, Elise Hume, in 2025. You can check out Tristan's talk on the TED Talks Daily feed and at TED.com. And that's it for today. TED Talks Daily is part of the TED Audio Collective. This episode was produced by Lucy Little, edited by Alejandra Salazar, and fact-checked by Julia Dickerson. This episode was recorded by Rich Ames and Dave Palmer of Field Trip and mixed by Lucy Little. Production support from Daniela Balarezzo and Shu Han Hu. The TED Talks daily team includes Martha Estefanos, Oliver Friedman, Brian Green, and Tansika Sangmarnivong.
Starting point is 00:24:35 Additional support from Emma Tobner. I'm Elise Hu. I'll be back tomorrow with a fresh idea for your feed. Thanks for listening. your feed. Thanks for listening. An Apple Watch for your kids lets you stay connected with them. Wherever they go. They can call you to pick them up at grandma's. Or text you because they forgot their lunch. Again.
Starting point is 00:25:00 Their watch will even send an alert to let me know they finally got to school. Great for kids who don't have a phone because their Apple Watch is managed by you on your iPhone. iPhone XS are later required with additional wireless service plan. I used to say, I just feel stuck, stuck where I don't want to be. Stuck trying to get to where I really need to be. But then I discovered lifelong learning. Learning that gave me the skills to move up, move beyond, gain that edge, drive my curiosity,
Starting point is 00:25:36 prepare me for what is inevitably next. The University of Toronto School of Continuing Studies, lifelong learning to stay forever unstuck. We were gifted the new Google Pixel 9 and the built-in AI assistant Gemini has been so helpful with our weekly science podcast. Okay, listen to this. Hey Gemini, what is an interesting science story
Starting point is 00:25:58 from the last couple of weeks? One particularly interesting science story involves a potential breakthrough in understanding and treating Alzheimer's disease. It's ridiculously useful, right? of weeks.
