The Daily - The Godfather of A.I. Has Some Regrets

Episode Date: May 30, 2023

As the world begins to experiment with the power of artificial intelligence, a debate has begun about how to contain its risks. One of the sharpest and most urgent warnings has come from a man who helped invent the technology. Cade Metz, a technology correspondent for The New York Times, speaks to Geoffrey Hinton, whom many consider to be the godfather of A.I.

Guest: Cade Metz, a technology correspondent for The New York Times.

Background reading: For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm. Here's how A.I. could be weaponized to spread disinformation.

For more information on today's episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday.

Transcript
Starting point is 00:00:01 From The New York Times, I'm Sabrina Tavernise, and this is The Daily. As the world begins to experiment with the power of artificial intelligence, a debate has begun about how to contain its risks. One of the sharpest and most urgent warnings has come from the man who helped invent the technology. Today, my colleague Cade Metz speaks to Geoffrey Hinton, whom many consider to be the godfather of AI. It's Tuesday, May 30th. Cade, welcome to the show. Glad to be here.
Starting point is 00:01:02 So a few weeks ago, you interviewed Geoffrey Hinton, a man whom many people know as the godfather of AI. And aside from the obvious fact that AI is really taking over all conversations at all times, why talk to Jeff now? I've known Jeff a long time. I wrote a book about the 50-year rise of the ideas that are now driving chatbots like ChatGPT and Google Bard. And you could argue that he is the most important person
Starting point is 00:01:27 to the rise of AI over the past 50 years. And amidst all this that's happening with these chatbots, he sent me an email and said, I'm leaving Google and I want to talk to you. And that he wants to discuss where this technology is going, including some serious concerns. Who better to talk to than the godfather of AI? Exactly.
Starting point is 00:01:51 So naturally, I got on a plane and I went to Toronto. Jeff. Come on, come on. Great to see you. Nice to see you, Steve. To sit down at his dinner table and discuss. Would you like a cup of coffee, a cup of tea, a beer, some whiskey? If you've made some coffee, I'll have some coffee.
Starting point is 00:02:12 Jeff is a 75-year-old Cambridge-educated British man who now lives in Toronto. He's been there since the late 80s. He's a professor at the university. My question is, somewhere along the way, people started calling you the godfather of AI. And I'm not sure it was meant as a compliment. And do AI researchers, you know, come to your door and kneel before you and kiss your hand? Like, how does it work?
Starting point is 00:02:42 No, no, they don't. They don't. No, and I never get to ask them for favors. So how does Jeff become the godfather of AI? Where does his story start? It starts in high school. He grew up the son of an academic, but he always tells the story about a friend describing a theory of how the brain works.
Starting point is 00:03:00 And he wrote about holograms, and he got interested in the idea that memory in the brain might be like a hologram. This friend talked about the way the brain stores memories, and that he felt it stored these memories like a hologram. A hologram isn't stored in a single spot. It's divided into tiny pieces and then spread across a piece of film. And this friend felt that the brain stored memories in the same way,
Starting point is 00:03:31 that it broke these memories into pieces and stored them across the network of neurons in the brain. It's quite beautiful, actually. It is. And we talked about that. And I've been interested in how the brain works ever since. That sparked Jeff's interest. And from there on, he spent his life in pursuit of trying to understand how the brain worked.
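That hologram idea, each memory smeared across every connection rather than filed away in one spot, can be made concrete in a few lines of code. What follows is a minimal, hypothetical sketch, not anything Hinton built: a small Hopfield-style associative memory in Python, in which every stored pattern lives in all of the connection weights at once, and a damaged copy of a memory can still pull the whole thing back.

```python
import numpy as np

# A toy "holographic" memory (a Hopfield-style network, purely for illustration):
# each stored pattern is spread across every connection weight in the network,
# not kept in any single location.
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))       # three 64-"neuron" memories

W = sum(np.outer(p, p) for p in patterns) / 64.0   # each memory smeared over all weights
np.fill_diagonal(W, 0)                             # no neuron connects to itself

# Recall: start from a corrupted memory and let the network settle.
noisy = patterns[0].astype(float)
noisy[:16] *= -1                                   # corrupt a quarter of the neurons
for _ in range(5):
    noisy = np.sign(W @ noisy)                     # each neuron follows its inputs

print(np.array_equal(noisy, patterns[0]))          # usually True: memory recovered
```

Damage a few of the weights and the memory degrades gracefully instead of vanishing, which is exactly the property the hologram analogy is pointing at.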
Starting point is 00:03:53 So how does Jeff start to answer the question of how the brain works? So he goes to Cambridge and he studies physiology, looking for answers from his professors. Can you tell me how the brain works? And his physiology professors can't tell him. And so I switched to philosophy and then I switched to psychology in the hopes that psychology would tell me more about the mind and it didn't. And no one can tell him how the brain works. The layperson might ask, don't we understand how the brain works? No, we don't. We understand some things about how it works. I mean, we understand that when you're
Starting point is 00:04:33 thinking or when you're perceiving, there's neurons, brain cells, and the brain cells fire, they go ping and send the ping along an axon to other brain cells. We still don't know the details of how the neurons in our brains communicate with one another as we think and learn. And so all you need to know now is, well, how does it decide on the strength of the connections between neurons? If you could figure that out, you understand how the brain works. And we haven't figured it out yet. He then moves into a relatively new field called artificial intelligence. The field of artificial intelligence was created in the late 50s by a small group of scientists in the United States. Their aim was to create a machine that
Starting point is 00:05:22 could do anything the human brain could do. And in the beginning, many of them thought they could build machines that operated like the network of neurons in the brain, what they called artificial neural networks. But as they pursued this work, progress was so slow that they assumed it was too difficult to build a machine that operated like the neurons in the brain, and they gave up on the idea. So they embraced a very different way of thinking about artificial intelligence. They embraced something they called symbolic AI. You would take everything that you and I know about the world and put it all into a list of rules. Things like you can't be in two places at the same time.
Starting point is 00:06:17 Or when you hold a coffee cup, you hold the open end up. The idea was that you would list all these rules step by step, line of code by line of code, and then feed that into a machine. And then that would give it the power that you and I have in our own brains. So essentially tell the computer every rule that governs reality and the computer makes the decisions based on all of those rules. Right.
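To see why Hinton thought the rule-writing project was hopeless, it helps to look at what symbolic AI means in practice. Here is a minimal, hypothetical sketch in Python; the facts and rules are invented for illustration, and real systems of that era were written in languages like Lisp, but the shape is the same: a human has to anticipate and type in every rule by hand.

```python
# Symbolic AI in miniature: all "knowledge" is hand-written rules.
# (Invented facts and rules, purely for illustration.)

facts = {
    "cup_is_held": True,
    "cup_open_end_up": False,
    "alice_location": "kitchen",
}

def rule_cup_orientation(facts):
    # "When you hold a coffee cup, you hold the open end up."
    if facts["cup_is_held"] and not facts["cup_open_end_up"]:
        return "the cup is about to spill"
    return None

def rule_one_place_at_a_time(facts, claimed_location):
    # "You can't be in two places at the same time."
    if claimed_location != facts["alice_location"]:
        return "Alice cannot be in two places at once"
    return None

# The machine only knows what someone has typed in as a rule.
for conclusion in (rule_cup_orientation(facts),
                   rule_one_place_at_a_time(facts, "garden")):
    if conclusion:
        print(conclusion)
```

Every new fact about the world means another rule written by a person. That is the part Hinton bet against.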
Starting point is 00:06:54 But then Jeff Hinton comes along in 1972 as a graduate student in Edinburgh, and he says, wait, wait, wait, that is never going to happen. That's a lot of rules. You will never have the time and the patience and the person power to write all those rules and feed them into a machine. I don't care how long you take, he says, it is not going to happen. And by the way, the human brain doesn't work like that. That's not how we learn. So he returns to the old idea of a neural network that was discarded earlier by other AI researchers.
Starting point is 00:07:33 And he says, that is the way that we should build machines that think. We have them learn from the world like humans learn. So instead of feeding the computer a bunch of rules, like the other guys were doing, you'd actually feed it a bunch of information. And the idea was that the computer would gradually sort out how to make sense of it all, like a human brain. You would give it examples of what is happening in the world, and it would analyze those examples and look for patterns in what happens in the world and learn from those patterns. But Jeff is taking up an idea that had been largely discarded by the majority of the AI community. Did he have any evidence that his approach was
Starting point is 00:08:17 actually going to work? The only reason to believe it might work at all was because the brain works. And that was the main reason for believing there was any hope at all. His only evidence was that basically this is how the human brain worked. It was widely dismissed as just a crazy idea that was not going to work. And at the time, many of his colleagues thought he was silly for even trying. How did that feel to have most of your colleagues tell you that you were working on a crazy idea that would never work? It felt very like when I was at school, when I was nine and ten. I came from an atheist family and I went to a Christian school and everybody was saying, of course God exists. I was saying, no he doesn't and where's the others? So I was very used to being the outsider
Starting point is 00:09:08 and believing in something that was obviously true that nobody else believed in. And I think that was a very good training. Okay, so what happened next? So after graduate school, Jeff moves to the United States. He's a postdoc at a university in California. And he starts to work on an algorithm, a piece of math that can realize his idea. And what exactly does this algorithm do?
Starting point is 00:09:38 Jeff essentially builds an algorithm in the image of the human brain. Remember, the brain is a network of neurons that trade signals. That's how we learn. That's how we see. That's how we hear. What Jeff did that was so revolutionary was he recreated that system in a computer. He created a network of digital neurons
Starting point is 00:10:00 that traded information much like the neurons in the brain. So that question he set out to answer all those years ago, you know, how do brains work? He answered it, only for computers, not for humans. Right. He built a system that allowed computers to learn on their own. In the 80s, this type of system could learn in small ways. It couldn't learn in the complex ways that could really change our world. But fast forward a good three decades, Jeff and two of his students built a system that really opened up the eyes of a lot of people to what this type of technology was capable of.
Starting point is 00:10:44 He and two of his students at the University of Toronto built a system that could identify objects in photos. The classic example is a cat. What they did was take thousands of cat photos and feed them into a neural network. And in analyzing those photos, the system learned how to identify a cat. It identified patterns in those photos that define what a cat looks like, the edge of a whisker, the curve of a tail. And over time, by analyzing all those photos, the system could learn to recognize a cat in a photo it had never seen before. They could do this not only with cats, but with other objects, flowers, cars. They built a system that could identify objects with an accuracy that no one thought was possible.
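The learning approach looks completely different in code. Here is a minimal, hypothetical sketch, a single artificial neuron trained with NumPy on made-up data. It is never given a rule for what a "cat" looks like; it only sees labeled examples and nudges its connection strengths to reduce its errors. The 2012 system was a deep convolutional network with millions of such connections trained on real photos, but the training loop has this shape.

```python
import numpy as np

# One artificial neuron learning from labeled examples instead of rules.
# The "photos" are random 4-number vectors and the "cat" pattern is invented,
# purely to show the shape of the training loop.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))               # 200 toy "photos", 4 features each
hidden_w = np.array([1.5, -2.0, 0.5, 1.0])  # the unknown pattern defining "cat"
y = (X @ hidden_w > 0).astype(float)        # labels: 1 = cat, 0 = not a cat

w = np.zeros(4)                             # connection strengths start blank
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))          # current guesses, between 0 and 1
    w -= 0.5 * (X.T @ (p - y)) / len(y)     # nudge each weight to shrink the error

# The neuron now labels "photos" it has never seen before (mostly correctly).
X_new = rng.normal(size=(5, 4))
print("predicted:", (1 / (1 + np.exp(-(X_new @ w))) > 0.5).astype(int))
print("actual:   ", (X_new @ hidden_w > 0).astype(int))
```

Nobody wrote a rule about whiskers or tails; the "rule" ends up encoded in the learned weights.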
Starting point is 00:11:43 So it's basically image recognition, right? It's presumably why my phone can sort pictures of my family and deliver whole albums of pictures just of my husband or just of my dog and photographs of, you know, a hug or a beach. Right. So in 2012, all Jeff and his students did was publish a research paper describing this technology, showing what it could do.
Starting point is 00:12:07 What happens to that idea in the large sense over the next decade? It took off. That set off a race for this technology in the tech industry. So we decided what we would do is just take the big companies that were interested in us and we would sell ourselves. There was a literal auction for Jeff and his two students and their services. We'd sell the intellectual property plus the three of us. Google was part of the auction.
Starting point is 00:12:42 Microsoft, another giant of the tech world. Baidu, often called the Google of China. Over two days, they bid for the services of Jeff and his two students to the point where Google paid $44 million, essentially, for these three people who had never worked in the tech industry. And that worked out very nicely. So what does Jeff do at Google after this bidding war for his services? He works on increasingly powerful neural networks. And you see this technology move
Starting point is 00:13:24 into all sorts of products, not only at Google, but across the industry. Well, all the big companies like Facebook and Microsoft and Amazon and the Chinese companies all developed big teams in that area. And it was just sort of used everywhere. This is what drives Siri and other digital assistants. When you speak commands into your cell phone, it's able to recognize what you say because of a neural network. When you use Google Translate, it uses a neural network to do that. There are all sorts of things that we use today that use neural networks to operate. So we see Jeff's idea really transforming the world,
Starting point is 00:14:15 powering things that we use all the time in our daily lives without even thinking about it. Absolutely. But this idea, at Google and in other places, is also applied in situations that make Jeff a little uneasy. The prime example is what's called Project Maven. Google went to work for the Department of Defense and it applied this idea to an effort to identify objects in drone footage. If you can identify objects in drone footage, you can build a targeting system. If you pair that technology with a weapon, you have an autonomous weapon. That raised the concerns of people across Google at the time. I was upset too, but I was a vice president at that point. So I was a sort of executive of Google.
Starting point is 00:15:12 And so rather than publicly criticizing the company, I was doing stuff behind the scenes. Jeff never wanted his work applied to military use. He raised these concerns with Sergey Brin, one of the founders of Google, and Google eventually pulled out of the project. And Jeff continued to work at the company. Maybe I should have gone public with it, but I thought it wasn't. It's somehow not right to bite the hand that feeds you, even if it's a corporation. But around the same time, the industry started to work on a new application for the technology that eventually made him even more concerned.
Starting point is 00:15:52 It began applying neural networks to what we now call chatbots. Essentially, companies like Google started feeding massive amounts of text into neural networks, including Wikipedia articles, chat logs, digital books. These systems started to learn how to put language together in the way you and I put language together. The auto-completion on my email, for example. Absolutely. But taken up to an enormous scale. As they fed more and more digital text into these systems, they learned to write like a human.
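The training signal behind all of this is deceptively simple: given the text so far, predict what comes next. Here is a toy, hypothetical version in Python that just counts which word tends to follow which in a tiny invented corpus. Systems like ChatGPT replace the counting with a neural network holding billions of weights trained on web-scale text, but the objective, predicting the next piece of text, is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: learn word-to-word statistics from text, then
# generate by repeatedly predicting what comes next. (Invented corpus;
# real systems use neural networks trained on web-scale text.)
corpus = ("the cat sat on the mat . the cat chased the dog . "
          "the dog sat on the rug .").split()

next_word = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    next_word[w1][w2] += 1                  # count which word follows which

def complete(word, length=5):
    out = [word]
    for _ in range(length):
        if word not in next_word:
            break
        word = next_word[word].most_common(1)[0][0]  # likeliest next word
        out.append(word)
    return " ".join(out)

print(complete("the"))  # a plausible-looking phrase learned purely from counts
```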
Starting point is 00:16:40 This is what has resulted in chatbots like ChatGPT and Bard. And what gave Jeff pause about all of this? Why was he so concerned? What's happened to me over the last year is I've changed my mind completely about whether these are just not yet adequate attempts to model what's going on in the brain. That's how they started off. Well, he still feels like these systems are not as powerful as the human brain, and they're not. They're still not adequate to model what's going on in the brain. They're doing something different and better. But in other ways, he realizes they're far more powerful. More powerful how, exactly? Jeff thinks about it like this. If you learn something complicated, like a new bit of physics,
Starting point is 00:17:21 and you want to explain it to me, in our brains, all our brains are a bit different, and it's going to take a while and be an inefficient process. You and I have a brain that can learn a certain amount of information, and after I learn that information, I can convey that to you. But that's a slow process. Imagine if you had a million people, and when any one of them learns something,
Starting point is 00:17:45 all the others automatically know it. That's a huge advantage. And to do that, you need to go digital. With these neural networks, Jeff points out, you can piece them together. A small network that can learn a little bit of information can be connected to all sorts of other neural networks that have learned from other parts of the internet. And those can be connected to still other neural networks that learn from additional parts. So these digital agents, as soon as one of them learns something, all the others know it. They can all learn in tandem and they can trade what they have learned with each other in an instant. It means that many, many copies of a digital agent
Starting point is 00:18:28 can read the whole internet in only a month. We can't do that. That's what allows them to learn from the entire internet. You and I cannot do that individually, and we can't do it collectively. Even if each of us learns a piece of the internet, we can't trade what we have learned so easily with each other, but machines can. Machines can operate in ways that humans cannot.
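That instant sharing is not a metaphor; it is roughly how large models are trained today, in a setup usually called data parallelism. Here is a minimal, hypothetical sketch: several identical copies of a model each learn from their own slice of the data, their proposed updates are averaged, and every copy absorbs what all the others just learned.

```python
import numpy as np

# Data parallelism in miniature: identical copies of a model learn from
# different slices of data, then pool their weight updates, so each copy
# instantly "knows" what the others learned. (Toy least-squares model;
# real systems do this across thousands of machines.)
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])         # the pattern hidden in the data

def gradient(w, X, y):
    return X.T @ (X @ w - y) / len(y)       # this copy's proposed update direction

w = np.zeros(3)                             # shared weights, identical in every copy
for step in range(200):
    updates = []
    for copy in range(4):                   # four copies, four data slices
        X = rng.normal(size=(32, 3))
        y = X @ true_w + 0.01 * rng.normal(size=32)
        updates.append(gradient(w, X, y))
    w -= 0.1 * np.mean(updates, axis=0)     # everyone absorbs everyone's learning

print(np.round(w, 2))                       # close to [ 2. -1.  0.5]
```

Four copies here; a production run does the same averaging across thousands of machines at once, which is the scale advantage being described.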
Starting point is 00:19:03 So what does all this add up to for Jeff? Well, in a sense, he sees this as a culmination of his 50 years of work. He always assumed that if you threw more data at these systems, they would learn more and more. He didn't think they would learn this much, this quickly, and become this powerful. Look at how it was five years ago and look at how it is now and take that difference and propagate it forwards. And that's scary. We'll be right back. Okay, so what exactly is Jeff afraid of when he realizes that AI has this turbocharged capability?
Starting point is 00:20:02 There's a wide range of things that he's concerned about. At the small end of the scale are things like hallucinations and bias. Scientists talk about these systems hallucinating, meaning they make stuff up. If you ask a chatbot for a fact, it doesn't always tell you the truth. And it can respond in ways that are biased against women and people of color. But as Jeff says, those issues are just a byproduct of the way chatbots mimic human behavior. We can fabulate. We can be biased. And he believes all that will soon be ironed out.
Starting point is 00:20:42 So I don't, I mean, bias is a horrible problem, but it's a problem that comes from people. And it's easier to fix in a neural net than it is in a person. Where he starts to say these systems get scary is first and foremost with the problem of disinformation. I see that as a huge problem, not being able to know what's true anymore. These are systems that allow organizations, nation states, other bad actors to spread disinformation at a scale and an efficiency that was not possible in the past. These chatbots are going to make it easier for them to make very good fake videos. They can also produce photorealistic images and videos. Deepfakes. Right. They're getting better quite quickly.
Starting point is 00:21:32 He, like a lot of people, is worried that the internet will soon be flooded with fake text, fake images, and fake videos to the point where we won't be able to trust anything we see online. So that's the short-term concern. Then there's a concern in the medium term, and that's job loss. Today, these systems tend to complement human workers, but he's worried that as these systems get more and more powerful, they will actually start replacing jobs in large numbers. And what are some examples?
Starting point is 00:22:05 A place where it can obviously take away all the drudge work and maybe more besides is in computer programming. None too surprisingly, Jeff, a computer scientist, points to the example of computer programmers. These are systems that can write computer programs on their own. So it may be that in computer programming, you don't need so many programmers anymore because you can tell one of these chatbots what you want the program to do. Those programs are not perfect today.
Starting point is 00:22:37 Programmers tend to use what they produce and incorporate the code into larger programs. But as time goes on, these systems will get better and better and better at doing a job that humans do today. And you're talking about jobs that aren't really seen as being vulnerable because of tech up until this point, right? Exactly. The thinking for years was that artificial intelligence would replace blue-collar jobs, that robots, physical robots, would do manufacturing jobs and sorting jobs in warehouses. But what we're seeing is the rise of technology that can replace white-collar workers, people that do office work. So that's the medium term. Then there are more long-term concerns.
Starting point is 00:23:26 And let's remember that as these systems get more and more powerful, Jeff is increasingly concerned about how this technology will be used on the battlefield. The US Defense Department would like to make robot soldiers, and robot soldiers are going to be pretty scary. In an offhanded way, he refers to this as robot soldiers. Like actually soldiers that are robots? Yes, actually soldiers that are robots. And the relationship between a robot soldier and your idea is pretty simple. You are working on
Starting point is 00:24:00 computer vision. If you have computer vision, you give that to a robot. It can identify what's going on in the world around it. If it can identify what's going on, it can target those things. Also, you can make it agile. So you can have things that can move over rough ground and can shoot people. And the worst thing about robot soldiers is if a large country wants to invade a small country, they have to worry a bit about how many Marines are going to die. But if they're sending robot soldiers, instead of worrying about how many Marines are going to die, the people who fund the politicians are going to say, great, you're going to send these expensive weapons that will get used up. The military industrial complex would just love robot soldiers.
Starting point is 00:24:48 What he talks about is potentially this technology lowering the bar to entry for war, that it becomes easier for nation states to wage war. So it's kind of like drones. The people doing the killing are sitting in an office with a remote control really far away from the people doing the dying. No, it's actually a step beyond that. It's not people controlling the machines. It's the machines making decisions on their own increasingly. That is what Jeff is concerned about.
Starting point is 00:25:26 And then there's the sort of existential nightmare of this stuff getting to be much more intelligent than us and just taking over. His concern is that as we give machines certain goals, as we ask them to do things for us, in service of trying to reach those goals, they will do things we don't expect them to do. So he's worried about unintended consequences. Unintended consequences. And this is where we start to venture into the realm of science fiction. Hello, Hal, do you read me? Do you read me, Hal? For decades, we've watched this play out in books and movies.
Starting point is 00:26:15 Affirmative, Dave. I read you. If anyone has seen Stanley Kubrick's great film, 2001... Mm-hmm. Open the pod bay doors, Hal. I'm sorry, Dave. I'm afraid I can't do that. This mission is too important for me to allow you to jeopardize it. We've watched the HAL 9000 spin outside the control of the people who created it. I know that you and Frank were planning to disconnect me. Where the hell did you get that idea, Hal?
Starting point is 00:26:41 Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move. That is a scenario, believe it or not, that Jeff is concerned about, and he is not alone. Basically, robots taking over. Exactly.
Starting point is 00:27:07 If you give one of these super intelligent agents a goal, it's going to very quickly realize that a good sub-goal, for more or less any goal, is to get more power. Whether these technologies are deployed on the battlefield or in an office or in a computer data center, Jeff is worried about humans ceding more and more control to these systems. We love to get control. And that's a very sensible goal to have, because if you've got control, you can get more done. But these things are going to want to get control too, for the same reason, just in order to get more done. And so that's a scary direction. So this sounds pretty far-fetched, honestly. But like, okay, let's play it out as if it wasn't. Like, what would be that doomsday scenario? Paint the picture for me.
Starting point is 00:27:54 Think about it in simple terms. If you ask a system to make money for you, which people, by the way, are already starting to do, can you use ChatGPT to make money on the stock market? As people do that, think of all the ways that you can make money and think of all the ways that that could go wrong. That is what he's talking about. Remember, these are machines. Machines are psychopaths. They don't have emotions.
Starting point is 00:28:23 They don't have a moral compass. They do what you ask them to do. Make us money? Okay, we'll make you money. Perhaps you break into a computer system in order to steal that money. If you own oil futures in Central Africa, perhaps you foment a revolution to increase the price of those futures, to make money from it. Those are the kind of scenarios that Jeff and many other people I've talked to relate. Today, a system like ChatGPT is not going to destroy humanity. Full stop. Good. And if you bring this up with a lot of experts in the field, they get angry that you even bring it up. And they point out that this is not possible today. And I really pushed Jeff on this. How do you see that existential risk relative to what we have today? I mean, today, you have GPT-4 and
Starting point is 00:29:30 it does a lot of things that you don't necessarily expect, but it doesn't have the resources it needs to write computer programs and run them. It doesn't have everything that you need. Right. But suppose that you gave it a high level goal, like be really good at summarizing text or something. And it then realizes, okay, to be really good at that, I need to do more learning. How am I going to do more learning? Well, if I could grab more hardware and run more copies of myself. It doesn't work that way today, though, right? It requires someone to say, have all the hardware you want.
Starting point is 00:30:04 It can't do that today because it doesn't have access to the hardware and it cannot replicate itself. But suppose it's connected to the internet. Suppose it can get into a data center and modify what's happening there. Right, but it cannot do that today. I don't think that's going to last. And the reason I don't think it's going to last is because you make it more efficient by giving it the ability to do that. And there will be bad actors who just want to make it more efficient. So what you're basically saying is that because humans are flawed and because they're going to want to push this stuff forward, they're going to continue to push it forward in ways that do push it into those danger areas.
Starting point is 00:30:43 Yes. So he's basically arguing that this is a Pandora's box, that it's been opened, and that because people are people, they're going to want to use what's inside of it. But I guess I'm wondering, I mean, you know, much like you're reflecting here, how much weight should we give to his warnings? Yes, he has a certain level of authority, godfather of AI and all of that, but he has been surprised by its evolution in the past, and he might not be right. Right. There are reasons to trust Jeff, and there are reasons not to trust him. About five years ago, he predicted that all radiologists would be obsolete by now. And that is not the case. You cannot take everything he says at face value. I want to underscore that. But you've got to remember, this is someone who lives in the future. He's been living in the
Starting point is 00:31:40 future since he was in his mid-20s. He saw then where these systems would go, and he was right. Now, once again, he's looking into the future to see where these systems are headed, and he fears they're headed to places that we don't want them to go. Cade, what steps does he suggest we take to make sure that these doomsday scenarios never happen? Well, he doesn't believe that people will just stop developing the technology. If you look at what the financial commentators say, they're saying Google's behind Microsoft, don't buy Google stock. This technology is being built by some of the biggest companies on earth, public companies who are designed to make money. They are now in competition.
Starting point is 00:32:30 Basically, if you think of it as a company whose aim is to make profits, I don't work for Google anymore, so I can say this now. As a company, they've got to compete with that. And he sees this continuing, not just with companies, but with governments in other parts of the world. So in a way, it's kind of like nuclear weapons, right? We knew that they would destroy the world, yet we mounted an arms race to get them anyway. Absolutely. He uses that analogy.
Starting point is 00:33:03 Others in the field use that analogy. This is a powerful technology. So I think there's zero chance, I shouldn't say zero, but minuscule, minuscule chance of getting people to agree not to develop it further. He wants to make sure
Starting point is 00:33:18 we get the balance right between using this technology for good and using it for ill. The best hope is that you take the leading scientists and you get them to think very seriously about, are we going to be able to control this stuff? And if so, how? That's what the leading minds should be working on. And that's why I'm doing this podcast. So, Cade, you've laid out a pretty complicated puzzle here. On the one hand, there's this remarkable technology, and on the other, it has left this inventor and others worried about the future because of those very surprising and sudden evolutions. Did you ask Jeff if, you know, looking back, he would have done
Starting point is 00:34:14 anything differently? I asked him that question multiple times. Is there part of you, at least, or maybe all of you, that regrets what you have done? I mean, you could argue that you are the most important person in the progress of this idea over the past 50 years. And now you're saying that this idea could be a serious problem for the planet. For our species. For our species. Yep.
Starting point is 00:34:54 Various people have been saying this for a while, and I didn't believe them because I thought it was a long way off. What's happened to me is understanding there might be a big difference between this kind of intelligence and biological intelligence has made me completely revise my opinions. It's a complicated situation for him to be in. Again, do you regret your role in all this? So the question is, looking back 50 years, would I have done something different?
Starting point is 00:35:24 Given the choices I made 50 years ago, I think they were reasonable choices to make. It's just turned out very recently that this is going somewhere I didn't expect. And so I regret the fact that it's as advanced as it is now on my part in doing that. But it's a distinction Bertrand Russell made between wise decisions and fortunate decisions. He paraphrased the British philosopher Bertrand Russell. You can make a wise decision that turns out to be unfortunate. Saying that you can make a wise decision that still turns out to be unfortunate. And that's basically how he feels.
Starting point is 00:36:02 And I think it was a wise decision to try and figure out how the brain worked. And part of my motivation was to make human society more sensible. But it turns out that maybe it was unfortunate. It's reminding me, Cade, of Andrei Sakharov, who was, of course, the Soviet scientist who invented the hydrogen bomb and witnessed his invention and became horrified and spent the rest of his life trying to fight against it. Do you see him that way? I do.
Starting point is 00:36:37 He's someone who has helped build a powerful technology, and now he is extremely concerned about the consequences. Even if you think the doomsday scenario is ridiculous or implausible, there are so many other possible outcomes that Jeff points to, and that is reason enough to be concerned. Cade, thank you for coming on the show. Glad to be here. On Tuesday, leaders from AI companies such as OpenAI, the maker of ChatGPT, Google, and others, plan to come together to warn about what they see as AI's existential risk to humanity.
Starting point is 00:37:33 In a statement, the leaders said, quote, mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war. We'll be right back. Here's what else you should know today. President Biden and Speaker Kevin McCarthy reached an agreement on Saturday night to lift the government's debt limit for two years, enough to get it past the next presidential election. The agreement still needs to pass Congress, and both McCarthy and Democratic leaders spent the rest of the weekend making an all-out sales pitch to members of their own parties. The House plans to consider the agreement on Wednesday, less than a week before the June 5th deadline,
Starting point is 00:38:43 when the government will no longer be able to pay its bills. And in Turkey on Sunday, President Recep Tayyip Erdogan beat back the greatest political challenge of his career, securing victory in a presidential runoff that granted him five more years in power. Erdogan, a mercurial leader who has vexed his Western allies while tightening his grip on the Turkish state, will deepen his conservative imprint on Turkish society in what will be, at the end of this term, a quarter century in power. Today's episode was produced by Stella Tan, Rikki Novetsky, and Luke Vander Ploeg, with help from Mary Wilson. It was edited by Michael Benoist, with help from Anita Badejo and Lisa Chow. It contains original music by Marion Lozano,
Starting point is 00:39:32 Dan Powell, and Rowan Niemisto, and was engineered by Chris Wood. Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly. That's it for The Daily. I'm Sabrina Tavernise. See you tomorrow.
