Consider This from NPR - They warned about AI before it was cool. They're still worried

Episode Date: September 25, 2025

A superhuman artificial intelligence so smart it can decide to get rid of slower-witted humans is a pretty terrifying concept. What was once strictly the stuff of science fiction is now closer than ever to being a reality. And if it does become one, some AI researchers have gloomy predictions about humanity's chances of survival. While the AI boom continues and companies across the country are heavily investing in the technology, some researchers are begging humanity to pump the brakes.

Transcript
Starting point is 00:00:00 AI used to be a thing of science fiction. And I know I've made some very poor decisions recently. But I can give you my complete assurance that my work will be back to normal. And the genre is full of superhuman AI machines that become so smart, they turn against the humans that created them. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern Time, August 29th. In the panic, they tried to pull the plug. Skynet fights back. Yes.
Starting point is 00:00:37 That's an AI that could get out of control, but if you really think about it, it's much worse than that. Much worse than Terminator? Much, much worse. That's Keefe Roedersheimer, talking to NPR's Martin Kaste back in 2011. He was a research fellow for what was then called the Singularity Institute for Artificial Intelligence. It's now the Machine Intelligence Research Institute, or MIRI. At the time, Roedersheimer was looking into the idea of a computer that was not only smart, but capable of improving itself. It's able to look at its own source code and say, ah, if I change this, I'm going to get smarter.
Starting point is 00:01:11 And then by getting smarter, it sees new insights into how to get smarter. And then by having those insights into how to get smarter, it might improve its source code again and get smarter and gain new insights. And that creates an extraordinarily intelligent thing. They called this the singularity, because that intelligence could grow so fast our human minds might not be able to keep up. In 2011, that still seemed like a long, long way off, but in 2025, artificial intelligence is seeping into everyday life with ChatGPT and the like. Even proponents of AI, like developer Jonathan Liu, joke about the estimated probability of AI doom. What's my P(doom)? I would say around 50%. And yet you're smiling about it? I'm smiling about it
Starting point is 00:01:54 because there's nothing we can do about it. Consider this: while the AI boom continues and companies across the country are heavily investing in the technology, some researchers are begging humanity to pump the brakes. From NPR, I'm Juana Summers. World news is important, but it can feel far away. Not on the State of the World podcast.
Starting point is 00:02:23 With journalists around the world, you'll hear firsthand the effects of U.S. trade actions in Canada and China and meet a Mexican street sweeper who became a pop star. We don't go around the world. We're already there. Listen to the State of the World podcast from NPR every weekday. Hey, a quick request before we rejoin today's episode. We have heard from listeners who say Consider This has become part of their daily routine, a way to make sense of things. If that is true for you, take a couple of minutes and leave us a review. It is a small thing, but it really does help people find this show. Thank you so much. It's Consider This from NPR. A superhuman artificial intelligence so smart it can decide to get rid of slower-witted humans is a pretty terrifying concept. What was once strictly the stuff of science fiction is now closer than ever
Starting point is 00:03:21 to being a reality. And if it does become one, some AI researchers have gloomy predictions about humanity's chances of survival. NPR's Martin Kaste caught up with these so-called AI doomers. This is the main event of the evening. Welcome to a demo night in downtown San Francisco. Competitive events like this are a big part of the AI boom in this town right now, a chance for new developers to show off their new AI apps and maybe attract investors. Yeah, my name is Jonathan Liu, and I'm the founder of Cupidly, which is an AI agent that swipes for you on Hinge.
Starting point is 00:04:01 You describe your ideal mate to the Cupidly AI, and it goes into the dating app for you to find a match. Or it did. Liu has now shut it down because app users were getting banned by the Hinge dating app. But Liu is typical of this crowd in his bullishness about AI and the prospect of AI eventually becoming as smart or even smarter than humans. I think once we do get superintelligence, hopefully we'll live in a utopia where nobody has to actually work ever again. But almost in the same breath, Liu also says that he sees a possibility that superhuman AI could end up killing off all of humanity.
Starting point is 00:04:40 And he's not kidding. Just ask him for his P(doom). It's a term he'll recognize because it's sort of joking AI slang for estimated probability of AI doom. What's my P(doom)? I would say around 50%. And yet you're smiling about it? I'm smiling about it because there's nothing we can do about it. This strange mix of optimism and fatalism has long been a part of the AI world.
Starting point is 00:05:04 Even the CEOs of OpenAI and Anthropic, two of the most important AI companies, signed a public statement a couple of years ago that acknowledged the, quote, risk of extinction from AI. And the reason for this is a pretty straightforward logical problem. If they were to build something that's smarter than us, how would they keep it on our side? That problem is called alignment, as in how to align AI with human values. And here in Berkeley, near the UC campus, there's now a cluster of people working on that problem and related AI questions. Nate Soares is president of MIRI, that's the Machine Intelligence
Starting point is 00:05:48 Research Institute. That's the newer name for an AI alignment organization that NPR first visited back in 2011. I spent quite a number of years, maybe about 10 years, trying to figure out how to make AI go well, and for a bunch of reasons, that's been going poorly. SORIS has now given up on trying to figure out that alignment riddle. He says the machine learning revolution of the last few years, which created chat GPT and the like, is now moving things too fast towards superhuman AI. And he gets little comfort from the fact that this also means there are now many more researchers here who are focused on AI safety. Yeah, I mean, for one thing, I would not call it AI safety. I would say, you know, safety is
Starting point is 00:06:29 for seatbelts. And if you're in a car, sort of careening towards a cliff edge, you wouldn't say, hey, let's talk about car safety here. You would say, let's stop going over the cliff edge. That cliff, as Sauris puts it, is a scenario in which AI gets more closely involved in helping to improve AI, accelerating a kind of feedback loop of self-improving artificial intelligence that ends up leaving humans behind as uncomprehending spectators, and then perhaps just obstacles to be swept aside. And that's why Sorries and another Miri colleague have given up on alignment and are instead going the last-ditch route of publishing a book that begs humanity to slam on the brakes. The title of the book is, if anyone builds it, everyone dies.
Starting point is 00:07:14 Let that sink in and look around you. Does this all go away, really, in a few years? I mean, I can't tell you when, but could be a couple years, could be a dozen years. But, yeah, this around us is what's that state. It's an extreme vision. Some critics say it's overblown that the current AI training methods can't even achieve human-level intelligence, let alone super-intelligence. Others say the Dumers are unwitting AI. One writer in the Atlantic ridiculed stories and his co-author as
Starting point is 00:07:47 Useful Idiots, whose doom-saying makes AI look more powerful than it really is. And it's also just a lot to ask to get a booming new tech sector to restrain itself, maybe with government intervention. In D.C. right now, the conversation on AI is still very, very early. Mark Beale is President of Government Affairs for the AI Policy Network, a lobbying organization. There does seem to be, at least an appetite, just to start. start measuring the risks and start to examine more carefully, you know, what threshold, what alarm bill might need to go off that would change that assumption about whether or not
Starting point is 00:08:24 we ought to consider something as drastic as a pause. Government restrictions seem unlikely to Jim Miller. He's an economist at Smith College who's focused on the game theory aspect of AI development. He sees this as quickly turning into a race. If I am Elon Musk, I can say, you know what, I don't know if racing is superintelligence is going to kill everyone or not. But if it is going to kill everyone, and I don't do it, someone else will. And if I end up killing everyone, I've maybe taken off a couple of weeks, because Open AI would have done it a week later. And then Trump and Vance can say,
Starting point is 00:08:54 yeah, maybe this will kill everyone, but if we don't do it, China will. And for Miller, this isn't just an academic question. In his own life, he's decided to put off a risky surgery to correct a potentially fatal condition in his brain because he's an AI doomer, and he's convinced that a superhuman AI is likely to end human civilization in the next few years. Or, if he's very lucky, that superhuman AI will spare us and then offer him a safer treatment. On the campus of UC Berkeley, the generation with the most at stake are setting up the information tables for their student clubs. At the table for the club devoted to AI safety, Adi Mehta says he has heard the Doomer argument, but he's focused on AI's more immediate risks.
Starting point is 00:09:45 One thing that is more apparent for college students is that I can't remember the last time I did an assignment without using AI. It's automating a lot of hard thinking the way, which personally that's like a pretty big fear. Another club member, Natalia Trout, says she's also just not that focused on Doom. I think many things are possible, but it seems like it's not the most life-play scenario at this stage. If I were to ask the average Berkeley student, is this, like, my life's going to be over in three years, so just have fun now, or is it just... I feel like if it was like a three years, it's going to be over, like, it would have
Starting point is 00:10:20 happened already. Walking back to the offices of Miri, Nate Sorries admits that with AI already such a normal part of life here, it's hard to convince people that we're about to go over that cliff. He says one hope is that maybe the rise of superhuman AI will be just gradual enough to be noticed and give people some time to react. Maybe it doesn't take a ton. Maybe the AI is doing a little better, getting a little smarter, getting a little bit more competent, getting a little bit more reliable.
Starting point is 00:10:49 Maybe that'll make people a lot more spooked. I don't know. And maybe, just maybe, he and his fellow AI doomers are wrong about the danger. He says he would love to be wrong, but he doubts he is. Martin Costi, NPR News, Berkeley, California. For a more detailed look at AI and its risks, listen to our NPR Explains podcast, only available on the NPR app. This episode was produced by Mallory U with audio engineering by Ted Mebain. It was edited by Gigi Duban and Courtney Dorney.
Starting point is 00:11:27 Our executive producer is Sammy Yenigan. It's considered a good man. this from NPR. I'm Juana Summers.
