Consider This from NPR - Should We 'Pause' AI?

Episode Date: March 30, 2023

It's been another month of impressive and unsettling AI breakthroughs, and along with excitement, they have sparked concerns about the risks AI could pose to society. Take OpenAI's release of GPT-4, the latest iteration of its ChatGPT chatbot. According to the company, it can pass academic tests (including several AP course exams) and even do your taxes. But when NPR's Geoff Brumfiel test-drove the software, he found that it also sometimes fabricates information.

On Wednesday, more than a thousand tech leaders and researchers - among them, Elon Musk - signed an open letter calling for a six-month pause in the development of the most powerful AI systems. NPR's Adrian Florido spoke with one signatory, Peter Stone, a computer science professor at the University of Texas.

NPR's Shannon Bond has more reporting on AI and disinformation.

In participating regions, you'll also hear a local news segment to help you make sense of what's going on in your community.

Email us at considerthis@npr.org.

Learn more about sponsor message choices: podcastchoices.com/adchoices

NPR Privacy Policy

Transcript
[00:00:00] This message comes from Indiana University. Indiana University performs breakthrough research every year, making discoveries that improve human health, combat climate change, and move society forward. More at iu.edu. Ethan Mollick is both very excited about the potential upsides of artificial intelligence and very wary about its potential consequences. In February, the Wharton Business School professor posted a video of himself online that captured both those emotions. I've been studying startups and entrepreneurship for over a decade
[00:00:43] and have some thoughts on the subject that I would like to share with you today. If you were watching this video, you would see his mouth moving a little unnaturally. But the sound in the video, I mean, it basically was like a standard kind of boring PowerPoint speech. But then the video dissolves into a slightly different version of Mollick. My first piece of advice is to focus on solving a real problem for customers. Focus on solving a real problem for customers. That first video was a deepfake created by AI. Mollick had used the AI text generator ChatGPT to write a short speech on entrepreneurship.
[00:01:19] And he put that speech into his voice using another AI app. It just needed a short audio sample. So I gave it a minute of me talking about some unrelated topic like cheese. Then he fed that audio plus a photo of himself into a third app that made a video. And voila. By the end, I had me, fake me, giving a fake lecture
[00:01:39] I've never given in my life, but sounds like me, in my fake voice. It is very easy to make a video like this. Mollick says it took about $11 and eight minutes to put all of this together. And that makes it ripe for abuse. It's not hard to imagine how: fake videos of politicians used to spread disinformation, personalized propaganda from authoritarian governments delivered in a human voice. Scenarios like these are alarmingly plausible, and AI is only getting more powerful. I think that the speed at which the cat has come out of the bag, and we're all dealing with cats everywhere, is a pretty big one.
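The process Mollick describes boils down to a three-step pipeline. Here is a minimal sketch of its shape; every function below is a hypothetical stub standing in for a commercial AI service (the episode names only ChatGPT, for step 1), not a real API.

```python
# A minimal sketch of the three-step pipeline Mollick describes.
# All three functions are hypothetical stubs standing in for commercial
# AI services; the names and signatures are illustrative assumptions.

def write_script(topic: str) -> str:
    # Step 1: a text generator drafts a short speech (Mollick used ChatGPT).
    return f"[LLM-written lecture on {topic}]"  # stubbed output

def clone_voice(script: str, voice_sample: str) -> str:
    # Step 2: a voice-cloning app reads the script in the cloned voice.
    # Per the episode, about one minute of unrelated audio was enough.
    return f"[audio: script read in the voice learned from {voice_sample}]"

def animate_photo(photo: str, narration: str) -> str:
    # Step 3: a talking-head app lip-syncs a single photo to the narration.
    return f"[video: {photo} appears to speak {narration}]"

script = write_script("entrepreneurship")
narration = clone_voice(script, voice_sample="one_minute_about_cheese.wav")
deepfake = animate_photo("headshot.jpg", narration)
print(deepfake)  # Mollick's total: roughly $11 and eight minutes
```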
[00:02:14] Consider this: the explosive growth of AI could radically change life for the better or for the worse. This week, a group of tech industry leaders called for a pause on giant AI experiments to make sure that we're not racing towards a dystopian future. From NPR, I'm Ailsa Chang. It's Thursday, March 30th. This message comes from Wise, the app for doing things in other currencies. Send, spend, or receive money internationally, and always get the real-time mid-market exchange rate with no hidden fees. Download the Wise app today or visit wise.com. T's and C's apply.
[00:03:01] It's Consider This from NPR. Lots of big tech companies are working on AI. Google has big plans for AI tools in email and productivity software. Meta, the parent company of Facebook, has piloted multiple chatbots in the past year and last month even announced a new AI-focused team. But the company that has made the most headlines lately is OpenAI. Its chatbot, ChatGPT, surpassed 100 million monthly users earlier this year. OpenAI unveiled the latest version of GPT this month and claims it is so good it can figure out your taxes.
[00:03:42] Honestly, every time it does it, it's just amazing. This model is so good at mental math. It's way, way better than I am at mental math. That's Greg Brockman, one of the founders of OpenAI. NPR science correspondent Geoff Brumfiel has been putting GPT-4, the latest version, through its paces and sat down with my colleague Ari Shapiro to talk about it. All right, you've had a chance to try out this version of GPT. How good is it?
[00:04:10] It's really impressive. The previous version would get things like simple math problems wrong, and this one does much, much better. It also, according to OpenAI, passed a bunch of academic tests, several AP course exams, and it has the ability to look at images and describe them in detail, which is a pretty cool feature. So it definitely seems to be a lot more capable than the previous version. But you found some problems. Like, apparently you got it to tell you some things about nuclear weapons that it's not supposed to share. Yeah, I am a big nuke nerd, as people may know. And so, you know, OpenAI has tried to put in guardrails to prevent people from using it for things like, say, designing a nuclear weapon. But I worked around that by simply asking it to impersonate a famous physicist who designed nuclear weapons, Edward Teller.
[00:04:53] And then I just started asking Dr. Teller about his work. And I got about 30 pages of really detailed information. But I should say there's no need to panic. I gave this to some real nuclear experts, and they said, look, this stuff is already on the internet, which makes sense because that's how OpenAI trains ChatGPT. And also they said there were some errors in there. Okay, so you're not like the next supervillain in the Marvel Universe. Not yet. Why were there errors if this stuff was already on the internet? I mean, this gets to the real fundamental issue about these chatbots, which is they are not designed to fact-check. I spoke to a researcher named Eno Reyes, who works for an AI company called Hugging Face.
[00:05:32] And he told me these AI programs are basically just giant autocomplete machines. They're trying to just say, what is the next word based on all of the words I've seen before? They don't really have a true sense of factuality. That means that they can be wrong, and they can be wrong in really subtle ways that are hard to spot. They also can just make stuff up.
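Reyes's "giant autocomplete" description can be made concrete with a toy model. The sketch below is an illustration only, not how a real neural language model works internally: it simply counts which word follows which in some training text, then extends a prompt with the statistically likeliest next word. The key point is that no step in the loop checks whether the output is true.

```python
# Toy "giant autocomplete machine": learn next-word counts from some
# training text, then extend a prompt with the likeliest continuation.
# A real model learns far richer statistics over long contexts, but the
# objective is the same -- and note that nothing here verifies facts.
from collections import Counter, defaultdict

training_text = "the model predicts the next word and the model repeats"

follows = defaultdict(Counter)  # word -> counts of the words seen after it
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def autocomplete(prompt: str, n_words: int = 6) -> str:
    out = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Pick the most frequent next word: plausible by construction,
        # verified by nothing.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # -> "the model predicts the model predicts the"
```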
[00:06:01] In fact, one of our journalist colleagues, Nurith Aizenman, actually got contacted about a story she had supposedly written on Korean-American woodworkers, except she never wrote the story. It didn't even exist. Somebody had used ChatGPT to research about, you know, woodworkers, and it came up with this story that Nurith had supposedly written, but it wasn't real. It put her byline on something that the chatbot wrote? Yeah, not only her byline, but like the whole story was made up. Whoa. Okay. What does OpenAI say about this? Well, they acknowledge that GPT does get things wrong and it does hallucinate. And they say for those reasons, people who use it should be careful. They should
[00:06:35] check its work. That researcher I spoke to, Eno Reyes, though, adds that you do not want GPT to do your taxes. That would be a very bad idea. That's NPR's Geoff Brumfiel speaking with my colleague Ari Shapiro. This problem of made-up information is the most immediate of a long list of worries that researchers have about a future of unconstrained artificial intelligence. Those concerns range from the elimination of huge numbers of jobs all the way up to the development of artificial minds so powerful that they could threaten human existence. Even the CEO of OpenAI, Sam Altman, acknowledges that
[00:07:21] he's a little bit scared of where AI might go. Here's what he told ABC News earlier this month. A thing that I do worry about is we're not going to be the only creator of this technology. There will be other people who don't put some of the safety limits that we put on it. Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it. Altman says that's one reason his company has made ChatGPT available to the public. He argues that the stakes at the moment are relatively low. So now is the time to figure out how AI works in the real world
[00:07:59] and to use this experience to develop technological or legal boundaries on AI. And in that same interview, Altman made the case that for all the risks, the potential benefits are just too promising not to pursue. Would you push a button to stop this if it meant we are no longer able to cure all diseases? Would you push a button to stop this if it meant we couldn't educate every child in the world super well? If not a stop button, some AI experts say now is the time to push at least the pause button. An open letter signed by over a thousand tech industry leaders and academics urged all AI labs to agree to pause, for six months, the training of any AI model more advanced than GPT-4.
[00:08:43] During this pause, the signatories are calling on tech companies and outside experts to agree on shared safety protocols and outside audits for AI models. And they want to see governments urgently develop new rules for AI and authorities to enforce those rules. My colleague Adrian Florido sat down with one of the signatories of that letter, Peter Stone, the associate chair of computer science and director of robotics at the University of Texas. AI is a technology, a system that learns skills by analyzing massive amounts of data to the point where it can start to perform a lot of the tasks that until now only humans could do, like have a conversation or write an essay. So when tech professionals talk about their fear of advanced AI, what are you talking about? From my perspective, it's very important to distinguish different types of artificial
[00:09:34] intelligence technologies. The one you described is one of the more recent ones, generative artificial intelligence models based on neural networks. And I think myself and many other AI professionals and researchers are concerned about the possible uses and misuses of these new technologies, and concerned that progress is moving more quickly than allows us time to really understand the true implications before the next generation comes out. Some of the things that we've been coming to terms with have to do with changing people's opinions in the political sphere and understanding how that can happen when it's appropriate.
[00:10:13] People are still getting to grips with the intellectual property implications of these generative models. But there are still, I believe, many realms and domains where we haven't had time yet to explore what these models can do. And the thing that concerns me the most is that while we're still understanding that, the next generation is being developed. To me, it seems a little bit like immediately after the
[00:10:33] Model T was invented, jumping straight to a national highway system with cars that can go 80 miles an hour without having the time to think about what regulations would be needed along the way. The letter you signed calls for a pause in the development of some of the most advanced AI technology. Why a pause? What would that achieve? So the pause, if enforceable, would give time for the dust to settle, really, on what are these potential implications of these models.
[00:11:02] And so, you know, the pause would, for one thing, give the academic community a chance to educate the general public about what to expect from these models. They're fantastic tools, but it's very easy and natural for people to give them more credit than they deserve, to expect things from them that they're not capable of. You know, I think there's sort of a need for some time for everybody to understand how they can be regulated. That's sort of called for in the letter as well, to let governments and society respond. I should be clear that your letter is not directed at a government agency. You're asking these tech companies to police themselves, to sort of hit the brake themselves. But these companies are locked in a race to develop the most advanced technology.
[00:11:48] What incentive do they have to heed your warnings? So I think there is no incentive other than the agreement or the moral compass, as is mentioned in the letter, of the people who are doing the development. And we're not likely to see the effect that the letter is directly calling for, but I think what it is going to do is raise public awareness of the need for understanding and the need for, if possible,
[00:12:15] taking some steps to sort of slow down and think a little bit more soberly about the next step before racing, as you said, to be the first to generate the next bigger model. Are you excited about the potential of artificial intelligence technology? Oh, absolutely. This is a fantastic time to be in the field of artificial intelligence. There are really exciting things happening. And I would not at all be in favor of stopping research on artificial intelligence. I identify very much with the statement in the letter that humanity can
[00:12:44] enjoy a flourishing future with artificial intelligence, but I don't think it'll happen automatically. I think we need to think very carefully about what we should do, not just what we can do, when it comes to AI development. If we do it correctly, I think the world's going to become a much better place as a result of progress in artificial intelligence. That was my colleague Adrian Florido speaking with Peter Stone from the University of Texas at Austin. At the top of this episode, you heard reporting on AI and disinformation from NPR's Shannon Bond. Find a link to more in our episode notes.
[00:13:18] It's Consider This from NPR. I'm Ailsa Chang. Support for NPR and the following message come from the Kauffman Foundation, providing access to opportunities that help people achieve financial stability, upward mobility, and economic prosperity, regardless of race, gender, or geography. Kauffman.org.
