The Journal - Why an AI Pioneer Is Worried

Episode Date: December 19, 2023

Yoshua Bengio, known as a godfather of AI, is one of hundreds of researchers and tech leaders calling for a pause in the breakneck development of powerful new AI tools. We talk to the AI pioneer about... how the tools evolved and why he's worried about their potential.

Further Listening:
- Artificial: Episode 1, The Dream
- Artificial: Episode 2, Selling Out
- OpenAI’s Weekend of Absolute Chaos

Further Reading:
- How Worried Should We Be About AI’s Threat to Humanity? Even Tech Leaders Can’t Agree
- ‘Take Science Fiction Seriously’: World Leaders Sound Alarm on AI

Transcript
[00:00:00] This year, one thing that a lot of people have been talking about is AI, artificial intelligence. All right, artificial intelligence has been the story of 2023, right? I had ChatGPT write my graduation speech. A groundbreaking year for artificial intelligence. Today, I let ChatGPT control an entire college day of my life. And because so many of us are new to AI, we wanted to call up someone who's been thinking about it for decades. And you are referred to as one of the godfathers of AI. Is this a title that you like, that you use? No. That's Yoshua Bengio,
[00:00:51] a professor at the University of Montreal. He's considered one of the godfathers of AI because his work helped lay the foundation for many of the AI models in use today. Why did you get into AI? Intelligence has always sounded so incredible and mysterious and important, because it's what's special about us and what allows us to do great things. Understanding that and building computers that would be intelligent, that was so exciting. Still is. And now I'm concerned.
[00:01:33] Yoshua has spent his entire career working to advance AI technology. Now he thinks the technology is moving too fast. And just saying, oh, everything is going to be fine and AI is useful, yeah, it could be extremely useful. But the more powerful it is, the more useful it can be and the more dangerous it can be. It just goes together. It's hand in hand. Welcome to The Journal, our show about money, business, and power. I'm Kate Linebaugh. It's Tuesday, December 19th. Coming up on the show, one of AI's godfathers on why he's worried about the technology he helped create.
[00:02:54] Yoshua Bengio has long been interested in the relationship between humans and computers. And we heard that you read a lot of sci-fi as a kid. Is that right? That's true, yes. What kind of sci-fi? Well, the old stuff, you know. I was a kid in the 70s and early 80s. Okay. Did that have anything to do with you getting into AI? Yeah, probably. So, for example, I read the whole series of novels by Asimov, I, Robot and company, all the Bradbury novels and so on.
[00:03:32] AI is very prominent in science fiction, so it was sort of a fantasy when I was a kid, but then when I studied computers at university and started to read real scientific papers about AI, I realized that maybe it was something actually possible. For a lot of AI researchers, the goal has been to create an intelligent computer, where the computer would accomplish the same intellectual tasks that humans do. And Yoshua's theory for how to do this was inspired by how the human brain processes information. This approach, known as deep learning,
[00:04:15] was on the fringes of AI research for a long time. But Yoshua believed that understanding human intelligence was the key to building a machine that could learn. We are born with some knowledge that's wired in, you know, from our genome, but most of what we have in our brain comes from our own experience. And so there's a lot that we could learn about how humans learn, how animals learn, in order to design, you know, this new approach to artificial intelligence. How would you build such a machine,
[00:04:50] a machine that could learn? You simply get better at anticipating what will come. And of course, in order to do that, you need to build implicitly an understanding of how everything you're seeing is related to each other. That's how we build an understanding of our physical environment, our social environment, our job, the games we play, you know? So think about the games we play. Initially, we don't know how to play. Maybe somebody tells us the rules, but that's not enough to be good. And then we practice. And each time we play, we get a bit better because we see what worked and we see what didn't work.
[00:05:29] So if you wanted to teach a computer what a cat is, what it looks like, how could that work? You just show it lots of cat images. And it would try to kind of anticipate what cat images look like. So, for example, given an image, it could tell you if it's a cat or not. Does it look like the other cat images or it doesn't? That's it. I'm simplifying, but that's basically how they learn what cats look like. And of course, you could do it in parallel for cats, dogs, and, you know, tens of thousands of other categories. And now it recognizes objects.
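To make that concrete, here is a minimal sketch of the kind of training loop Bengio is describing, written in PyTorch. Everything in it is an illustrative assumption rather than anything from the episode: a toy network, 32x32 images, and just two categories, cat or not-cat. The point is only the loop itself: the model guesses, it is scored on how wrong the guess was, and its parameters are nudged so the next guess is a little better.

```python
import torch
import torch.nn as nn

# A tiny convolutional network: stacked layers of learned feature
# detectors, which is the "deep" in deep learning.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3-channel 32x32 image in
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample to 16x16
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),                  # two scores: not-cat, cat
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    """images: (batch, 3, 32, 32) float tensor; labels: 0 = not-cat, 1 = cat."""
    logits = model(images)
    loss = loss_fn(logits, labels)  # how wrong were the guesses?
    optimizer.zero_grad()
    loss.backward()                 # see what worked and what didn't...
    optimizer.step()                # ...and get a little better
    return loss.item()

# One step on a random batch, just to show the shapes line up.
loss = training_step(torch.randn(8, 3, 32, 32), torch.randint(0, 2, (8,)))
```

Scaling to tens of thousands of categories, as Bengio describes, is the same loop with a bigger network and a wider final layer.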
[00:06:07] And this approach of deep learning showed a lot of potential. So we started making progress on these classical AI tasks and starting to beat other methods pretty significantly. It happened gradually in the 2000s, but what happened after 2010, 2012, more precisely, is that companies started seeing this as something that could be very profitable.
[00:06:37] And they started hiring some of us and buying startups, starting to work on this and so on. So it became something industrial, like money was involved. And it's not just some crazy, crazy sort of scientist having fun, trying to figure out intelligence. It was something that could change the world. What did that feel like to you? Well, initially I was quite happy.
[00:07:05] It was like, oh, that's confirmation that we are on the right path. People have been saying that the approach we were following didn't go anywhere. We had trouble getting accepted in the broader community, but suddenly companies were very quickly developing products and suddenly it was working a lot better.
[00:07:26] So you were vindicated. It was great. And my students could get jobs too, right? Well-paid ones too. Tech companies ran with deep learning for new AI systems. Some of these systems were so-called language models. Instead of being trained on, say, cat pictures, they were trained on text. ChatGPT is based on a language model. It uses deep learning to essentially predict the next word in a sequence. This is how the chatbot answers questions, writes code, and does math.
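As a rough picture of what "predict the next word" means, here is a deliberately tiny sketch. It replaces the deep network with simple counting over a made-up sentence, so everything in it is a hypothetical stand-in, but the objective is the one the narrator describes: given the words so far, guess what comes next.

```python
from collections import Counter, defaultdict

def train(text):
    """Count which word follows which -- a stand-in for what a real
    language model learns with a deep network over vast amounts of text."""
    follows = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1  # "experience": record what actually came next
    return follows

def predict_next(follows, word):
    """Return the continuation seen most often in training."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

model = train("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # -> 'cat' (seen most often after 'the')
```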
[00:07:52] When ChatGPT was released, Yoshua's expectations were pretty low. I played a bit with it, and I was trying to nail it, like to find the cases where it would give wrong answers. You know, it would hallucinate, or it wouldn't be able to do some really simple task. I found that it would be very bad at doing simple arithmetic that a 12-year-old could do easily. Do you remember exactly what you asked it? Yeah, yeah. Like adding two- or three-digit numbers.
[00:08:34] Huh. So initially I thought, well, yeah, we're not there. And I still think we're not there. But after a few months of doing this, I realized, but wait, it knows so much stuff. No one knows as much as this machine does. The thing was, even though ChatGPT was making mistakes, Yoshua saw parallels with how we humans think.
[00:09:05] He says the machine was showing something like intuition. Humans also make these mistakes if they don't take the time to think through carefully. So if I ask you to add two three-digit numbers,
[00:09:21] and I ask you to do it really quick, so you don't have time to go through all the usual steps, you're going to give me like a rough thing and it's going to be ballpark reasonable. And that's exactly what you get from ChatGPT. So it's much more like we are. And that's also why, you know, it fails,
[00:09:40] because it only relies on its intuition for everything. It just has a huge set of things on which it has intuition. So you kind of started to see some of yourself, like your humanness in it, in a way. Of course. Of course. What did that feel like? Well, in a way, it's not very surprising because we have been designing these systems based on inspiration from human intelligence. But at the same time, it's really, really important for people to understand that this is an alien intelligence, even though it may have some similarities with our intelligence. It is not human intelligence.
[00:10:19] It will probably never be. It might imitate us. ChatGPT just predicts the next word, right? The probability of the next word, and then it samples one of the possible words according to those probabilities, more precisely. So what I mean is that each time you call it, you might get a different answer, because it's really like a random thing. Just like, if you ask me the same questions, I never answer exactly the same way. It's exactly the same thing for ChatGPT.
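What Bengio describes there, predicting a probability for each possible next word and then drawing one at random, fits in a few lines. The three-word vocabulary and its probabilities below are invented for illustration; a real model scores tens of thousands of words at every step.

```python
import random

# A made-up distribution over possible next words for some prompt;
# a real model assigns a probability to every word in its vocabulary.
next_word_probs = {"cat": 0.5, "dog": 0.3, "tiger": 0.2}

def sample_next_word(probs):
    """Draw one word at random, weighted by its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# The same "prompt" can come back with different words on different
# calls -- which is why ChatGPT rarely answers twice in the same way.
print(sample_next_word(next_word_probs))
print(sample_next_word(next_word_probs))
```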
[00:10:59] But that still feels a long way off from human-level AI. No. I know people say this, but they are wrong. Tell me why. Well, very simple. If you try to predict the next word that I will say, there's like 100,000 words that I could say, roughly. I was going to say a million. Yeah, if you include rare words and proper nouns. So which of these one million words will I say? It's very hard to predict unless you understand what I'm talking about. Let's say I consider a sequence of words
[00:11:25] where it starts with the question, and then there's a question mark, and then there's the answer. The answer is just the next word. So in order to provide that next word correctly, you need to understand the question and what it's about. There needs to be a level of understanding. There is.
[00:11:45] There is a level of understanding. People who are saying that there is no understanding in ChatGPT don't understand what's going on here. Other AI experts disagree with Yoshua's conclusion that ChatGPT has the ability to understand. But Yoshua is very concerned about what he sees happening with AI.
Starting point is 00:12:31 Ooh, must be mating season. And hiking with them. Is that a squirrel? Bear! Run! Collect more moments with more ways to earn. Air Mile. With Uber Reserve, good things come to those who plan ahead.
Starting point is 00:12:48 Family vacay? Reserve your ride as soon as you book your flights. To all the planners, now you can reserve your Uber ride up to 90 days in advance. See Uber app for details. Yashua has seen AI come a long way, and recently he's been worried about how powerful the technology is getting. Some of his worries go back to the kind of sci-fi stories that fascinated him growing up. Some of the sci-fi stories are amazingly to the point of some of those concerns. So one that many people have seen is 2001 Space Odyssey with the HAL 9000 computer. Hello, HAL, do you read me? Do you read me, HAL?
Starting point is 00:13:36 In the movie 2001 A Space Odyssey, a team of astronauts comes to believe that their AI, named HAL, has been malfunctioning. So they decide to turn it off. But HAL anticipates their plan. Open the pod bay doors, HAL. I'm sorry, Dave. I'm afraid I can't do that. And HAL tries to kill them. Of course, it was never programmed to kill people. But as soon as you have a machine that has as a goal to preserve itself, like we do,
Starting point is 00:14:09 and if you think that somebody wants to turn you off, you're probably going to defend yourself. And you think AI could get there? I don't know. But I don't see any reasoning flaw in this story. It's plausible. It's plausible, exactly. That's my concern. Yashua's had other concerns about AI for a while, related to social media and disinformation,
Starting point is 00:14:35 and how AI can manipulate. AI has been used from, you know, in the last decade, heavily for advertising, for being able to target just the right message to you given the information that's available to the computer to make you change your mind. And of course, it's being used not just for advertising but for recommendations, which could be useful
Starting point is 00:14:56 in Facebook or e-commerce, but could also be a little bit disturbing. We don't want to be manipulated sort of behind our back. also be a little bit disturbing. We don't want to be manipulated sort of behind our back. But now if we have machines that can manipulate language as well as us, combine that with the commercial objectives of influencing people one way or another for one reason or another, well that's scary. So I was already concerned about AI in advertising. And then I realized, oh, there's a whole floodgate that's opening up that could be very dangerous. At the start of the year, Yashua's concerns about AI grew more urgent. We have essentially reached that milestone that we have machines that master language.
Starting point is 00:15:43 They don't necessarily have all the human abilities, but they have that one, which is crucial, because it's the entry point into human culture. And that's what makes it so powerful. It can take advantage of everything that's written. We might be much, much closer than we anticipated to human-level intelligence, and this is not what I was envisioning before ChatGPT.
Starting point is 00:16:09 Open AI, the company that makes ChatGPT, didn't respond to our request for comment about Yashua's concerns. Earlier this year, Yashua joined with hundreds of other high-profile AI researchers and tech leaders to write an open letter. They publicly called for a pause in AI tech development. I signed the letter at the end of March saying, oh, why don't we kind of slow down a little bit, make sure we understand what we're doing. Why? Why did you feel like that was an important thing to sign? So I didn't really think that those companies would take a pause, but I thought it was important to speak up and to say that we should be cautious, that we need governments to step in, and I know that it takes time. And so the letter was
Starting point is 00:16:59 not just about this pause thing, but it was alerting the public opinion. The letter calls for more AI regulation and calls for AI research and development to focus on making systems more accurate, safe, and trustworthy. In May, Yashua also signed another letter, which claims that AI poses an extinction risk as great as pandemics and nuclear war. an extinction risk as great as pandemics and nuclear war. So the biggest catastrophe are these like loss of control scenarios, which really we don't understand well, and we need to understand better. But there's another kind of scenario which people don't talk too much about, and that is the excessive power concentration. So what do I mean? Maybe we find the ways to make AI safe,
Starting point is 00:17:50 in other words, that we don't lose control of it, and it does what we want, at least that the human controlling it gets what it wants. First, they could be very rich, so they could acquire economic dominance. And once you're super rich, you can also acquire political power. And eventually, there is an incompatibility between concentration of power and democracy.
[00:18:14] Democracy is about sharing power. If a few people decide everything, that's not democracy. For someone who spent his entire career working to advance AI technology, Yoshua says the shift in his own thinking has felt pretty destabilizing. It's difficult to make that shift because I really, I mean, I had always seen my work as something positive for the world. And starting to think, wait, if we get to human-level intelligence and potentially surpass it, this could be harmful, is hard to digest if you've seen yourself for all your life as working on something essentially good. I've always been saying we should be careful about social impact,
Starting point is 00:19:16 but mostly I was very positive about technology and not like someone who's been spending their life talking about doom, but rather somebody who's been arguing that we should develop more of it in order to help us address many of the challenges that we have. Why is that fear true this time? Because we've reached an unexpected milestone. We are on a trajectory that's going quickly towards human-level intelligence. And I don't think that we have the wisdom and the guardrails to handle this properly. Joshua, thank you.
Starting point is 00:20:04 Pleasure. That's all for today, Tuesday, December 19th. The Journal is a co-production of Spotify and The Wall Street Journal. If you want to learn more about AI, we have a series for you. It's called Artificial, The Open AI Story. The first two episodes are already in your feed, and we're going to drop two more in January. Check it out. We'll link to it in our show notes. Check it out. We'll link to it in our show notes.
Starting point is 00:20:50 Thanks for listening. See you tomorrow.
