TED Talks Daily - Is AI progress stuck? | Jennifer Golbeck

Episode Date: November 23, 2024

Will progress in artificial intelligence continue to accelerate, or have we already hit a plateau? Computer scientist Jennifer Golbeck interrogates some of the most high-profile claims about the promises and pitfalls of AI, cutting through the hype to clarify what's worth getting excited about — and what isn't.

Transcript
Starting point is 00:00:00 You're listening to TED Talks Daily, where we bring you new ideas to spark your curiosity every day. I'm your host, Elise Hu. You've heard so much about advances in AI in the past year, and a lot of it right here on this show. It's vital to talk about, especially when we hear warnings about how AI could overtake human intelligence and destroy civilization.
Starting point is 00:00:30 In her 2024 talk, computer scientist and AI researcher Jennifer Golbeck asks us to take a step back first. She cuts through the hype to clarify what is worth worrying about and what isn't when it comes to AI. It's coming up after the break. Support for this show comes from Airbnb. If you know me, you know I love staying in Airbnbs
Starting point is 00:00:54 when I travel. They make my family feel most at home when we're away from home. As we settled down at our Airbnb during a recent vacation to Palm Springs, I pictured my own home sitting empty. Wouldn't it be smart and better put to use welcoming a family like mine by hosting it on Airbnb? It feels like the practical thing to do, and with the extra income I could save up for renovations
Starting point is 00:01:15 to make the space even more inviting for ourselves and for future guests. Your home might be worth more than you think. Find out how much at airbnb.ca slash host. AI keeping you up at night? Wondering what it means for your business? Don't miss the latest season of Disruptors, the podcast that takes a closer look at the innovations reshaping our economy. Join RBC's John Stackhouse and Sonia Sennik from Creative Destruction Lab as they ask bold questions like, why is Canada lagging in AI adoption, and how can it catch up?
Starting point is 00:01:53 Don't get left behind. Listen to Disruptors, The Innovation Era, and stay ahead of the game in this fast-changing world. Follow Disruptors on Apple Podcasts, Spotify, or your favorite podcast platform. And now, our TED Talk of the Day. We've built artificial intelligence already that, on specific tasks, performs better than humans. There is AI that can play chess and beat human grandmasters. But since the introduction of generative AI to the general public a couple years ago,
Starting point is 00:02:25 there's been more talk about artificial general intelligence, or AGI. And that describes the idea that there's AI that can perform at or above human levels on a wide variety of tasks, just like we humans are able to do. And people who think about AGI are worried about what it means if we reach that level of performance in the technology. Right now there are people from the tech industry coming out and saying the AI that we're building is so powerful and dangerous that it poses a threat to civilization, and they're going to government and saying, maybe you need to regulate us. Now normally, when an industry makes a powerful new tool, they don't say it poses an existential threat to humanity
Starting point is 00:03:07 and that it needs to be limited. So why are we hearing that language? And I think there are two main reasons. One is that if your technology is so powerful it can destroy civilization, there's an awful lot of money to be made between now and then. And what better way
Starting point is 00:03:25 to convince your investors to put some money with you than to warn that your tool is that dangerous? The other is that the idea of AI overtaking humanity is truly a cinematic concept. We've all seen those movies. And it's kind of entertaining to think about what that would mean now, with tools that we're actually able to put our hands on. In fact, it's so entertaining that it's a very effective distraction from the real problems already happening in the world because of AI. The more we think about these improbable futures, the less time we spend thinking about how we correct deepfakes, or about the fact that there's AI right now being used to decide whether or not people are let out of prison, and we know it's racially biased. But are we anywhere close to
Starting point is 00:04:13 actually achieving AGI? Some people think so. Elon Musk said that we'll achieve it within a year, but at the same time, Google put out their AI search tool that's supposed to give you the answer so you don't have to click on a link, and it's not going super well. Now, of course, these tools are going to get better. But if we're going to achieve AGI, or if they're even going to fundamentally change the way we work, we need to be in a place where their abilities are continuing on a sharp upward trajectory. That may be one path, but there's also the possibility that what we're seeing is that these tools have basically achieved what they're capable of doing, and the
Starting point is 00:04:57 future is incremental improvements on a plateau. So to understand the AI future, we need to look at all the hype around it, get under there, and see what's technically possible. And we also need to think about which areas we need to worry about and which we don't. So if we want to realize the hype around AI, the one main challenge that we have to solve is reliability. These algorithms are wrong all the time.
Starting point is 00:05:24 And Google actually came out and said, after these bad search results were popularized, that they don't know how to fix this problem. I use ChatGPT every day. I write a newsletter that summarizes discussions on far-right message boards, and so I download that data. ChatGPT helps me write a summary. And it makes me much more efficient
Starting point is 00:05:41 than if I had to do it by hand, but I have to correct it every day, because it misunderstands something or it takes out the context. And so because of that, I can't just rely on it to do the job for me. And this reliability is really important. Now, a subpart of reliability in this space is AI hallucination, a great technical term for the fact that AI just makes stuff up a lot of the time. I did this in my newsletter. I said,
Starting point is 00:06:09 "ChatGPT, are there any people threatening violence? If so, give me the quotes." And it produced these three really clear threats of violence that didn't sound anything like how people talk on these message boards. And I went back to the data, and nobody ever said it. It just made it up out of thin air. And you may have seen this if you've used an AI image generator.
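A minimal sketch of that kind of check, assuming the openai Python client; the model name, prompt wording, and data file are placeholders, and the verification step simply tests whether each returned quote literally appears in the downloaded posts:

```python
# Hypothetical sketch (not Golbeck's actual pipeline): ask a model for
# quotes, then verify each one against the source text, since generative
# models can fabricate quotes that were never in the data.
from openai import OpenAI  # assumes the openai Python package is installed

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def find_threat_quotes(posts: str) -> list[str]:
    """Ask the model for verbatim quotes that threaten violence."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Are there any people threatening violence in these "
                       "posts? If so, give me the exact quotes, one per "
                       "line.\n\n" + posts,
        }],
    )
    return [q.strip() for q in resp.choices[0].message.content.splitlines()
            if q.strip()]

def verify_quotes(quotes: list[str], posts: str) -> list[str]:
    """Keep only quotes that literally appear in the source data."""
    return [q for q in quotes if q.strip('"“” ') in posts]

posts = open("downloaded_posts.txt").read()  # hypothetical data file
quotes = find_threat_quotes(posts)
verified = verify_quotes(quotes, posts)
print(f"{len(quotes)} quotes returned, {len(verified)} actually in the data")
```

The design point: treat generated quotes as candidates to be checked against the source, never as evidence on their own.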
Starting point is 00:06:26 We have to solve this hallucination problem if this AI is gonna live up to the hype. And I don't think it's a solvable problem with the way this technology works. There are people who say we're gonna have it taken care of in a few months, but there's no technical reason to think that's the case.
Starting point is 00:06:44 Because generative AI always makes stuff up. When you ask it a question, it's creating that answer or that image from scratch, right when you ask. It's not like a search engine that goes and finds the right answer on a page. And so because its job is to make things up every time, I don't know that we're going to be able to get it to make up correct stuff and then not make up other stuff.
Starting point is 00:07:08 That's not what it's trained to do. And we're very far from achieving that. And in fact, there are spaces where they're trying really hard. One space where there's a lot of enthusiasm for AI is the legal area, where they hope it will help write legal briefs or do research. Some people have found out the hard way that they should not write legal briefs right now with ChatGPT and send them to federal court,
Starting point is 00:07:28 because it just makes up cases that sound right, and that's a really fast way to get a judge mad at you and get your case thrown out. Now, there are legal research companies that advertise hallucination-free generative AI. I was really dubious about this, and researchers at Stanford actually went in and checked, and they found that the best-performing of these hallucination-free tools still hallucinates 17% of the time. So on one hand, it's a great scientific achievement that we have built a tool that we can pose basically
Starting point is 00:08:07 any query to, and 60 or 70 or maybe even 80% of the time, it gives us a reasonable answer. But if we're gonna rely on using those tools and they're wrong 20 or 30% of the time, there's no model where that's really useful.
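As a toy calculation (the numbers are illustrative, not from the talk), here is how per-answer reliability compounds when a task chains several answers together:

```python
# Toy illustration: per-answer reliability compounds across a multi-step task.
per_answer = 0.80               # assumed chance a single answer is correct
for steps in (1, 3, 5, 10):
    print(f"{steps:2d} answers chained: {per_answer ** steps:.0%} end-to-end")
# 1 -> 80%, 3 -> 51%, 5 -> 33%, 10 -> 11%
```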
Starting point is 00:08:34 And that kind of leads us into the question of how we make these tools that useful. Because even if you don't believe me, and you think we're going to solve this hallucination problem and the reliability problem, the tools still need to get better than they are now. And there are two things they need for that: one is lots more data, and two, the technology itself has to improve. And now back to the episode. So where are we going to get that data? Because they've kind of taken all the reliable stuff online already, and if we were to find twice as much data as they've already had, that doesn't mean they're going to be twice as smart. I don't know if there's enough data out there, and it's compounded by the fact that one way generative AI has been very successful is in producing low-quality content online.
Starting point is 00:09:13 That's bots on social media, misinformation, and these SEO pages that don't really say anything but have a lot of ads and come up high in the search results. And if the AI starts training on pages that it generated, we know from decades of AI research that the models just get progressively worse. It's like the digital version of mad cow disease.
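A toy simulation of that feedback loop, an illustration rather than anything from the talk: fit a simple model to data, sample new data from the fit, refit on the samples, and repeat.

```python
# Toy model-collapse demo: each "generation" is trained only on samples
# produced by the previous generation's fitted model.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # samples per generation (assumed)
data = rng.normal(0.0, 1.0, n)           # generation 0: real data

for gen in range(1, 201):
    mu, sigma = data.mean(), data.std()  # "train": fit mean and spread
    data = rng.normal(mu, sigma, n)      # next generation: purely synthetic
    if gen % 50 == 0:
        print(f"generation {gen:3d}: fitted std = {sigma:.3f}")
# The fitted spread tends to drift toward zero: every refit on synthetic
# data loses a little variance, so detail drains out over generations.
```

Real model collapse in large language models is analogous but far more complex; this only shows the basic mechanism of training on your own output.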
Starting point is 00:09:38 Let's say we solve the data problem. You still have to get the technology better, and we've seen $50 billion invested in improving generative AI in the last couple of years. That's resulted in $3 billion in revenue, so that's not sustainable. But of course, it's early, right? Companies may find ways to start using this technology, but is it going to be valuable enough to justify the tens, maybe hundreds of billions of dollars
Starting point is 00:10:01 of hardware that needs to be bought to make these models get better? I don't think so. And we can kind of start looking at practical examples to figure that out. And it leads us to think about which areas we need to worry about and which we don't. Because one thing that everybody's worried about with this
Starting point is 00:10:17 is that AI is gonna take all of our jobs. Lots of people are telling us that's gonna happen, and people are worried about it. And I think there's a fundamental misunderstanding at the heart of that. So imagine this scenario: we have a company, and they can afford to employ two software engineers. And if we were to give those software engineers
Starting point is 00:10:34 some generative AI to help write code, which is something it's pretty good at, let's say they're twice as efficient. That's a big overestimate, but it makes the math easy. So in that case, the company has two choices. They could fire one of those software engineers, because the other one can do the work of two people now. Or, since they already could afford two of them, and now those two are twice as efficient, they're bringing in more money.
Starting point is 00:10:58 So why not keep both of them and take that extra profit? The only way this math fails is if the AI is so expensive that it's not worth it. But that would mean the AI costing something like $100,000 a year to do one person's worth of work, which sounds really expensive. And practically, there are already open-source versions of these tools, at low cost,
Starting point is 00:11:21 that companies can install and run themselves. Now, they don't perform as well as the flagship models, but if they're half as good and really cheap, wouldn't you take those over the one that costs $100,000 a year to do one person's work? Of course you would.
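Her two-engineer scenario reduces to a few lines of arithmetic. In this sketch, every figure is an assumption chosen only to mirror the example:

```python
# Toy version of the two-engineer argument. Every figure is an assumption,
# not real data.
salary = 100_000        # annual cost of one engineer (assumed)
output_value = 150_000  # annual value one engineer produces (assumed)
speedup = 2             # AI efficiency multiplier (her "big overestimate")
ai_cost = 5_000         # annual AI tooling cost per engineer (assumed)

# Option A: fire one engineer; the remaining one does two people's work.
profit_fire_one = speedup * output_value - (salary + ai_cost)           # 195,000

# Option B: keep both engineers, each producing twice as much.
profit_keep_both = 2 * speedup * output_value - 2 * (salary + ai_cost)  # 390,000

print(f"fire one engineer: {profit_fire_one:,}")
print(f"keep both:         {profit_keep_both:,}")
```

Keeping both wins whenever one engineer's output is worth more than a salary plus the AI's cost, which is exactly her point: the efficiency gain is an argument for keeping workers, not cutting them.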
Starting point is 00:11:46 And so even if we solve reliability, solve the data problem, and make the models better, the fact that there are cheap versions of these tools available suggests that companies aren't going to be spending hundreds of millions of dollars to replace their workforce with AI. There are areas where we do need to worry, though, because if we look at AI now, there are lots of problems that we haven't been able to solve. I've been building artificial intelligence for over 20 years, and one thing we know is that if we train AI on human data, the AI adopts human biases,
Starting point is 00:12:07 and we have not been able to fix that. We've seen those biases start showing up in generative AI, and the gut reaction is always, well, let's just put in some guardrails to stop the AI from doing the biased thing. But one, that never fixes the bias because the AI finds a way around it. And two, the guardrails themselves can cause problems.
Starting point is 00:12:27 So Google has an AI image generator, and they tried to put guardrails in place to stop the bias in the results. And it turned out that made the results wrong. And so in trying to stop the bias, we end up creating more reliability problems. We haven't been able to solve this problem of bias, and if we're thinking about deferring decision-making to this technology, replacing human decision-makers, and relying on it when we can't solve this problem,
Starting point is 00:12:54 that's the thing that we should worry about and demand solutions to, before it's just widely adopted and employed because it's sexy. And I think there's one final thing that's missing here, which is that our human intelligence is not defined by our productivity at work. At its core, it's defined by our ability to connect with other people, our ability to have emotional responses, to take our past and integrate it with new information and creatively come up with new things. And that's something that artificial intelligence is not now, nor will it ever be, capable of doing. It may be able to imitate
Starting point is 00:13:29 it and give us a cheap facsimile of genuine connection and empathy and creativity, but it can't do those things that are core to our humanity. And that's why I'm not really worried about AGI taking over civilization. But if you come away from this disbelieving everything I have told you, and right now you're worried about humanity being destroyed by AI overlords, the one thing to remember is, despite what the movies have told you, if it gets really bad, we still can always just turn it off. Thank you.
Starting point is 00:14:41 That was Jennifer Golbeck at TEDxMidAtlantic in 2024. If you're curious about TED's curation, find out more at ted.com slash curation guidelines. And that's it for today. TED Talks Daily is part of the TED Audio Collective. This episode was produced and edited by our team, Martha Estefanos, Oliver Friedman, Brian Green, Autumn Thompson, and Alejandra Salazar. It was mixed by Christopher Faisy-Bogan. Additional support from Emma Topner and Daniela Ballarezo. I'm Elise Hu. I'll be back tomorrow with a fresh idea for your feed.
Starting point is 00:15:21 Thanks for listening.
