Today, Explained - …We’re trusting it anyway

Episode Date: September 4, 2023

Tech companies are racing to make new, transformative AI tools, with little to no safeguards in place. This is the second episode of “The Black Box,” a two-part series from Unexplainable. This episode was reported and produced by Noam Hassenfeld, edited by Brian Resnick and Katherine Wells with help from Meradith Hoddinott, and fact-checked by Tien Nguyen. It was mixed and sound designed by Vince Fairchild with help from Cristian Ayala. Music by Noam Hassenfeld. Transcript at vox.com/todayexplained. Support Today, Explained by making a financial contribution to Vox! bit.ly/givepodcasts. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcript
Starting point is 00:00:00 What's big and scary and has 72% of American voters in agreement? Artificial intelligence. Vox's Sigal Samuel recently wrote about a YouGov poll of 1,001 Americans. 42% of the respondents aligned themselves with Donald Trump. 47% aligned themselves with Joe Biden. But 72% of them said they would prefer we slow down the development of AI, while only 8% of them said speed it up. And there's good news for the 8%.
Starting point is 00:00:34 More and more tech companies and entertainment conglomerates and especially militaries want to figure out how they can advance AI, even integrating it into their decision-making. But it turns out, getting AI to do what you want, the way you want it, is a lot harder than it seems. Our friends at Unexplainable are going to help us cope on Today Explained. BetMGM, authorized gaming partner of the NBA, has your back all season long. From tip-off to the final buzzer, you're always taken care of with a sportsbook born in Vegas. That's a feeling you can only get with BetMGM. And no matter your team, your favorite player, or your style, there's something every NBA fan will love about BetMGM.
Starting point is 00:01:21 Download the app today and discover why BetMGM is your basketball home for the season. Raise your game to the next level this year with BetMGM, a sportsbook worth a slam dunk, an authorized gaming partner of the NBA. BetMGM.com for terms and conditions. Must be 19 years of age or older to wager. Ontario only.
Starting point is 00:01:41 Please play responsibly. If you have any questions or concerns about your gambling or someone close to you, please contact Connex Ontario at 1-866-531-2600 to speak to an advisor free of charge. BetMGM operates pursuant to an operating agreement with iGaming Ontario. Today, Today Explained. It's Today Explained. It's Today Explained. Sean Rameswaram here to hand things off to Noam Hassenfeld from Unexplainable, who wants to tell you a story.
Starting point is 00:02:13 It's the story of a little boat. Specifically, a boat in this retro-looking online video game. It's called Coast Runners, and it's a pretty straightforward racing game. There are these power-ups that give you points if your boat hits them. There are obstacles to dodge. There are these kind of lagoons where your boat can get all turned around. And a couple of years ago, the research company OpenAI wanted to see if they could get an AI to teach itself how to get a high score on the game without being explicitly told how. We are supposed to train a boat
Starting point is 00:02:46 to complete a course from start to finish. This is Dario Amode. He used to be a researcher at OpenAI. Now he's the CEO of another AI company called Anthropic. I remember studying it running, just telling it to teach itself, and I figured that it would learn to complete the course. Dario had the AI run tons of simulated races over and over,
Starting point is 00:03:08 but when he came back to check on it, the boat hadn't even come close to the end of the track. What it does instead, this thing that's been looping, is it finds this isolated lagoon, and it goes backwards in the course. The boat wasn't just going backwards in this lagoon. It was on fire, covered in pixelated flames, crashing into docks and other boats and just spinning around in circles.
Starting point is 00:03:43 But somehow the AI's score was going up. Turns out that by spinning around in this isolated lagoon in exactly the right way, it can get more points than it could possibly ever have gotten by completing the race in the most straightforward way. When he looked into it, Dario realized that the game didn't award points for finishing first. For some reason, it gave them out for picking up power-ups. Every time you get one, you increase your score, and they're kind of laid out mostly linearly along the course. But this one lagoon was just full of these power-ups, and the power-ups would regenerate after a couple seconds.
Starting point is 00:04:12 So the AI learned to time its movement to get these power-ups over and over by spinning around and exploiting this weird game design. There's nothing wrong with this in the sense that we asked it to find a solution to a mathematical problem, how do you get the most points, and this is how it did it. But, you know, if this was a passenger ferry or something, you wouldn't want it spinning around, setting itself on fire, crashing into everything.
Starting point is 00:04:38 This boat game might seem like a small, glitchy example, but it illustrates one of the most concerning aspects of AI. Like this game, our world isn't perfectly designed. So if scientists don't account for every small detail in our society when they train in AI, it can solve problems in unexpected ways, sometimes even harmful ways. So given the risks here, that AI can solve problems in ways its designers don't intend, it's easy to wonder why anyone would want to use AI to make decisions in the first place. It's because of all this promise. Here's just a couple examples. Last year, an AI built by Google predicted almost all known protein structures. It was
Starting point is 00:05:22 a problem that had frustrated scientists for decades, and this development has already started accelerating drug discovery. AI has helped astronomers detect undiscovered stars. It's allowed scientists to make progress on decoding animal communication. And like we talked about last week, it was able to beat humans at Go, arguably the most complicated game ever made. The powerful and compelling thing about AI, you know, when it's playing Go, is sometimes it will tell you a brilliant Go move that you would never have thought of, that no Go master would ever have thought of, that does advance your goal of winning the game. This is Kelsey Piper. She's a reporter for Vox who we heard from last episode. And she says this kind of innovation is really useful,
Starting point is 00:06:09 at least in the context of a game. But when you're operating in a very complicated context like the world, then those brilliant moves that advance your goals might do it by having a bunch of side effects or inviting a bunch of risks that you don't know, don't understand, and aren't evaluating. Essentially, there's always that risk of the boat on fire. And we've already seen this kind of thing happen outside a video game. Just take the example of Amazon. So Amazon tried to use an AI hiring algorithm, looked at candidates and then recommended which ones would proceed to the interview process. Amazon fed this hiring AI 10 years worth of submitted resumes, and they told it to
Starting point is 00:06:46 find patterns that were associated with stronger candidates. And then an analysis came out finding that the AI was biased. It had learned, you know, that Amazon generally preferred to hire men, so it was happily more likely to recommend Amazon men. Amazon never actually used this AI in the real world. They only tested it. But a report by Reuters found exactly which patterns the AI might have internalized. The technology thought, oh, Amazon doesn't like
Starting point is 00:07:11 any resume that has the word women's in it. An all-women's university, captain of a women's chess club, captain of a women's soccer team. Essentially, when they were training their AI, Amazon hadn't accounted for their own flaws in how they'd been measuring success internally. Kind of like how OpenAI hadn't accounted for the way the boat game gave out points based on power-ups, not based on who finished first. And, of course, when Amazon realized that, they took the AI out of their process. But it seems like they might be getting back in the AI hiring game. According to an internal document obtained by former Vox reporter Jason Del Rey, Amazon's been working on a new AI system for recruitment. At the same time, they've been extending buyout offers to hundreds of human recruiters.
Starting point is 00:07:57 And these flaws aren't unique to hiring AIs. The way AIs are trained has led to all kinds of problems. Take what happened with Uber in 2018, when they didn't include jaywalkers in the training data for their self-driving cars. And then a car killed a pedestrian. Tempe, Arizona police say 49-year-old Elaine Herzberg was walking a bicycle across a busy thoroughfare frequented by pedestrians Sunday night. She was not in a crosswalk.
Starting point is 00:08:24 And a similar thing happened a few years ago with a self-training AI Google used in its Photos app. The company's automatic image recognition feature in its photo application identified two black persons as gorillas and in fact even tagged them as so. According to some former Google employees, this may have happened because Google had a biased dataset. They may just not have included enough Black people. The worrying thing is if you're using AIs to make decisions
Starting point is 00:08:50 and the data they have reflects our own biased processes, like a biased justice system that sends some people to prison for crimes where it lets other people off with the slap on the wrist, or a biased hiring process, then the AI is going to learn the same thing. But despite these risks, more companies are using AI to guide them in making important decisions. This is changing very fast. Like, there are a lot more companies doing this now than there were even a year ago, and there will be a lot more in a couple more years. Companies see a lot of benefits here. First, on a simple level, AI is cheap. Systems like ChatGPT are currently being heavily subsidized by investors, but at least for now, AI is way cheaper than hiring real people. If you want to look over
Starting point is 00:09:37 thousands of job applicants, AI is cheaper than having humans screen those thousands of job applicants. If you want to make salary decisions, those get done by algorithm because it's easier to fire who the algorithm spits out than to have human judgment and human analysis in the picture. And even if companies know that AI decision-making can lead to boat-on-fire situations, Kelsey says they might be okay with that risk.
Starting point is 00:10:02 It's so much cheaper that that's like a good business trade-off. And so we hand off more and more decision-making to AI systems for financial reasons. The second reason behind this push to use AI to make decisions is because it could offer a competitive advantage. Companies that are employing AI in a very winner-take-all capitalist market, they might outperform the companies that are still relying on expensive human labor. And the companies that aren't are much more expensive, so fewer people want to work with them and they're a smaller share of the economy.
Starting point is 00:10:34 And you might have huge economic behemoths that are making decisions almost entirely with AI systems. Kelsey says competitive pressure is even leading the military to look into using AI to make decisions. I think there is a lot of fear that the first country to successfully integrate AI into its decision-making will have a major battlefield advantage
Starting point is 00:10:55 over anyone still relying on slow humans. And that's the driver of a lot in the military, right? If we don't do it, somebody else will, and maybe it will be a huge advantage. This kind of thing may have already happened in actual battlefields. In 2021, a UN panel determined that an autonomous Turkish drone may have killed Libyan soldiers without a human
Starting point is 00:11:17 controlling it or even ordering it to fire. And lots of other countries, including the U.S., are actively researching AI-controlled weapons. You don't want to be the people, you know, still fighting on horses when someone else has invented fighting with guns. And you don't want to be the people who don't have AI when the other side has AI. So I think there is this very powerful pressure not just to figure this out, but to have it ready to go. And finally, the third reason behind the push toward AI decision-making is because of the promise we talked about at the top. AI can provide novel solutions for problems humans might not be able to solve on their
Starting point is 00:11:52 own. Just look at the Department of Defense. They're hoping to build AI systems that, quote, function more as colleagues than as tools. And they're studying how to use AI to help soldiers make extremely difficult battlefield decisions, specifically when it comes to medical triage. I'm going to talk about how we can build AI-based systems that we would be willing to bet our lives with and not be foolish to do so. AI has already shown an ability to beat humans in war game scenarios, like with the board game
Starting point is 00:12:21 Diplomacy. And researchers think this ability could be used to advise militaries on bigger decisions like strategic planning. Cybersecurity expert Matt DeVos talked about this on a recent episode of On the Media. I think it'll probably get really good at threat assessment. I think analysts might also use it to help them through their thinking, right? They might come up with an assessment and say, tell me how I'm wrong. So I think there'll be a lot of unique ways in which the technology is used in the intelligence community. But this whole time, that boat on fire possibility is just lurking. One of the things that makes AI so promising, the novelty of its solutions, it's also the thing that makes it so hard to predict.
Starting point is 00:13:10 Kelsey imagines a situation where AI recommendations are initially successful, which leads humans to start relying on them uncritically, even when the recommendations seem counterintuitive. Humans might just assume the AI sees something they don't, so they follow the recommendation anyway. We've already seen something like this happen in a game context with AlphaGo, like we talked about last week. So the next step is just imagining it happening in the world. And we know that AI can have fundamental flaws, things like biased training data or strange loopholes engineers haven't noticed. But powerful actors relying on AI for decision making might not notice these faults until it's too late. And this is before we get into the AI, like, being deliberately adversarial.
Starting point is 00:13:51 This isn't the Terminator scenario with AI becoming super intelligent and wanting to kill us. The problem is more about humans and our temptation to rely on AI uncritically. This isn't the AI trying to trick you. It's just the AI exploring options that no one would have thought of that get us into weird territory that no one has been in before. And since they're so untransparent, we can't even ask the AI,
Starting point is 00:14:17 hey, what are the risks of doing this? So if it's hard to make sure that AI operates in the way its users intend, and more institutions feel like the benefits of using AI to make decisions might outweigh the risks, what do we do? There's a lot that we don't know, but I think we should be changing the policy and regulatory incentives so that we don't have to learn from a horrible disaster, and so that we understand the problem better and can start making progress on solving it. How to start solving a problem that you don't understand in a minute on Today Explained. They're digital picture frames. They were named the number one digital photo frame by Wirecutter. Aura frames make it easy to share unlimited photos and videos directly from your phone to the frame. When you give an aura frame as a gift, you can personalize it.
Starting point is 00:15:32 You can preload it with a thoughtful message, maybe your favorite photos. Our colleague Andrew tried an aura frame for himself. So setup was super simple. In my case, we were celebrating my grandmother's birthday. And she's very fortunate. She's got 10 grandkids. And so we wanted to surprise her with the order frame. And because she's a little bit older, it was just easier for us to source all the images together and have them uploaded to the frame itself. And because we're all connected over text message, it was just so easy to send a link to everybody. You can save on the perfect gift by visiting
Starting point is 00:16:10 AuraFrames.com to get $35 off Aura's best-selling Carvermat frames with promo code EXPLAINED at checkout. That's A-U-R-A-Frames.com promo code EXPLAINED. This deal is exclusive to listeners and available just in time for the holidays. Terms and conditions do apply. The all-new FanDuel Sportsbook and Casino is bringing you more action than ever. Want more ways to follow your faves? Check out our new player prop tracking with real-time notifications. Or how about more ways to customize your casino page with our new favorite and recently played games tabs. And to top it all off, quick and secure withdrawals.
Starting point is 00:16:45 Get more everything with FanDuel Sportsbook and Casino. Gambling problem? Call 1-866-531-2600. Visit connectsontario.ca. Today Explained is back and we're deep into part two of the Black Box series from Unexplainable, hosted by Noam Hassenfeld. And right now, we've got a whole lot of unknowns. Unknowns on unknowns on unknowns. So what do we do? I would say at this point, it's sort of unclear.
Starting point is 00:17:24 Sigal Samuel writes about AI and ethics for Vox, and she's about as confused as the rest of us here. But she says there's a few different things we can work on. The first one is interpretability, just trying to understand how these AIs work. But like we talked about last week, interpreting modern AI systems is a huge challenge. Part of how they're so powerful and they're able to give us info that we can't just drum up easily ourselves is that they're so complex.
Starting point is 00:17:51 So there might be something almost inherent about lack of interpretability being an important feature of AI systems that are going to be much more powerful than my human brain. So interpretability may not be an easy way forward. But some researchers have put forward another idea, monitoring AIs by using more AIs, at the very least just to alert users if AIs seem to be behaving kind of erratically. But it's a little bit circular because then you have to ask, well, how would we be sure that our helper AI is not tricking us in the same way that we're worried our original AI is doing? So if these kind of tech-centric solutions aren't the way forward,
Starting point is 00:18:31 the best path could be political, just trying to reduce the power and ubiquity of certain kinds of AI. A great model for this is the EU, which recently put forward some promising AI regulation. The European Union is now trying to put forward these regulations that would basically require companies that are offering AI products in especially high-risk areas to prove that these products are safe. This could mean doing assessments for bias, requiring humans to be involved in the process of creating and monitoring these systems, or even just trying to reasonably demonstrate that the AI won't cause harm. We've unwittingly bought this premise that they can just bring anything to market when we would never do that for other similarly impactful technologies.
Starting point is 00:19:19 Like, think about medication. You've got to get your FDA approval. You've got to jump through these hoops. Why not with AI? Why not with AI? Well, there's a couple reasons regulation might be pretty hard here. First, AI is different from something like a medication that the FDA would approve. The FDA has clear, agreed-upon hoops to jump through. Clinical testing. That's how they assess the dangers of a medicine before it goes out into the world. But with AI, researchers often don't know what it can do until it's been made public. And if even the experts are often in the dark, it may not be possible to prove to regulators that AI is safe. The second problem here is that even aside from AI, big tech regulation doesn't exactly
Starting point is 00:20:02 have the greatest track record of really holding companies accountable, which might explain why some of the biggest AI companies like OpenAI have actually been publicly calling for more regulation. The cynical read is that this is very much a repeat of what we saw with a company like Facebook, now Meta, where people like Mark Zuckerberg were going to Washington, D.C. and saying, oh, yes, we're all in favor of regulation. We'll help you. We want to regulate, too. When they heard this, a lot of politicians said they thought Zuckerberg's proposed changes were vague and essentially self-serving, that he just wanted to be seen supporting the rules, rules which he never really thought would hold them accountable.
Starting point is 00:20:42 Allowing them to regulate in certain ways, but where really they maintain control of their data sets. They're not being super transparent and having external auditors. So really, they're getting to continue to drive the ship and make profits while creating the semblance that society or politicians are really driving the ship. Regulation with real teeth seems like such a huge challenge that one major AI researcher even wrote an op-ed in Time magazine calling for an indefinite ban on AI research, just shutting it all down. But Seagal isn't sure that's such a good idea.
Starting point is 00:21:18 I mean, I think we would lose all the potential benefits it stands to bring. So drug discovery, you know, cures for certain diseases, potentially huge economic growth that if it's managed wisely, big if, could help alleviate some kinds of poverty. I mean, at least potentially it could do a lot of good. And so you don't necessarily want to throw that baby out with the bathwater. At the very least, Seagal does want to turn down the faucet. I think the problem is we are rushing at breakneck speed towards more and more advanced forms of AI when the AIs that we already currently have, we don't even know how they're working. When ChatGPT launched, it was the fastest publicly deployed technology in history.
Starting point is 00:22:04 Twitter took two years to reach a million users. Instagram took two and a half months. ChatGPT took five days. And there are so many things researchers learned ChatGPT could do only after it was released to the public. There's so much we still don't understand about them. So what I would argue for is just slowing down. Slowing down AI could happen in a whole bunch of different ways.
Starting point is 00:22:27 So you could say, you know, we're going to stop working on making AI more powerful for the next few years, right? We're just not going to try to develop AI that's got even more capabilities than it already has. AI isn't just software. It runs on huge, powerful computers. It requires lots of human labor. It costs tons of money to make and operate, even if those costs are currently being subsidized by investors. So the government could make it harder to get the types of computer chips necessary
Starting point is 00:22:58 for huge processing power. Or it could give more resources to researchers in academia who don't have the same profit incentive as researchers in industry. You could also say, all right, we understand researchers are going to keep doing the development and trying to make these systems more powerful, but we're going to really halt or slow down deployment and release to commercial actors or whoever. Slowing down the development of a transformative technology like this, it's a pretty big ask, especially when there's so much money to be made. It would mean major
Starting point is 00:23:29 cooperation, major regulation, major complicated discussions with stakeholders that definitely don't all agree. But Seagal isn't hopeless. I'm actually reasonably optimistic. I'm very worried about the direction AI is going in. I think it's going way too fast. But I also try to look at things with a bit of a historical perspective. Seagal says that even though tech progress can seem inevitable,
Starting point is 00:23:58 there is precedent for real global cooperation. We know historically there are a lot of technological innovations that we could be doing that we're not because societally it just seems like a bad idea. Human cloning or like certain kinds of genetic experiments, like humanity has shown that we are capable of putting a stop or at least a slowdown on things that we think are dangerous. But even if guardrails are possible, our society hasn't always been good about building them when we should.
Starting point is 00:24:29 The fear is that sometimes society is not prepared to design those guardrails until there's been some huge catastrophe, like Hiroshima, Nagasaki, just horrific things that happen. And then we pause and we say, OK, maybe we need to go to the drawing board, right? That's what I don't want to have happen with AI. We've seen this story play out before. Tech companies or technologists essentially run mass experiments on society. We're not prepared. Huge harms happen. And then afterwards we start to catch up and we say, oh, we shouldn't let that catastrophe happen again. I want us to get out in front of the catastrophe. Hopefully that will be by slowing down the whole AI race. If people are not willing to slow down, at least let's get in
Starting point is 00:25:18 front by trying to think really hard about what the possible harms are and how we can use regulation to really prevent harm as much as we possibly can. That was Seagal Samuel from Vox. Before that, you were hearing from Noam Hassenfeld, also of Vox and Unexplainable from the Vox Media Podcast Network. These past two shows we brought you were shorter versions of their two-part series called The Black Box. Go find the full versions in the Unexplainable feed. Regular programming on Today Explained resumes tomorrow. Happy Labor Day.ご視聴ありがとうございました
