Radiolab - Driverless Dilemma

Episode Date: September 15, 2023

Most of us would sacrifice one person to save five. It's a pretty straightforward bit of moral math. But if we have to actually kill that person ourselves, the math gets fuzzy. That's the lesson of the classic Trolley Problem, a moral puzzle that fried our brains in an episode we did almost 20 years ago, then updated again in 2017. Historically, the questions posed by the Trolley Problem are great for thought experimentation and conversations at a certain kind of cocktail party. Now, new technologies are forcing that moral quandary out of our philosophy departments and onto our streets. So today, we revisit the Trolley Problem and wonder how a two-ton hunk of speeding metal will make moral calculations about life and death that still baffle its creators.

Special thanks to Iyad Rahwan, Edmond Awad and Sydney Levine from the Moral Machine group at MIT. Also thanks to Fiery Cushman, Matthew DeBord, Sertac Karaman, Martine Powers, Xin Xiang, and Roborace for all of their help. Thanks to the CUNY Graduate School of Journalism students who collected the vox: Chelsea Donohue, Ivan Flores, David Gentile, Maite Hernandez, Claudia Irizarry-Aponte, Comice Johnson, Richard Loria, Nivian Malik, Avery Miles, Alexandra Semenova, Kalah Siegel, Mark Suleymanov, Andee Tagle, Shaydanay Urbani, Isvett Verde and Reece Williams.

EPISODE CREDITS
Reported and produced by Amanda Aronczyk and Bethel Habte

Our newsletter comes out every Wednesday. It includes short essays, recommendations, and details about other ways to interact with the show. Sign up (https://radiolab.org/newsletter)!

Radiolab is supported by listeners like you. Support Radiolab by becoming a member of The Lab (https://members.radiolab.org/) today. Follow our show on Instagram, Twitter and Facebook @radiolab, and share your thoughts with us by emailing radiolab@wnyc.org

Leadership support for Radiolab's science programming is provided by the Gordon and Betty Moore Foundation, Science Sandbox, a Simons Foundation Initiative, and the John Templeton Foundation. Foundational support for Radiolab was provided by the Alfred P. Sloan Foundation.

Transcript
Starting point is 00:00:00 Hey, it's Latif. Feels like all anyone is talking about these days is AI. We have a few new AI stories in the pipeline that I'm excited about, but in the meantime, I wanted to play you this one. It's a rerun from the old Jad and Robert days, but it's actually not just a rerun, it's a rerun embedded in another rerun, which sounds confusing, but it's actually fascinating, because in addition to all the baseline interesting stuff
Starting point is 00:00:36 in the episode about ethics and human nature, the experience of hearing these stacked reruns actually helps you feel the speed of technology. Like how fast newfangled things become old hat. And then how despite that, despite all of that rapid technological change, how the things we all need to think about, the kind of fundamental questions that we struggle with as humans, pretty much stay the same. And that's, I don't know, that's sort of, it's kind of interesting to hear.
Starting point is 00:01:22 It's pretty humbling, actually. Take a listen. I hope you enjoy it. This episode is called The Driverless Dilemma. You're listening to Radiolab. Radiolab. From WNYC.
Starting point is 00:01:47 I'm Jad Abumrad. I'm Robert Krulwich, and you know what this is. Radiolab. Okay, so we're going to play you a little bit of tape first just to set up what we're going to do today. About a month ago, we were doing the thing about the fake news. Yeah, we're worried about a lot of fake news. A lot of people are, but in the middle of doing that reporting,
Starting point is 00:02:06 we were talking with a fellow from Vanity Fair. My name is Nick Bilton. I'm a special correspondent for Vanity Fair. And in the course of our conversation, Nick, and this had nothing to do with what we were talking about, by the way, Nick just got into a sort of, well, he went into a kind of a nervous reverie, I'd say. Yeah, he was like, you know, you guys wanna talk about fake news,
Starting point is 00:02:25 but that's not actually what's eating at me. The thing that I've been pretty obsessed with lately is actually not fake news, but it's automation and artificial intelligence and driverless cars, because it's going to have a larger effect on society than any technology that I think has ever been created in the history of mankind.
Starting point is 00:02:44 I know that's kind of a bold statement, but. What are you grateful for? But you've got to imagine that, you know, that there will be in the next 10 years, 20 to 50 million jobs that will just vanish to automation. You've got, you know, a million truckers that will lose their jobs. But we think about automation and driverless cars
Starting point is 00:03:06 and we think about the fact that they are going to, the people that just drive the cars, like the taxi drivers and the truckers, are going to lose their jobs. What we don't realize is that there are entire industries that are built around just cars. So for example, if you're not driving the car, why do you need insurance? There are no parking tickets, because your driverless car
Starting point is 00:03:28 knows where it can and can't park and goes and finds a spot and moves and so on. If there are truckers that are no longer using rest stops because driverless cars don't have to stop and pee or take a nap, then all of those little rest stops all across America are affected. People aren't stopping to use the restrooms. They're not buying burgers.
Starting point is 00:03:46 They're not staying in these hotels and so on and so forth. And then you look at driverless cars to the next level, the whole concept of what a car is is going to change. So for example, right now a car has five seats and a wheel. But if I'm not driving, what's the point of having five seats and a wheel? You could imagine that you take different cars. And maybe when I was on my way here to this interview, I wanted to work out. So I called a driverless gym car. Or I have a meeting out in Santa Monica after this and
Starting point is 00:04:13 it's an hour, so I call a movie car to watch a movie on the way out there, or an office car and I pick up someone else and we have a meeting on the way. And all of these things are going to happen not in a vacuum but simultaneously, you know: pizza delivery drivers are going to be replaced by robots that will actually cook your pizza on the way to your house in a little box and then deliver it. And so kind of a little bit of a long way to answer, but I truly do think that it's going to have a massive, massive effect on society. Am I stressing you guys out?
Starting point is 00:04:42 Are you having heart palpitations over there? Yeah, I'm glad, it's gone. So that's a fairly compelling description of a very dangerous future. But you know what, it's funny. We couldn't use that tape initially at least. But we kept thinking about it because it actually weirdly points us back to a story we did about a decade ago. The story of a moral problem that's about to get totally reimagined.
Starting point is 00:05:09 It may be that what Nick is worried about, and what we were worried about 10 years ago, have now come dangerously close together. So what we thought we'd do is we're gonna play you the story as we did it then, sort of the full segment, and then we're going to amend it on the back end. And by way of just describing, this was at a moment in our development where there's
Starting point is 00:05:30 just like way too many sound effects, just gratuitous. We're going to have to apologize for that. No, I'm going to apologize, because there's too much. Too much. And also, like, we talk about the fMRI machine like it's this amazing thing, when it's sort of commonplace now. Anyhow, it doesn't matter.
Starting point is 00:05:47 We're going to play it for you and then talk about it on the back end. We start with a description of something called the trolley problem. You ready? Yep. All right, you're near some train tracks. Go there in your mind. Okay. There are five workers on the tracks working.
Starting point is 00:06:05 They've got their backs turned to the trolley, which is coming in the distance. I mean, they're repairing the track. They are repairing the track. Unbeknownst to them, the trolley is approaching. They don't see it. You can't shout to them. Okay.
Starting point is 00:06:17 And if you do nothing, here's what'll happen. Five workers will die. Oh my god! That was a horrible experience. I don't want that to happen. No you don't, but you have a choice. You can do A, nothing, or B, it so happens next to you is a lever.
Starting point is 00:06:37 Pull the lever and the trolley will jump onto some side tracks where there is only one person working. So if the... No! No! So if the trolley goes on the second track, it will kill the one guy. Yeah, so there's your choice. Do you kill one man by pulling a lever or do you kill five men by doing nothing? Well, I'm going to pull the lever. Naturally.
Starting point is 00:07:00 Alright, here's part two. You stand near some train tracks, five guys are on the tracks, just as before, and there is the trolley coming. The same five guys that were there before? The same five guys. Backs to the train, they can't see anything. Yeah, yeah, exactly. However, I'm going to make a couple of changes: now you're standing on a foot bridge.
Starting point is 00:07:17 That passes over the tracks. You're looking down onto the tracks. There's no lever anywhere to be seen, except next to you, there is a guy. What do you mean there's a guy? A large guy. Large individual standing next to you on the bridge, looking down with you over the tracks, and you realize, wait, I can save those five workers. If I push this man, give him a little tap, he falls onto the tracks, and his bulk stops the train. No, no, no! Oh, yeah, I'm not going to do that. I'm not going to do that. But surely you realize the math is the same. You mean I'll save four people this way?
Starting point is 00:07:51 Yeah. Yeah, but this time I'm pushing the guy. Are you insane? No. All right, here's the thing. If you ask people these questions, and we did starting with the first, is it OK to kill one man to save five using a lever?
Starting point is 00:08:03 Yes. Yes. Yes, yes, yes, yes. But if you ask them, is it okay to kill one man to save five by pushing the guy, nine out of 10 people will say... No, no, no, no. It is practically universal. And the thing is, if you ask people, why is it okay to murder?
Starting point is 00:08:24 That's what it is. Murder a man with a lever, and not okay to do it with your hands? People don't really know. Pulling the lever to save the five, I don't know, that feels better than pushing the one to save the five. But I don't really know why. So that's a good, there's a good moral quandary for you.
Starting point is 00:08:45 Ha ha ha. And if having a moral sense is a unique and special human quality, then maybe we, we, us two humans anyway, you and me, should at least inquire as to why this happens. And I happen to have met somebody who has a hunch. He's a young guy at Princeton University, wild curly hair, bit of mischief in his eye. His name is Josh Greene.
Starting point is 00:09:12 All right. And he spent the last few years trying to figure out where this inconsistency comes from. How do people make this judgment? Forget whether or not these judgments are right or wrong. Just what's going on in the brain that makes people distinguish so naturally and intuitively between these two cases,
Starting point is 00:09:28 which from an actuarial point of view are very, very, very similar, if not identical. Josh is, by the way, a philosopher and a neuroscientist, so this gives him special powers. He doesn't sort of sit back in a chair, smoke a pipe, and think, now why do you have these differences? He says, no, I would like to look inside people's heads, because in our heads, we may find clues as to where these feelings of revulsion or acceptance come from. In our brain.
Starting point is 00:09:58 So we're here in the control room. We basically just see his... And it just so happens that in the basement of Princeton there was this, some, yeah, yeah, well, big circular thing. Yeah, it looks kind of like an airplane engine. A hundred-eighty-thousand-pound brain scanner. I'll tell you a funny story. You can't have any metal in there because of the magnet, so we have this long list of questions that we ask people to make sure they can go in: Do you have a pacemaker? Have you ever worked with metal? Blah blah blah blah.
Starting point is 00:10:25 Do you work a lot with metal? Yeah, because you can have little flecks of metal in your eyes that you would never even know are there from having done metalworking. And one of the questions is whether or not you wear a wig or anything like that, because they often have metal wires and things like that. And there is this very nice woman who does brain research here, who's Italian, and she's asking her subjects over the phone all these screening questions.
Starting point is 00:10:46 So I have this person over to dinner and she said, yeah, I ended up doing this study, but they asked the weirdest questions. This woman's like, do you have herpes? And I'm like, what does it have to do whether I have herpes or not? I'm going to say it anyway. And she said, you know, she'd asked, do you have a hairpiece? But so now she asks people if they wear a wig or whatever. Anyhow, what Josh does is he invites people into this room,
Starting point is 00:11:09 has them lie down on what is essentially a cot on rollers, and he rolls them into the machine. Their heads are braced, so they're sort of stuck in there. You ever done this? Oh, yeah. Yeah, several times. And then he tells them stories. He tells them the same two, you know, trolley tales that you told before.
Starting point is 00:11:25 And then at the very instant that they're deciding whether I should push the lever or whether I should push the man, that instant the scanner snaps pictures of their brains. And what he found in those pictures was, frankly, a little startling. He showed us some. I'll show you some stuff. Okay. Let me think. The picture that I'm looking at is the sort of a, it's a brain looked at, I guess, from the top down. Yeah, it's top down and sort of sliced, you know, like a deli slicer. And the first slide that he showed me was a human brain being asked the question,
Starting point is 00:11:57 would you pull the lever, and the answer in most cases was yes. Yeah, I'd pull the lever. When the brain's saying yes, you'd see little kind of peanut-shaped spots of yellow. This little guy right here and these two guys right there. The brain was active in these places. And oddly enough, whenever people said yes to the lever question, the very same pattern lit up. Then he showed me another slide.
Starting point is 00:12:23 This was a slide of a brain saying no. No, I would not push the man. I will not push the large man. And in this picture, this one we're looking at here, this... It was a totally different constellation of regions that lit up. This is the no, no, no crowd. I think this is part of the no, no, no crowd. So when people answer yes to the lever question, there are places in their brain which glow?
Starting point is 00:12:45 But when they answer no, I will not push the man, then you get a completely different part of the brain lighting up. Even though the questions are basically the same. What does that mean? And what does Josh make of this? Well, he has a theory about this. A theory not proven, but I think
Starting point is 00:12:59 that this is what I think the evidence suggests. He suggests that the human brain doesn't hum along like one big unified system. And so he says, maybe in your brain and every brain, you'll find little warring tribes, little subgroups. One that is sort of doing a logical sort of counting kind of thing. You've got one part of the brain that says,
Starting point is 00:13:20 huh, five lives versus one life, wouldn't it be better to save five versus one? And that's the part that would glow when you answer, yes, I'd pull the lever. Yeah, pull the lever. But there's this other part of the brain, which really, really doesn't like personally killing another human being
Starting point is 00:13:34 and gets very upset at the Fat Man case and shouts, in effect: No! It understands it on that level and says, No! No! Bad, don't do it! No! Bad, don't do it!
Starting point is 00:13:46 No, no! Instead of having sort of one system that just sort of churns out the answer, bing, we have multiple systems that give different answers, and they duke it out, and hopefully out of that competition comes morality. This is not a trivial discovery, that you struggle to find right and wrong depending upon what part of your brain is shouting the loudest. It's like bleacher morality.
Starting point is 00:14:16 Do you buy this? You know, I just don't know. I've always kind of suspected that a sense of right and wrong is mostly stuff that you get from your mom and your dad and from experience, that it's culturally learned for the most part. Josh is kind of a radical in this respect. He thinks it's biological, I mean deeply biological,
Starting point is 00:14:39 that somehow we inherit from the deep past a sense of right and wrong that's already in our brains from the get-go, before mom and dad. Our primate ancestors, before we were full-blown humans, had intensely social lives. They have social mechanisms that prevent them from doing all the nasty things that they might otherwise be interested in doing. And so deep in our brain we have what you might call basic primate morality. And basic primate morality doesn't understand things like tax evasion, but it does understand things like pushing your buddy off of a cliff. So you're thinking, back at the man on the bridge, that I'm on the bridge next to the large man, and that I have hundreds of thousands of years of
Starting point is 00:15:22 training in my brain that says don't murder the large man. Right. And even if I'm thinking, if I murder the large man, I'm going to save five lives and only kill the one, but there's something deeper down that says, don't murder the large man. Right. Now that case, I think it's a pretty easy case. Even though it's five versus one, in that case, people just go with what we might call
Starting point is 00:15:42 the inner chimp. But there are other... But inner chimp is your unfortunate way of describing an act of deep goodness. Well, that's what's interesting. Ten Commandments, from God. Right. Well, what's interesting is that we think of basic human morality as being handed down from on high.
Starting point is 00:16:01 And it's probably better to say that it was handed up from below. That our most basic core moral values are not the things that we humans have invented, but the things that we've actually inherited from other creatures. The stuff that we humans have invented are the things that seem more peripheral and variable. But something as basic as thou shalt not kill, which many people think was handed down in tablet form from a mountaintop, from God directly to humans, no chimps involved. Right. You're suggesting that hundreds of thousands of years of on-the-ground training have gotten
Starting point is 00:16:35 our brains to think, don't kill your kin. Don't kill our... Right, at least, you know, that should be your default response. I mean, certainly chimps are extremely violent, and they do kill each other, but they don't do it as a matter of course. They, so to speak, have to have some context-sensitive reason for doing so. So, yeah, we're getting to the rub of it. You think that profound moral positions may be somehow embedded in brain chemistry. Yeah.
Starting point is 00:17:06 And Josh thinks there are times when these different moral positions that we have embedded inside of us in our brains, when they can come into conflict and in the original episode, we went into one more story. This one you might call the crying baby dilemma. The situation is somewhat similar to the last episode of Mash for people who are familiar with that, but the way we told the story goes like this. It's wartime. There's an enemy patrol coming down the road.
Starting point is 00:17:35 You are hiding in the basement with some of your fellow villagers. Let's kill those lights. And the enemy soldiers are outside. They have orders to kill anyone that they find. Quiet! Now nobody make a sound until they pass us. So there you are, you're huddled in the basement, all around you are enemy troops, and you're holding your baby in your arms.
Starting point is 00:17:58 Your baby with a cold, a bit of a sniffle, and you know that your baby could cough at any moment. If they hear your baby, they're going to find you and the baby and everyone else, and they're going to kill everybody. And the only way you can stop this from happening is to cover the baby's mouth. But if you do that, the baby's going to smother and die. If you don't cover the baby's mouth,
Starting point is 00:18:20 the soldiers are going to find everybody and everybody's going to be killed, including you, including your baby. And you have the choice. Would you smother your own baby to save the village? Or would you let your baby cough, knowing the consequences? And this is a very tough question. People take a long time to think about it, and some people say yes and some people say no.
Starting point is 00:18:49 Yes, I think I would kill my baby to save everyone else and myself. No, I would not kill a baby. I feel because it's my baby, I have the right to terminate the life. I'd like to say that I would kill the baby, but I don't know if I'd have the inner strength. No, if it comes down to killing my own child, my own daughter, my own son, then I choose that.
Starting point is 00:19:08 Yeah, if you have to, of course. It was done in World War II. When the Germans were coming around, there was a mother that had a baby that was crying, and rather than be found, she actually suffocated the baby, but the other people lived. Sounds like an old MASH episode. No, you do not kill your baby. In the final MASH episode, the Korean woman who's a character in this piece, she murders her baby. She killed it.
Starting point is 00:19:38 She killed it. Oh my god. Oh my god. Oh my God! I didn't mean for her to kill it! I just wanted it to be quiet! It was a baby! She smothered her own baby! What Josh did is he asked people the question, would you murder your own child, while they were in the brain scanner. And at just the moment when they were trying to decide what they would do, he took pictures of their brains.
Starting point is 00:20:27 And what he saw, the contest we described before, was global in the brain. It was like a world war. That gang of accountants, that part of the brain, was busy calculating, and calculating: whole village could die, whole village could die. But the older and deeper reflex also was lit up, shouting, don't kill the baby. No, no, don't kill the baby. Inside, the brain was literally divided: do the calculation, don't kill the baby. Two different tribes in the brain literally trying to shout each other down. And, Jad, this
Starting point is 00:21:02 was a different kind of contest than the ones we talked about before. Remember before, when people were pushing a man off of a bridge, overwhelmingly, their brains yelled, no, no, don't push the man, and when people were pulling the lever, overwhelmingly, yeah, yeah, pull the lever. Right. There it was distinct. Here, I don't think really anybody wins. Well, who breaks the tie? I mean, you have to answer something, right? That's a good question. And now is there a... what happens? Is it just two cries that fight it out, or is there a judge? Well, that's an interesting question, and that's one of the things that we're looking at. When you are in this moment with parts of your brain contesting, there are two brain regions. These
Starting point is 00:21:44 two areas here towards the front, right behind your eyebrows, left and right, that light up. And this is particular to us. He showed me a slide. It's those sort of areas that are very highly developed in humans as compared to other species. So when we have a problem that we need to deliberate over... The front of the brain, this is above my eyebrow, sort of? Yeah, right about there. And there's two of them, one on the left and one on the right.
Starting point is 00:22:12 Bilateral. And they are the things that monkeys don't have as much of as we have. Certainly, these parts of the brain are more highly developed in humans. So looking at these two flashes of light at the front of a human brain, you could say we are looking at what makes us special. That's a fair statement. The human being wrestling with a problem. That's what that is. Yeah, where it's both emotional, but there's also a sort of a rational attempt to sort of sort through those emotions.
Starting point is 00:22:39 Those are the cases that are showing more activity in that area. So in those cases, when these spots above our eyebrows become active, what are they doing? Well, he doesn't know for sure, but what he found is, in these close contests, whenever those nodes are very, very active, it appears that the calculating section of the brain gets a bit of a boost, and the visceral inner-chimp section of the brain is kind of muffled. No! No!
Starting point is 00:23:07 The people who chose to kill their children, who made what is essentially a logical decision. Over and over, those subjects had brighter glows in these two areas and longer glows in these two areas. So there is a definite association between these two dots above the eyebrow and the power of the logical brain over the inner chimp, or the visceral brain. Well, you know, that's the hypothesis. So it's going to take a lot more research to sort of tease
Starting point is 00:23:37 apart what these different parts of the brain are doing, or if some of these are just sort of activated in an incidental kind of way. I mean, we really don't know. It's all very new. Okay, so that was the story we put together many, many, many years ago, about a decade ago. And at that point, the whole idea of thinking of morality
Starting point is 00:24:01 is kind of purely a brain thing, it's relatively new. And certainly the idea of philosophers working with fMRI machines, that was super new. But now here we are, 10 years later, and some updates. First of all, Josh Greene. So in the long, long stream of time, I assume now you have three giraffes, two bobbs and children. Yeah, so two kids, and we're close to adding a cat.
Starting point is 00:24:25 We talked to him again. He has started a family. He's switched labs from Princeton to Harvard. But that whole time, that interim decade, he has still been thinking and working on the trolley problem. Did you ever write the story differently? Absolutely. For years, he's been trying out different permutations
Starting point is 00:24:41 of the scenario on people like, OK. Instead of pushing the guy off the bridge with your hands, what if you did it but not with your hands? So in one version we asked people about hitting a switch that opens a trap door on the foot bridge and drops the person. In one version of that the switch is right next to the person, in another version the switch is far away. And in yet another version you're right next to the person and you don't push them off with your hands, but you push them with a pole. And to cut to the chase, what Josh has found is that the basic results that we talked about. That's roughly held up.
Starting point is 00:25:16 Still the case that people would like to save the most number of lives, but not if it means pushing somebody with their own hands or with a pole for that matter. Now, here's something kind of interesting. He and others have found that there are two groups that are more willing to push the guy off the bridge. They are Buddhist monks and psychopaths. I mean, some people just don't care very much about hurting other people. They don't have that kind of an emotional response.
Starting point is 00:25:43 That would be the psychopaths, whereas the Buddhist monks, presumably, are really good at shushing their inner chimp, as he called it, and just saying to themselves, you know, I'm aware that this is, that killing somebody is a terrible thing to do. And I feel that, but I recognize that this is done for a noble reason, and therefore, it's, it's, it's okay. So there's all kinds of interesting things you can say about the trolley problem as a thought experiment, but at the end of the day, it's just that. It's a thought experiment.
Starting point is 00:26:10 What got us interested in revisiting it is that it seems like the thought experiment is about to get real. That's coming up right after the break. Jad. Robert. Radiolab. Okay, so where we left it is that the trolley problem's about to get real. Here's how Josh Greene put it. You know, now as we're entering the age of self-driving cars, ah, this is like the trolley problem now finally come to life. Oh, this car's coming.
Starting point is 00:26:56 Oh, the future of the automobile is here. Oh, this car is... Autonomous vehicles. It's here. Feel this. The first self-driving Volvo will be offered to customers in 2021. Oh, where's it going? This legislation is the first of its kind, focused on the car of the future that is more of a supercomputer on wheels.
Starting point is 00:27:19 Oh, is the car coming? Okay, so self-driving cars, unless you've been living under a muffler, they are coming. It's gonna be a little bit of an adjustment for some of us. C'mon! You hit the brakes! Hit the brakes! But what Josh meant when he said it's the trolley problem come to life is basically this. Imagine this scenario.
Starting point is 00:27:38 The self-driving car now is headed towards a bunch of pedestrians in the road. The only way to save them is to swerve out of the way, but that will run the car into a concrete wall, and it will kill the passenger in the car. What should the car do? Should the car go straight and run over, say, those five people, or should it swerve and kill the one person? That suddenly is a real world question.
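To make the two decision rules at stake here concrete, here is a minimal sketch in Python. It is purely illustrative, not any manufacturer's actual software; the Outcome fields, the function names, and the five-versus-one numbers are invented to mirror the scenario just described.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        action: str             # e.g. "stay_in_lane" or "swerve"
        pedestrian_deaths: int  # expected deaths outside the car
        occupant_deaths: int    # expected deaths inside the car

    def minimize_total_harm(outcomes):
        # Utilitarian rule: fewest total expected deaths, everyone counted equally.
        return min(outcomes, key=lambda o: o.pedestrian_deaths + o.occupant_deaths)

    def protect_occupant_first(outcomes):
        # Self-protective rule: save the people in the car first,
        # and only then minimize harm to everyone else.
        return min(outcomes, key=lambda o: (o.occupant_deaths, o.pedestrian_deaths))

    # The dilemma from the story: go straight into five pedestrians,
    # or swerve into the wall and kill the one passenger.
    scenario = [
        Outcome("stay_in_lane", pedestrian_deaths=5, occupant_deaths=0),
        Outcome("swerve", pedestrian_deaths=0, occupant_deaths=1),
    ]

    print(minimize_total_harm(scenario).action)     # -> swerve
    print(protect_occupant_first(scenario).action)  # -> stay_in_lane

These two rules are exactly what the survey answers below pull apart: people endorse the first in the abstract, and shop by the second.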
Starting point is 00:28:07 If you ask people in the abstract, what theoretically should a car in this situation do? They're much more likely to say, I think you should sacrifice one for the good of the many. They should just try to do the most good, or avoid the most harm. So if it's between one driver and five pedestrians, Logically it would be the driver.
Starting point is 00:28:22 Killed that on driver. Be selfless. I think it would be the driver. Kill that, I'm driver. Be selfless. I think it should kill the driver. But when you ask people, forget the theory. Would you want to drive in a car that would potentially sacrifice you to save the lives of more people in order to minimize the total amount of harm they say no? I mean, I wouldn't buy it.
Starting point is 00:28:39 No, no. Absolutely not. That would kill me in it? No. So I'm not going to buy a car that's going to purposely kill me. Oh no, I wouldn't buy. For sure no.
Starting point is 00:28:54 I'll sell it, but I wouldn't buy it. So there's your problem. People would sell a car, and an idea of moral reasoning, that they themselves wouldn't buy. And last fall, an exec at Mercedes-Benz face-planted right into the middle of this contradiction. Welcome to Paris, one of the most beautiful cities in the world, and welcome to the 2016 Paris Motor Show.
Starting point is 00:29:22 Home to some of the most beautiful cars in the world. Okay, October 2016, the Paris Motor Show, you had something like a million people coming in over the course of a few days. All the major carmakers were there. Here is Ferrari, you can see the LaFerrari Aperta, and of course the new GTC4Lusso T. Everybody was debuting their new cars, and one of the big presenters in this whole fair was this guy. In the future, you'll have cars where you don't even have to have your hands on the steering wheel anymore, but maybe you watch a movie on the head-up display or maybe you want to do your emails. That's really what we are striving for.
Starting point is 00:29:56 This is Christoph von Hugo, a senior safety manager at Mercedes-Benz. He was at the show, sort of demonstrating a prototype of a car that could sort of self-drive its way through traffic. In this E-Class today, for example, you've got a maximum of comfort and support systems. You'll actually look forward to being stuck in traffic jams, won't you? Of course, of course. He was doing dozens and dozens of interviews through the show, and in one of those interviews, unfortunately this one we don't have on tape, he was asked,
Starting point is 00:30:21 what would your driverless car do in a trolley-problem-type dilemma, where maybe you have to choose between one or many? And he answered, quote, if you know you can save one person, at least save that one. If you know you can save one person, save that one person. Save the one in the car. This is Michael Taylor, correspondent for Car and Driver magazine. He was the one that Christoph von Hugo said that to. If you know for sure that one thing, one death, can be prevented, then that's your first priority.
Starting point is 00:30:58 Now when he said this to you, this is producer Amanda Aronczyk, did it seem controversial at all in the moment? In the moment it seemed incredibly logical. I mean, all he's really doing is saying what's on people's minds, which is that... No. I mean I wouldn't buy it. Who's gonna buy a car that chooses somebody else over them? Anyhow, he makes that comment, Michael prints it, and... a kerfuffle ensues. Save the one in the car, that's Christoph von Hugo from Mercedes.
Starting point is 00:31:26 But then when you lay out the question, you sound like a bit of a heel, because you want to save yourself as opposed to the pedestrian. Doesn't it ring, though, of like just privilege? It does. Yeah, it does. Wait a second. What would you do? It's you or a pedestrian. And it's just, you know, I don't know anything about this pedestrian.
Starting point is 00:31:40 And it's just, you know, I don't know anything about this pedestrian. It's just you or a pedestrian, just a regular guy walking down the street. Screw everyone who's not in a Mercedes. And there was this kind of uproar about that, how dare you drive these selfish, you know, make these selfish cars. And then he walked it back and he said, no, no, what I mean is that just that we have a better chance of protecting the people in the cars. So we're going to protect them because they're
Starting point is 00:32:03 easier to protect. But of course, there's always gonna be trade-offs. Yeah. And those trade-offs could get really, really tricky. And subtle. Because obviously these cars have sensors. Sensors like cameras, radars, lasers, and ultrasound sensors. This is Raj Rajkumar, he's a professor at Carnegie Mellon.
Starting point is 00:32:27 I'm the co-director of the GM-CMU Connected and Autonomous Driving Collaborative Research Lab. He is one of the guys that is writing the code that will go inside GM's driverless car. He says, yeah, the sensors at the moment on these cars are still evolving, pretty basic. We are very happy if today you can actually detect a pedestrian, can detect a bicyclist or motorcycle, different vehicles of different shapes, sizes and colors.
Starting point is 00:32:52 But he says it won't be long before we can actually know a lot more about who these people are. Eventually they will be able to detect people of different sizes, shapes and colors. Like oh, that's a skinny person, that's a small person, tall person, black person, white person, that's a little boy, that's a little girl. So forget the basic moral math. Like, what does a car do if it has to decide, oh, do I save this boy or this girl?
Starting point is 00:33:14 What about two girls versus one boy and an adult? How about a cat versus a dog? A 75-year-old guy in a suit versus that person over there who might be homeless? You can see where this is going. And it's conceivable the cars will know our medical records. And back at the car show... We've also heard about car-to-car communication.
Starting point is 00:33:31 Well, that's also one of the enabling technologies in highly automated driving. The Mercedes guy basically said in a couple of years the cars will be networked. They'll be talking to each other. So just imagine a scenario where, like, cars are about to get into an accident, and right at the decision point, they're, like, conferring. Well, who do you have in your car?
Starting point is 00:33:47 Me, I got a 70-year-old Wall Street guy, makes eight figures. How about you? Oh, I'm a bus full of kids. Kids have more years left. You need to move. Well, hold up. I see that your kids come from a poor neighborhood and have asthma. So, I don't know. So you can basically tie yourself up in knots and wrap yourself around an axle. We do not think that any programmer should be given this major burden of deciding who survives and who gets killed. I think these are very fundamental, deep issues that society has to decide at large. I don't think a programmer eating pizza and sipping Coke
Starting point is 00:34:23 should be making that call. How does society decide? I mean, help me imagine that. I think programmer eating pizza and sipping coke should be making that call. How does society decide? I mean, help me imagine that. I think it really has to be an evolutionary process, I believe. Raj told us that two things basically need to happen. First, we need to get these robot cars on the road, get more experience with how they interact with us human drivers and how we interact with them.
Starting point is 00:34:40 And two, there need to be like industry-wide summits. No one company is going to solve that. This is Bill Ford Jr. of the Ford company giving a speech in October of 2016 at the Economic Club of D.C. And we have to have, because could you imagine if we had one algorithm and Toyota had another and General Motors had another, I mean it would be, I mean, obviously you couldn't do that. Because like what if the Tibetan cars make one decision and the American cars make another?
Starting point is 00:35:07 So we need to have a national discussion on ethics, I think, because we've never had to think of these things before, but the cars will have the time and the ability to do that. So far, Germany is the only country that we know of that has tackled this head-on. One of the most significant points the Ethics Commission made is that autonomous and connected driving is an ethical imperative. The government has released a code of ethics that says, among other things, self-driving cars are forbidden to discriminate between humans in almost any
Starting point is 00:35:45 way, not on race, not on gender, not on age, nothing. These shouldn't be programmed into the cars. One can imagine a few clauses being added in the Geneva Convention, if you will, of what these autonomous vehicles should do, a globally accepted standard, if you will. How we get there, to that globally accepted standard, is anyone's guess. And what it will look like, whether it'll be like a coherent set of rules or like rife with the kind of contradictions we see in our own brain, that also remains to be seen. But one thing is clear.
Starting point is 00:36:18 Oh, there's cars coming. Oh, there's cars. Oh, there are cars coming. Oh, dear Jesus, I could never. Ah, ah, ah! Oh, where's it going? God damn you. Oh my God.
Starting point is 00:36:41 Okay, we do need to caveat all this by saying that the moral dilemma we're talking about in the case of these driverless cars is gonna be super rare. Mostly what'll probably happen is that like the plane loads full of people that die every day from car accidents, well, that's just gonna hit the floor. And so you have to balance the few cases
Starting point is 00:37:04 where a car might make a decision you don't like against the massive number of lives saved. I was thinking, actually, of a different thing. I was thinking, even though you dramatically bring down the number of bad things that happen on the road, you dramatically bring down the collisions, you dramatically bring down the mortality, you dramatically lower the number of people who are drunk coming home from a party and just
Starting point is 00:37:29 ram someone sideways, killing three of them and injuring two of them for the rest of their lives. Those kinds of things go way down. But the ones that remain are engineered, like they are calculated, almost with foresight. So here's the difference, and this is just an interesting difference. Oh, damn, that's so sad that that happened. You got drunk and did that, maybe you should go to jail. But you mean that the society engineered this in?
Starting point is 00:38:00 That is a big difference. One is operatic and seems like the forces of destiny, and the other seems mechanical and pre-thought through. Premeditated, yeah. There's something dark about a premeditated, expected death. And I don't know what you do about that. Everybody's on the hook for that.
Starting point is 00:38:18 In the particulars, in the particulars, it feels dark. It's a little bit like when, you know, should you kill your own baby to save the village? Like in the particular instance of that one child, it's dark. But against the backdrop of the lives saved, it's just a tiny pinprick of darkness. That's all this is. Yeah, but you know how humans are.
Starting point is 00:38:33 If you argue back, yes, a bunch of smarty-pantses concocted a mathematical formula which meant that some people had to die, and here they are, there are many fewer than before. A human being, just like Josh would tell you, would have a roar of feeling and of anger
Starting point is 00:38:50 and saying, how dare you engineer this in? No, no, no, no, no. And that human being needs to meditate like the monks to silence that feeling because the feeling in that case is just getting in the way. Yes and no. And that may be impossible unless you're a monk for God's sake. See we're right back where we started now.
Starting point is 00:39:12 Yeah, all right, we should go. Jad, you have to thank some people, no? Yes, this piece was produced by Amanda Aronczyk with help from Bethel Habte. Special thanks to Iyad Rahwan, Edmond Awad, and Sydney Levine from the Moral Machine group at MIT. Also thanks to Sertac Karaman, Xin Xiang, and Roborace for all their help. And I guess we should go now.
Starting point is 00:39:32 Yeah. I'll, um, I'm Jad Abumrad. I'm not getting into your car. We don't mind. Just take my own. I'm going to rig up an autonomous vehicle to the bottom of your bed. So you're going to go to bed and suddenly find yourself on the highway, driving you wherever I want. There you are. Anyhow, okay, we should go. I'm Jad Abumrad. I'm Robert Krulwich. Radiolab was created by Jad Abumrad and is edited by Soren Wheeler. Lulu Miller and Latif Nasser are our co-hosts. Dylan Keefe is our director of sound design.
Starting point is 00:40:20 Our staff includes Simon Adler, Jeremy Bloom, Becca Bressler, Rachael Cusick, Ekedi Fausther-Keeys, W. Harry Fortuna, David Gebel, Maria Paz Gutierrez, Sindhu Gnanasambandan, Matt Kielty, Annie McEwan, Alex Neason, Sarah Qari, Anna Rascouët-Paz, Alyssa Jeong Perry, Sarah Sandbach, Arianne Wack, Pat Walters, and Molly Webster, with help from Timmy Broderick. Our fact-checkers are Diane Kelly, Emily Krieger, and Natalie Middleton. Hi, I'm Erica, in Yonkers.
Starting point is 00:40:56 Leadership support for Radiolab's science programming is provided by the Gordon and Betty Moore Foundation, Science Sandbox, a Simons Foundation Initiative, and the John Templeton Foundation. Foundational support for Radiolab was provided by the Alfred P. Sloan Foundation.
