Ologies with Alie Ward - Trolleyology (MORAL DILEMMAS + THE TROLLEY PROBLEM) with Joshua Greene

Episode Date: August 20, 2025

Train tracks. Split decisions. And a philosophy humdinger worth debating. Dr. Joshua Greene is a Harvard Psychology professor, neuroscientist, and *actual* Trolleyologist. The moral humdinger that has been used in everything from Supreme Court decisions to board games looks at: What makes you a good person? How do you reason with people who make you scream into a jar like Yosemite Sam? How far would you go to save others? Which charities should get your money? What is active versus passive harm? And what would a monk do? Also: how neurodivergence influences moral decisions, religion used as a moral compass, and your new favorite skeleton on the planet.

Visit Dr. Greene’s website, his charity platform Giving Multiplier, and his online quiz game Tango
Buy his book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, on Bookshop.org or Amazon
A donation went to Giving Multiplier
More episode sources and links
Other episodes you may enjoy: Eudemonology (HAPPINESS), Genocidology (CRIMES OF ATROCITY), Obsessive-Compulsive Neurobiology (OCD), Bonus Episode: The OCD Experience, Artificial Intelligence Ethicology (WILL A.I. CRASH OUT?), Attention-Deficit Neuropsychology (ADHD), Suicidology (SUICIDE PREVENTION & AWARENESS), Dolorology (PAIN), Molecular Neurobiology (BRAIN CHEMICALS), Personality Psychology (PERSONALITIES), Ferroequinology (TRAINS)
400+ Ologies episodes sorted by topic
Smologies (short, classroom-safe) episodes
Sponsors of Ologies
Transcripts and bleeped episodes
Become a patron of Ologies for as little as a buck a month
OlogiesMerch.com has hats, shirts, hoodies, totes!
Follow Ologies on Instagram and Bluesky
Follow Alie Ward on Instagram and TikTok
Editing by Mercedes Maitland of Maitland Audio Productions and Jake Chaffee
Managing Director: Susan Hale
Scheduling Producer: Noel Dilworth
Transcripts by Aveline Malek
Website by Kelly R. Dwyer
Theme song by Nick Thorburn

Transcript
Starting point is 00:00:00 Oh, hey, it's that guy at the coffee shop overthinking his milk choice. Alie Ward, this is Ologies. This is not one you saw coming, is it? Unless you're a Harvard and Princeton-trained philosopher and academic. But my guest is a Harvard and Princeton-trained philosopher and academic. He's a member of the Center for Brain Science and a professor of psychology at Harvard. He's dedicated his life and career to the psychology and the neuroscience of how the brain forms complex ideas and makes choices, and uses real, real philosophical humdingers in his analyses. And one of the courses he teaches is titled
Starting point is 00:00:35 Evolving Morality, from primordial soup to super intelligent machines. And he wrote the book Moral Tribes, Emotion, Reason, and the Gap Between Us and Them. Now, during this interview, we chatted via a video call, and I had to ask the guests a few times to make sure that the collar of his shirt wasn't rubbing on his mic. And the best solution was to undo it. do a couple of shirt buttons and like splay his collar out like he was going to saunter onto a dance floor in Miami and he seemed genuinely mortified by it but I like to think that it made for a super casual vibe this is such a fun conversation never did you think gruesome philosophy could be this engaging and friendly but the moral thought experiment that's now known as the trolley problem
Starting point is 00:01:20 it originated with this english philosopher philippa foot who wrote on virtue ethics Like if a streetcar or a trolley is careening on a track that splits, is diverting it to kill fewer people ethical, philosopher Judith Jarvis Thompson used that example in 1976 after Philippa did and coined it the trolley problem. So we're off to these like streetcar races. And the trolley problem examines morals and ethics and utilitarianism and religion and sacrifice and how to look at the principles of the greater good. Also looks at neuroscience. So we're going to get into that. But first, thank you so much to patrons of the show who support us for a dollar or more a month and submit questions for the ologist before we record. Thank you to everyone out there wearing Ologiesmerch from Ologiesmerch.com. And thank you to everyone who leaves reviews of the show, which keeps it up in the charts. And also I read them all, such as this recent one from KWZ, New York, who wrote, come for the science and fun facts, but you'll leave with one remarkable example after another of human beings who care deeply about the world. So thank you, KWZ and Y for caring enough to
Starting point is 00:02:25 submit a review. I read every single one. So keep them coming and know that they make your internet dad. The lady named me cry sometimes. Also, if you do need any shorter kid-friendly non-swearing episodes, you can find those in their own feed. Wherever you get podcasts, they're called Smologis. Okay, let's get into what makes you a good person. How far would you go to save others? Which charities should get your cash? Why are the political divides killing people? What is actually? versus passive harm? What would a monk do? How neurodivergence influences moral decisions? Is religion a moral compass? How to reason with your relatives who vote in ways that make you want to scream into a jar like Yosemite Sam? Possibly your new favorite skeleton on the planet. And if you'd
Starting point is 00:03:11 pull a lever or go a step further with philosopher, professor, author, non-profit champion and official trolleyologist, it's a real word, Dr. Joshua Green. My name is Joshua Green, and I'm a professor in the Department of Psychology and Center for Brain Science at Harvard University, and I use he-him pronouns. Great. That is quite a bio. How long have you been at Harvard? Gosh, I got here as a professor in 2006, so we're coming up on 19 years. And then I was a wee undergrad here in the 90s as mostly a philosophy student. Take me back to the 90s as a philosophy student. Long hair.
Starting point is 00:04:10 Hacky sack, buttoned up, what was your vibe? Just nerd, you know, I think I was just really into it. And I never had a sort of form of cool that went with being a philosopher. What got you into it? I mean, you could go way, way back asking lots and lots of questions as a kid. I remember kind of being unsatisfied with a lot of the answers I was given in Hebrew school. I was raised in a very sort of, you know, fairly reformed secular Jewish family. You know, I always used to argue.
Starting point is 00:04:44 And someone was like, well, we got to put this to good use and suggested that I do debate. So I started doing debate when I was like 12. And I was a pretty young 12 year old. So I was like this little argumentative twerp with my, don't ask me why, yellow pants that I wore. I think I didn't realize that was funky. That was just the pants that I had. And, you know, someone called me Mr. Banana Pants at age 12 as debater. That's harsh.
Starting point is 00:05:09 But I got interested in the questions. And a lot of the questions were really about sort of fundamental social tradeoffs. You know, the rights of the individual versus the greater good kind of came up over and over again. And this person said, you know, like in. cross-examination. It's very formal. It's like, so do you agree that you're saying that it's better to always do the thing that will promote the greater good? And I'm like, yes, yes, yes. And then she said, okay, well, suppose there was a doctor who had some patients and five of these patients were missing organs of various kinds. And then in comes a healthy person with two nice, clean,
Starting point is 00:05:43 ready to go kidneys and a liver. And you could take the organs out of this one person and distribute them to these other five people and assume that, you know, the operation would work. Would it be okay for the doctor to sacrifice that one person to save the other five patients? And I was like, you know, and I lost that debate round, but even worse, I kind of lost my guiding philosophy that was always my go-to, right? And that really stuck with me. And this introduced me to the trolley problem, which was really the underlying sort of philosophical exploration of these sacrificial dilemmas where you can kill one person to save
Starting point is 00:06:22 five people. And what was beautiful about that was it had a really nice type comparison, right? So this is the now, at the time, no one outside of philosophy had heard of these things. So the trolley, for the people who don't know, like for the eight people out there who've never heard of this, the trolley is headed towards five people and you can hit a switch and turn it onto another track where it will run over one person. Most people say that that's acceptable. Okay, so you're at the controls of a trolley. And you have a track with five people on it about to become hamburger meat by fate. Or you can switch tracks and you can kill one person to save the other five. Most people are like, yeah, just let's temper with fate. Let's kill the one. Also, that person dies a hero. And then you get to go to therapy every day for the rest of your life. But the more complex like leveling up of the trolley problem is instead of those two tracks, instead of that fork, five on one, one on the other. You instead, you have a footbridge or like an overpass right above the tracks there's one track below you five people are tied to the track i don't know who did it that's not part of the philosophical equation
Starting point is 00:07:30 but you're on that footbridge overpass standing on it looking down at this track five people tied to it and you're standing next to a big tall person with like a heavy backpack enough weight to stop the trolley from running over the five people below do you push that big tall person with a backpack who's standing next to onto the tracks below? They get hit by the trolley, but it stops the trolley from running over the five. Is that okay? I felt, and most people think that that's wrong,
Starting point is 00:07:57 and I thought, ah, this is like the perfect fruit fly, because here you've got the biggest divide in Western moral philosophies between the utilitarians like John Stuart Mill and Jeremy Bentham, who are saying morality is ultimately about producing good consequences. So John Stuart Mill was a philosopher and a politician in the mid-1800s, who advocated for women's rights in social liberalism, and his predecessor
Starting point is 00:08:23 was this legendary Jeremy Bentham, who was a founding figure of utilitarianism, which steers people toward doing Walt Wolf results in the greatest good for the greatest number of people. Now, people who lean toward the ethics of Emmanuel Kant, however, say that happiness can be elusive. It's different for different people, and it can come and go. and consequences shouldn't determine actions. Rather, morality is independent of its effect and it's guided by these universal laws of what is right.
Starting point is 00:08:53 And the kind of Kantians who say, no, morality is fundamentally about people's rights and our duties to respect those rights and certain lines that must not be crossed or must be crossed. And in the original switch case where you can turn the trolley away from the five and on to the one, to the extent you agree with most people that it's okay to hit the switch,
Starting point is 00:09:13 that fits very well with the utilitarian perspective or consequentialist is how philosophers often say it. Okay, but remember, we take it up a notch, not just by switching a mere lever to save five people to kill one,
Starting point is 00:09:27 but rather by heaving a bystander onto the tracks of an overpass footbridge. But the footbridge case seems like a real vindication for Kant and the Kantians that no, even sometimes when you can promote the greater good, even if we grant that all of this will work, et cetera, it still seems wrong, right?
Starting point is 00:09:44 And I felt that, and I was like, what is going on there? And then I got more into the psychology and ultimately into the neuroscience behind that switch footbridge distinction in our heads. And that kind of is what turned me from being just a regular philosopher into being a philosopher slash experimental psychologist slash cognitive neuroscientist. When it comes to philosophy, a lot of us, you know, like I asked at the top, like, did you have long hair in a hacky sack? A lot of us don't meet a lot of philosophers, especially professional ones. And so it's such a niche and esoteric and can't even fathom what level you're on. What is a philosopher and what is philosophy and what's the importance of them in society? Do they impact the way that laws are
Starting point is 00:10:37 enforced? Do they impact Supreme Court decisions? Like, where is philosophy? in our society? So there are some philosophers whose ideas really matter a lot. And the living philosopher who's been most important to me is the philosopher Peter Singer, who kind of blew my mind back in those early days when I was thinking about these types of moral dilemmas. Just a quick background. So Peter Singer, still very much alive, 79 years old, is a moral philosopher and a bioethics
Starting point is 00:11:07 professor from Australia. And his angle is how do we apply ethics to get the greatest amount of happiness for the greatest number of people or living things. But yes, singing Peter Singer's praises. He was into animal welfare before school. And he's also spent decades saying like, hey, rich people, how about helping others? And his ideas have literally saved millions of people's lives. So philosophy is kind of a, it's a high-stakes hit or miss. The people who make a difference make an enormous difference.
Starting point is 00:11:41 And part of why I expanded into science is I felt like if I had any shot at making a difference as a philosopher, I'm in a better position to do it as a philosopher scientist who can look at what's going on in our heads and say, this is what's happening. And when you understand that, does that change our thinking about what's really right or wrong? Yeah. And what was his study? Can you explain that? Well, so Peter Singer is known for many, a few things. I mean, one of them is, essentially being the philosophical grandfather of the animal rights movement. So he wrote a book called Animal Liberation that came out in the 70s.
Starting point is 00:12:15 And it starts out in an interesting way. He says, you know, people say, oh, you're writing a book about animal rights. You must really love animals. And he says, no, it's not about what I love or don't love or want to cuddle up with or play fetch with. It's about whether or not animals suffer, which is a point that Jeremy Bentham, the original utilitarian in the 18th century made. And he sort of made the case that our practices, especially with factory farming, which
Starting point is 00:12:44 just can't be morally justified. The other big thing that he did, and this is the thing that has probably had the most direct influence on my work, is his famous drowning child argument. So you may have heard some version of this. You're walking along, and there's a pond, and there is a child who is drowning in the pond, and you can wade in and save this child, but you're going to ruin your fancy new shoes or suit or whatever it is, and it'll cost you some amount of money. to replace them. And if you ask people, is it okay to let the child drown because you don't want
Starting point is 00:13:12 to ruin your clothes? Most people would say, no, that's terrible. That's monstrous. Okay. And Peter Singer says, good, I agree. And then he says, but there are children on the other side of the world who are drowning in poverty, who are badly in need of food and medicine. And for the price of the clothes that you're wearing, you or you combined with a small number of other people can save someone's life. Well, shit. So if you have an obligation to to wade into the pond and save the child at some expense to yourself, why don't you have a comparable obligation to save people nearby or on the other side of the world whose lives are in grave danger due to their circumstances?
Starting point is 00:13:52 And a lot of people spent a lot of time trying to argue why Singer was wrong, but I was convinced that he was right, even if it goes against the grain of human nature. So Singer essentially made the argument that we in the affluent world should be doing much more for people in desperate need, and typically your money goes farthest overseas, you can provide a treatment that rids a child of devastating parasitic intestinal worms for less than a dollar, right? And for $100, you can do it 100 times. Like, there's no, you know, there's effectively no limit, right? And that argument really stuck with me and has motivated a lot of the other work. And, you know, you've been in the game for 30 years now. Do you see any changes that happen as the
Starting point is 00:14:35 world changes. As human beings who have been around for 300,000 years, we didn't always know about someone who was suffering on the other side of the world. We didn't know about a lot of things that weren't local to us. We didn't have factory farming. So do you feel like these philosophical problems are kind of a moving target as our way of life changes? Absolutely. I mean, take the case of animal rights. I mean, when Singer wrote Animal Liberation, it was just a tiny fraction of the population that was vegetarian or vegan for moral reasons, right? Especially, you know, in the West or people who didn't already, weren't already part of a religious tradition, let's say, that had that kind of norm. Now, there's nothing remarkable at all about meeting
Starting point is 00:15:16 someone who's a vegetarian or these days even a vegan because they don't want to participate in killing animals and making them suffer. And then the other thing is in terms of people in the affluent world using their money effectively to alleviate as much suffering is possible, which mostly means overseas, that movement really took off. And billions have been raised, you know, very explicitly under this philosophical banner. It's been a little complicated recently. So this is what I'm referring to as the effective altruism movement. Okay. So effective altruism, it's been in the headlines for a lot of good reasons, a little bit of bad reasons. So on one hand, it's saved countless lives. It's raised billions of dollars from
Starting point is 00:15:59 wealthy, wealthy donors. Some billionaires pledge a percentage of their annual income to the most effective charities on the globe. On the other hand, there was this young cryptocurrency billionaire who was like a staunch advocate publicly of effective altruism and is now serving 25 years in prison for defrauding his investors and being on record saying that he used effective altruism to just bolster his platform and his image is a good guy. So billionaire downfalls, philosophy, gossip, it's got it all, including accidental dad jokes because the incarcerated finance guy's name is Samuel Bankman Freed. And unfortunately, he is no longer a bankman, nor is he free. But back to trolleyology. And what about the experimental psychology side of it and the neuroscience side of it?
Starting point is 00:16:48 When you're looking at something like the trolley problem, can you take one life to spare five? Are you putting people in fMRI machines? Are you having them fill out questionnaires? Are you mostly talking to students or are you going a man on the street style? Like how do you collect data on these kind of moral dilemmas? So all of the above, the sort of breakthrough experiment that I did while I was a philosophy PhD student, and this was done with my mentor then Jonathan Cohen, who's still at Princeton, I had the thought that what's going on in the footbridge case is there's a kind of an emotional response to the thought of sort of pushing this person and harming them in this very sort of direct and intentional way.
Starting point is 00:17:31 So remember, in the footbridge case, it's not just this switch lever. In the footbridge scenario, you are laying hands on a person to yeat them onto the tracks to save five people. Is it rude? Deeply. Is it morally sound? And that you could see that response in the brain. So if you put people in the scanner and you have them consider dilemmas like the switch case and dilemmas like the footbridge case, you'd see more activity related to a kind of emotional response in the footbridge case. The stronger that response, the more people would say, no, you can't push the guy off the footbridge or whatever it is. And we found something broadly consistent with that in that first neuroimaging
Starting point is 00:18:09 study, which was published in 2001. That 2001 first neuroimaging study was this breakthrough paper titled an integrative theory of prefrontal cortex function, which proposed that cognitive control stems from the active maintenance of patterns of activity in the prefrontal cortex that represents goals and the means to achieve them. When people talk about the prefrontal cortex, this is the paper they're talking about. It's been cited in 12,390 other papers. That's what a lot of academics consider a shitload. The cool thing about brain imaging is that you can look and see what's going on, but it's a very noisy signal and you don't have experimental control, right? You can change what people are reading or are asked to think about, but you can't turn on or off some
Starting point is 00:18:56 part of the brain. So these imaging studies and some that came after his first work was published included volunteers who had sustained brain injury. Whereas a brain lesion, that part of the brain is permanently turned off, right? And so the actual studies or the work that made me think to make that prediction about brain imaging was work with patients like the famous case of Phineas Gage. Listen to this. So Phineas Gage is the 19th century railroad foreman who was working on the railroad all the live long day in Vermont and got an iron spike through the front of his head.
Starting point is 00:19:30 And as a result was fundamentally changed. I mean, you might think that someone who had that kind of injury, they wouldn't be able to speak. They wouldn't be able to ever do another math problem. His sort of rational faculties and language faculties and just general sort of thinking ability remained intact. But his emotions and his decision-making were very much damaged. By his emotions and decision-making being very much damaged, Josh means that, yeah, when this young railworker was penetrated by a 43-inch iron bar that exited his skull and then clattered to the ground 80 feet away, he survived.
Starting point is 00:20:08 He lived for 12 more years. But, yeah, his social inhibitions vanished. He developed what people said was a very surly and sometimes violent. disposition. He took just some heavy drinking. I'm willing to bet he did quite a bit of honky talking with the ladies. That is to say he went a little off the rails. And his skull and the tamping iron that blew through it, they're both at Harvard University School of Medicine. And I can tell you, I would have been looking at pictures of him, some sultry-looking daguerre types holding his railroad spike like a staff. And he may have had a rod plow through him, but his face card
Starting point is 00:20:45 survived intact. 10 out of 10 would honky tonk. He was a babe. He was a babe. And the way researchers who studied people like this, in particular, this is a reporter in a book called Descartes Error by Antonio Damasio, which I read as an undergrad, said these people, they know the words, but they don't hear the music, they don't feel the music, that they don't have the emotional response. When I read that book, like, I literally jumped up and down on my bed when I got to that passage. I was like, this is what's going on in the footbridge case. And in a typical volunteer, their reaction to pushing someone off a footbridge instead of just pulling a lever is like, dude, hell no, I'm not going to be pushing anyone onto any tracks. But in volunteers with atypical brain anatomy is what's missing in these patients that have damage to the ventrometrial prefrontal cortex, sort of this is the part of your brain above your eyes in the middle of your forehead.
Starting point is 00:21:38 But eventually, people, including DeMazio's group, tested patients like that. and it was exactly as our results predicted. That is, the patients with this kind of brain damage were much more likely to say that it's okay to push the guy off the footbridge. And then people studied other types of patients. So you have patients with damage to a part of the brain called the basolateral amygdala, which is involved
Starting point is 00:22:04 in goal-directed planning. And those people will never say that it's okay or very, very rarely say that it's okay to push the guy off the footbridge. And you see similar responses. and patients who have damage to a part of the brain called the hippocampus, which is involved in kind of envisioning a scenario and deciding how to act on it based on the details of what's going on. People found that different types of drugs you can give people, you give people an anti-anxiety
Starting point is 00:22:28 drug and they become more okay with the utilitarian response. Utilitarian response being pushed the guy. And you give people a drug that it's a depression drug but has a sort of reverse effect early on so that it actually heightens the emotional response. And those people are more likely to say that it's wrong. And we've done further studies that have sort of teased apart the different circuits. So now we have a decent kind of understanding of the basic neural circuitry involved in the sort of yes response and the no response to cases like the footbridge case. Okay, well, then what is it exactly about the footbridge case? And years ago, we did a studies that suggested there were kind of two different things going on. One is what you might call the pushing. So personal force. So if you ask people,
Starting point is 00:23:11 is it okay to push somebody off the footbridge? Like 30% of people will say yes. And then you can do a version to say, is it okay to, let's say the person is standing over a trap door and you can hit a switch that will drop them through the footbridge onto the tracks. There, in that initial study,
Starting point is 00:23:29 like 60% of people said that was okay, right? So something about pushing. And it doesn't matter if you push with your hands or push with a pull. So it's not about the touching. It's about the pushing, right? That's part of it. And then the other part of it is this distinction between harming intentionally versus as a side effect.
Starting point is 00:23:47 And this is something that goes all the way back to like this theological doctrine from St. Thomas Aquinas. And that's been used like in the Catholic Church, for example, to distinguish between a surgical procedure that's an abortion versus that's designed to save the life of the mother but then would end up terminating the fetus's life. Like are you trying to do the thing that's harmful? So basically, is it a side effect or not? And we find that that matters also. For example, if you're running across a narrow footbridge to get to a switch that's going to, you know, you can hit it and save the people, and you're going to incidentally bump somebody off of that footbridge to their death, more people will say that's okay. And that's a direct personal bump, but it's incidental. The harm is a side effect. So Josh says that he and his colleagues worked this out
Starting point is 00:24:35 preliminarily in a 2009 paper titled Pushing Moral Buttons, the interaction between personal force and intention in moral judgment. This was in the journal cognition. And it examined personal force defined as force generated by the agent's muscles that directly impacts the victim. And people get squeamish about that. That is the personal, actually the personal force effect, the pushing versus hitting a switch, that was found everywhere in the world. Really? So basically it tells us really what is violence in our conception right a violent action is really an action that has three things it causes harm that's in the background it is something i didn't mention before active as opposed to passive so this would be the difference between someone you make them go over
Starting point is 00:25:24 the footbridge versus they're about to fall and you don't stop it right so active intentional not a side effect and fairly direct. Like those three things, that's kind of what makes up the core of our sense of this is a violent action. So again, a violent action is one, active and intentional, two, it causes harm, and three, the victim does not want the harm. And bringing this back to Peter Singer, part of why, you know, we're letting people die all the time, people whose lives we could save, it doesn't feel like an active violence
Starting point is 00:25:58 because we're not going in there and killing them. We're allowing circumstances to kill them. So it's passive rather than active. It's not our intention. We're not achieving some goal by doing this or some specific goal. And there's no like physical directness there. So it kind of explains why things that can be incredibly damaging don't set off our alarm bells. They don't have that paradigmatic feeling of like punching somebody in the face or pushing
Starting point is 00:26:27 somebody off of a footbridge. And this, of course, is a big topic in, say, a famine versus a genocide. And one might argue that the former is passive and the latter is active. But often in the cases of famine in genocide, they are both active. For more on that and humanitarian law, crimes of atrocity, and what many humanitarian law experts consider a textbook genocide in Gaza will link the Genocideology episode in the show notes. And your other books, moral tribes and you talk about features and bugs. Can you explain a little bit about what those features and bugs are and what those tribes are? You know, often when I tell people like, if you push with your hands, then it seems wrong. But if you drop somebody through a trapdoor with a switch,
Starting point is 00:27:11 and people kind of laugh, right? And that is a normative philosophical laugh. What you're laughing at is you're thinking, that doesn't make a lot of sense, right? I mean, the way as nice put it is, if someone called you from a footbridge and was like, Allie, there's a train and it's coming. There are five people and I might be able to save them. Should I do it? I'd have to kill someone and then you wouldn't say, well, that depends. Would you have to push this person with your hands or could you do it more indirectly? Like, that shouldn't matter. Yeah. So that's a kind of bug. In that it's like a hiccup of physics, our brain should not have to waste time mulling over. And likewise, when it comes to something that's maybe easier to defend, like, you know, caring about the people who are
Starting point is 00:27:52 immediately in front of us, like the child who's drowning in front of us, or even more so people with whom we have a personal relationship, we can understand that in more evolutionary terms, right? We evolved to be cooperative creatures. The group that is willing to pull its, you know, fellow tribe mates out of the raging river, that group's going to survive much better. Our moral, emotional dispositions are designed for this group teamwork, but they're not designed to save the lives of strangers on the other side of the world. Wasn't even possible for most of human history. And the goal from an evolutionary point of view is for you and the members of that group to survive and spread your genes, right? It's not about making the world
Starting point is 00:28:31 better in some objective sense. So the thought is that we can understand what we react to and what we don't. And from at least a certain detached perspective, we can say, you know, it seems like we might overreact to certain types of harms, like, let's say, physician-assisted suicide, where someone is in miserable shape and they're never going to recover, and they're just in a lot of pain, and they feel like their life has no dignity, and they feel like it's time to go, right? And interestingly, recently it was revealed that Daniel Kahneman, the father of, sort of, heuristics and biases and behavioral decision-making, chose to go to Switzerland to end his life. And I think it's not an accident that someone who studies decision-making would make that
Starting point is 00:29:16 kind of choice, because he understands what he'd call his System 1 intuitions and his System 2 reasoning, and is generally a System 2 kind of guy. And then perhaps, I think even more importantly, the fact that we are not moved by the suffering of people on the other side of the world in the same way that we're moved by someone who's drowning right in front of us, we should view that as underreacting. And then, of course, there are things like racism and tribalism more generally, and speciesism, where we don't care as much about people who we think of as different from us, as not part of our us, right? Or we don't trust them as much, or we're ready to believe lies about them much more easily, right?
Starting point is 00:29:56 And this is related to my main project these days, which is about bridging divides. So we're going to touch on that in a bit, but it's about stark political divides that are ruining the world. It's also about tango dancing. But I think understanding where our moral feelings come from can give us insight. But to me, you can't understand the origins of all of this and the way they work, and think, oh, we should just follow our intuitions. They're always right. Yeah. It's a direct line to the moral truth. Instead, we need to step back and think. And maybe where that puts us is a world in which we care more about other people, care more about other species, are willing
Starting point is 00:30:39 to make certain sacrifices when necessary, but also listen to our hearts when they tell us that we're possibly doing something wrong. Yeah. But I think that self-knowledge is incredibly useful. You're talking about how we can help people on the other side of the world, and we actually, we're going to get to listener questions, but we give to a charity every week of your choosing. So when it comes to charities, how do you decide? Well, we've created a donation platform that's supposed to help with this. So with the trolley problem, the trolley problem feels like an impossible dilemma. There is no happy solution to the footbridge case. Either you're letting more people die, you know, four more people dead than is necessary, or you're committing what feels like an act of murder, and unless you sort of change the situation, you're stuck with that. There's a similar kind of dilemma, getting more into Peter Singer's zone, which is about where you give and how you give. Most people, myself included, want to give from the heart. We want to give to things that we feel personally connected to. And if you love animals, that might mean giving to the local animal shelter.
Starting point is 00:31:42 Or if your grandmother died of breast cancer, you might want to give to a breast cancer charity, whatever. And that makes sense, right? And I, you know, want to support my local schools and food bank and things like this. But the charities that actually do the most good are almost always not the ones that are closest to our heart. At least if we're in the developed world, where there is more infrastructure and, say, fewer intestinal parasites and mosquitoes. Or in the case of the U.S., for example, no horrors of wars waged in our backyards or buildings blown to dust. And the difference between a typical charity, a typical good charity, and the charities that are most effective is enormous. The difference between a really effective charity and a typical charity, it's like a redwood versus a shrub.
Starting point is 00:32:29 It's like a hundred times, or in some cases, like a thousand times. Okay, like in a developed country like the U.S., paying to train a guide dog for someone who's blind or visually impaired costs like $50,000. A surgery in other parts of the world that can prevent people from going blind due to a disease called trachoma can cost less than $100. So that can be something like 500 to 1,000 times difference in what you get from your money. Now, this is not to say that we shouldn't support and care about blind people here, but surely we should take advantage of that opportunity if, you know, for the cost of training one guide dog, we can prevent 500 people or 1,000 people from going blind in the first
Starting point is 00:33:12 place. So huge differences. This is so hard. Yeah. Well, so, no, but it hurts to think about. We have the solution, because this is what I do. My wife and I, we give to local charities and things like that that we just feel a personal connection to. And then we do things like deworming treatments and vaccinating newborns in Nigeria and things like that. And so I said, well, why don't we just ask people: instead of saying you should be giving to more effective charities instead of what you do, why don't you do both, right?
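An aside for the numerically curious: the guide-dog-versus-trachoma comparison Greene made a moment ago is just division. The sketch below uses his round figures of $50,000 per guide dog and about $100 per surgery; the variable names are mine.

```python
# Rough cost-effectiveness arithmetic using the round numbers from the episode:
# ~$50,000 to train one guide dog in the U.S., and under ~$100 for one surgery
# that prevents blindness from trachoma.
guide_dog_cost = 50_000
trachoma_surgery_cost = 100  # upper bound; Greene's range implies it can be lower

# How many sight-saving surgeries could be funded for the price of one guide dog?
surgeries_per_guide_dog = guide_dog_cost // trachoma_surgery_cost
print(surgeries_per_guide_dog)  # 500
```

At a lower surgery estimate of about $50, the same arithmetic gives the 1,000-times figure at the top of the range he mentions.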
Starting point is 00:33:39 So we started running these experiments. And so the basic setup for our first experiment is, in one condition, it's the typical choice. That is, you can pick your favorite charity or this charity that's recommended by experts. So let's say it's the Deworm the World Initiative, where, you know, for a dollar, you can give a kid a deworming treatment, right? And what we found is that, you know, most people, like 80% of people or more, would choose their personal favorite over the expert effectiveness recommendation. That's the control condition. And then in the experimental condition, we give people three choices. You can give it all to your personal favorite.
Starting point is 00:34:14 You can give it all to the deworming charity that the experts recommend, or do a 50-50 split. And what we found was that over half the people did the 50-50 split. So more money ended up going to the highly effective charity when you gave people the option to split than when you forced them to choose. And we did a bunch of experiments to try to understand the psychology of it. And the gist of it is that when you give from the heart, it's not about how much you give. The difference between giving $50 to the local animal shelter or $100, it feels more or less the same. So if you give $50 instead of $100, then you've got another $50.
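Another aside: only two numbers in Greene's description are pinned down, roughly 80% choosing their favorite in the forced-choice control, and over half choosing the split when it's offered. With a made-up donor pool and made-up percentages for the non-splitters, the effect he describes can be sketched like this:

```python
# Hypothetical back-of-envelope version of the split-option experiment,
# with 100 imaginary donors giving $100 each. From the episode: roughly 80%
# choose their favorite when forced to pick one charity, and over half choose
# a 50/50 split when that option exists. The 35/10/55 breakdown in the split
# condition is a made-up assumption for this sketch.
donation = 100  # dollars per donor (assumed)

# Control condition: forced choice (80 donors pick their favorite, 20 the effective charity).
control_to_effective = 20 * donation

# Experimental condition: 35 favorite, 10 effective, 55 take the 50/50 split.
split_to_effective = 10 * donation + 55 * (donation // 2)

print(control_to_effective, split_to_effective)  # 2000 vs 3750 dollars
```

The exact percentages don't matter much; as long as a sizable chunk of donors who would otherwise have given everything to their favorite take the split, the effective charity's total goes up.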
Starting point is 00:34:48 And what you can do with that $50 is scratch a different itch, which is the itch to give smart and impactfully, rather than only doing the thing you have the personal connection with. So that's sort of the heart/head psychology there. And then we said, okay, well, what if we offer people an incentive and, you know, we'll add money on top? And unsurprisingly, people did this even more. It was like another 75 or 55% boost when we added money on top. And then there's the question of, okay, where's that money going to come from? So then we said, well, what if, after people have made their choices, we say, hey, would you take the money that you were going to give to that super effective charity that
Starting point is 00:35:24 you had just learned about and instead put it into a matching fund for other people? We found that a lot of people would do that. So we're like, huh, the math seems to add up. Let's give this a try. So you give to your favorite charity, any U.S.-registered nonprofit, and you have the choice to divide it with a super effective charity. And then there's this extra secret sauce, and it's that because others have donated to a matching fund, you can up your donation a wee bit knowing that it will be matched, and everyone wins a little more. Now, if you want to opt to be a matching fund donor, you can do that instead or in addition. So how and where do you get in on this, this hot benevolence? So Lucius and his developer friend Fabio Kuhn created this website called Giving Multiplier. And if you're home, you can Google Giving Multiplier.
Starting point is 00:36:14 You'll see it come up, and it allows you to do what we do in this experiment. But the gist is you pick your favorite charity, any registered 501(c)(3) in the U.S. There's a little search field there. And you say, okay, I'm going to give to whatever it is, my local animal shelter, which I know the name of. And then you say, okay, here are the 10 charities that we are supporting that are super effective. And it's things like distributing malaria nets or other malaria treatments, or the deworming charity that I mentioned, and other things. And then we have this cool little slider thing, and depending on how you allocate your money between the one that you chose and the one that's
Starting point is 00:36:51 from our list, we add money on top. And we add more money on top the more you give to the highly effective charity. And if you have a code for that, then you get a little bit higher matching funds. And I will shamelessly say that for listeners of this podcast, if you put in ologies as your code, then you get a higher matching rate. Oh, that's amazing. So we launched this in 2020, and we would have been happy if we, you know, raised a little bit better than a bake sale. But it went really well, and the long and short of it is we've been doing this for four years. We've raised over $4 million.
Starting point is 00:37:28 Oh, my God. And over two million of that has gone to super duper effective charities. So we're talking about, just from malaria nets alone, saving dozens of people's lives, thousands of children who've gotten deworming treatments as a result of this, hundreds of thousands of dollars in direct cash transfers that people in poor countries can use to put a better roof over their head or start a business and things like that. And this cycle has been going now for four years and, you know, saved lots of lives and raised millions of dollars.
Starting point is 00:38:00 And, you know, it started with that little experiment, and really started with kind of the inspiration from Peter Singer, the philosopher, whose ideas really matter. So we can, in your name, we can donate to the matching fund to help other people? I would not complain. If that's not too self-serving, yeah, that would be great. If you want to put money in the matching fund, that would be awesome. Oh, that's so great. That's amazing. And then also, if people want to go check it out, they can use the code ologies and they'll match a little more. Yeah. And what you can do is, like, make a tiny donation now, like put in like 10 bucks, try it out, see how it works. And then when it's holiday time and you're ready to, like, really do your good deeds for the year, we'll get back in touch with you
Starting point is 00:38:36 and we'll have some good offers. Oh, that's so great. I want to ask you some questions too from listeners who were excited that you're coming on, if I may. Please. So yes, Josh's charity of choice this week is the matching fund at giving multiplier.org and that matching donation will help incentivize more folks to give just a little bit more to their charity of choice and a super effective one. Now, to get a little bit of a boost in the matching fund percentage, you can go to giving multiplier.org slash ologies. And that link is in our show notes or on our website for the trolleyology episode. So that is a wonderful resource for people who are looking to give and have it go a little further. So thank you, Josh, for the research that went into that and the option.
Starting point is 00:39:20 Again, that's giving multiplier.org slash ologies. And thank you sponsors of the show for making our weekly donations possible. Okay, this next question asked by Lily Britt Klein, sandwich, Earl of Gramelequin, Hand the Bee, Al Ham, Lindsay Mixer, Kira Black, Jacob Morve, Karen Digger, and Quinn may make you think, oh my God. Let's talk a little bit about culture and religion. A bunch of people wanted to know, in Andy Pepper's words, is religion a thing because it's easier for our animal brains to outsource morality and spirituality to an institution, even if that means doing things that are clearly amoral like every holy war ever. Megan Ratcliffe wanted to know,
Starting point is 00:40:01 how does religion influence these moral quandaries? And also, Chris F. wanted to say, anecdotally, I found that folks that received a highly religious upbringing but left their faith as adults often have a stronger moral compass than those who continued following religion into adulthood. So modern religion in terms of moral character versus an innate desire to do good,
Starting point is 00:40:22 what's up with that? So let's talk about what is religion? Why does it even exist? Yeah. And the earliest things that we might recognize as religions, what they did was they often provided explanations for things that we didn't understand. Why does this bright ball of fire go around the world once every 24 hours and things like that? And also kind of answered questions like, well, what happens to people when they die?
Starting point is 00:40:45 And, sometimes I hear voices: is that people talking to me from some other realm? And it's kind of explaining the unexplainable. And then there was this kind of transition. And the story I'm telling here, I should say, comes from sort of brilliant analyses by researchers like Ara Norenzayan and Joe Henrich and others. So this is not just coming from me. There was this kind of invention of big gods, right? And what they mean by big gods is gods that know everything about what people do and what they're really thinking and why they do it, and care that humans treat each other well. And what that combination does is it provides a kind of guarantee of cooperation.
Starting point is 00:41:21 Because if I'm a Muslim and you're a Muslim, even if we've never met before, if we both know that we are faithful, then I know that I can trust you and you can trust me, because we both know what'll happen to us if we disappoint, you know, God in the way that we behave. And so religion has been a way to scale up cooperation and build trust among strangers, right? And this had an enormous effect on people's ability to trade with other people and exchange technology. So it is a social technology, right? And what religions do, it's a double-edged sword. It makes people more cooperative within the group. But the religion is out there competing for resources and souls or whatever it is with other groups that are either religious or not. And so religion can bind people together, but it can also divide people at the level of groups. Now, there are some religions that have tried to move towards a more universalist perspective, the most sort of straightforwardly so
Starting point is 00:42:27 being the Unitarian Universalist movement, which to a lot of people doesn't even feel like a, quote, real religion, because it doesn't have that kind of strong sort of metaphysics and us-ness of other religions, right? Okay, up until this moment, I had never heard of the Unitarian Universalist movement. But it's apparently inclusive of a bunch of religions, but adheres to none of them. And it's rather founded on a commitment to theological diversity, inclusivity, and social justice. And their values are just like love and fairness. And they say, you are welcome no matter where you are from, who you love, or what higher power you believe in, which is very different than the Catholic masses that I used to attend, staring up at a bleeding dead man and worrying about
Starting point is 00:43:11 going to hell if I said a rosary out of order. Now, for more on that, you can see our recent OCD episode. Yay. So religion is, I think, fundamentally about cooperation within its scope, which can be either very narrow or fairly big, and in some cases even cover the whole world. So it's a cultural invention, and it's a set of things that influence us emotionally. All of those rituals, all of those prayers, all of those parties, all of those dances, all of that stuff binds people together and makes them feel like a cohesive, cooperative unit, but often at the cost of making other people feel more distant, right? So for those of us who want to see a sort of maximally wide and inclusive world, then religion is both an opportunity
Starting point is 00:43:57 and a challenge. So yes, to community and doing good with it; no to starting wars about who is the most moral. Now, on the topic, tangentially, of original sin, Daniel Schmaniel said, I saw a study showing babies under two years old showed signs of understanding right from wrong. What's the explanation? Is there a genetic component to morality? And patrons Rachel May, Sarah King, and Hannah Bonanas all asked about nature versus nurture generally. Hannah Bonanas added that, having left a fundamentalist cult, I'm often trying to figure out for myself what makes a good person other than just trying not to hurt people. Chrysalis Ashton said, what a great topic. I'm taking an ethics class right now, and we've talked about this. My question is, what part of morality is innate and what part is learned, and do we even know?
Starting point is 00:44:47 So what I would say is, morality, first, is not like a thing in the brain. Like on Star Trek: The Next Generation, I'm showing my age here, but maybe, you know, the kids know this. Oh, my gosh, it's the best. During the pandemic, my family, we watched all seven seasons just straight through. It was, like, the greatest. That was our religion, basically. And so you have Commander Data, and he has his, like, ethics module
Starting point is 00:45:12 that was, like, added to him so that he wouldn't be like his evil twin brother, Lore. Captain, although the result of my actions proved positive, the ends cannot justify the means. But morality for humans is not a module, right? It's really our sort of whole social, emotional intelligence complex. What you see as kind of naturally arising out of human experience are certain basic cooperative tendencies.
Starting point is 00:45:36 That's a lesson you probably learn as a toddler, if you don't turn out to be a psychopath, right? That, like, toddlers are pretty violent. Like, if they were eight feet tall, we'd be in trouble. Then you learn, like, you're not allowed to behave that way, and you internalize that, right? So certain basic feelings about physical violence, lying, stealing, the stuff that, like, this group ain't going to work
Starting point is 00:45:57 unless we have certain boundaries that we've emotionally internalized. And then a sense of who's in and who's out, with varying degrees. Who do you owe things to? You know, when you have food, who do you have to share it with? Is it no one? Is it just your immediate family?
Starting point is 00:46:12 Is it everyone in the village, right? So innate or not? What do we think? So it's more like language in a sense. Like, is language innate or not? Well, no one comes out of the womb speaking Mandarin Chinese in the way that a gazelle might come out of the womb and be able to walk pretty soon, right? You have to be exposed to Mandarin in order to learn it. But you can speak Mandarin to a chimp all day long and they're not going to learn it, right?
Starting point is 00:46:36 So what we have is an innate capacity to acquire morality, but you don't just acquire morality like it's one thing. You acquire different flavors, different versions of it, in the same way that humans have genetic adaptations that enable us to acquire language, but language is not innate in the sense that we pop out talking. We have to be exposed to it. So it's a genetic predisposition to learn. And Josh says there is no nature versus nurture. Some things are pure nature, like your physicality, your anatomy, but all nurture relies on the nature that you have. You have to have the hardware to be able to run whatever is nurture. Okay. Yes. You mentioned Data, and a lot of people,
Starting point is 00:47:23 Policontra, Mark Rubin, red-headed scientist, genetic sore, biscuits and gravy, all wanted to know about AI. Paul asked, given the continuing emergence of AI, what would an autonomous artificial intelligence agent do? Mark Rubin wanted to know, has there been much work regarding moral dilemmas and AI? Would a self-driving car swerve into and kill a small child to avoid hitting a family? What happens if you've got a Waymo that's got to make a quick decision? What's it going to do? Right. This has been a hot topic for a while. And there have been different versions of this. Like, people first went to trolleyology when they started thinking about self-driving cars. Oh, okay. And then there are people who kind of said,
Starting point is 00:48:04 that's never going to happen. When are you ever headed towards five people, and then you can turn onto one person, right? And I think people were sort of taking the problem a little literally. But the more realistic version of this is things like this. So you're driving along on a two-lane road
Starting point is 00:48:19 and there's a cyclist in front of you going pretty slow by car driving standards, right? And you could maybe swerve around them but there's traffic coming the other direction. When is it okay to swerve? How close can you get to that cyclist? How much time do you feel like you have to give yourself, how far away does that oncoming truck have to be, right? We all, like everyone who drives, you can drive nicely, you can drive like an aggressive jerk.
Starting point is 00:48:45 No one can avoid that question, right? And what it really means to drive like a jerk is you are taking too many risks, especially with other people's well-being, but maybe also with yourself. So autonomous driving trolley problems are not these stark choices where there are exactly two options. So it's a more fluid kind of thing, with the same underlying tensions, right? And one of the key tensions here is between the well-being of the individuals in the car versus those who might be outside the car. There was this kind of now infamous episode where a Mercedes-Benz executive was asked, will these new autonomous Mercedes that you're developing, will they privilege the riders? And the executive said,
Starting point is 00:49:25 well, yes, because at least you know you can save the people in the car, so you should save the people in the car, right? But then people pushed back and were like, oh, so you've got these, you know, basically, like, bigoted cars that are going to only care about the people inside them. And then Mercedes said, no, no, no, no, no, no. That's not what we mean. No car should ever make any value judgments at all, which of course is actually impossible, but, you know, good PR, right? But critically, like, you know, someone who's inside the car might be much more protected than a pedestrian or a cyclist. So cars have to deal with this stuff. Now, it's pretty clear that we're not going to be able to solve these problems with a hard-and-fast
Starting point is 00:50:04 set of rules. So, like, if you're giving the car, let's say, simulations and it does different things, let's say it swerves around that cyclist and it doesn't hit the cyclist, but almost does. Is that a win or a loss? Is that something you want to reinforce with your machine learning algorithm? Or is that something you want to dissuade? So there are value judgments that are made in training. And for more on this, we have a recent AI ethics episode with Dr. Abeba Birhane, and I find it scary. And AI is obviously not our brains, but it's, you know, based on what our brains might do. And we had Dr. Lena Carpenter, who said,
Starting point is 00:50:40 oh boy, oh boy, oh boy. As someone who researches moral development and altruism, I am thrilled to see this ology. And in their research, they look at motivations behind pro-social behaviors. And Zoe Schultz asked, I wonder how neurodivergence has influence on moral dilemmas. They say, I'm autistic and have quite a black and white,
Starting point is 00:50:59 right and wrong way of thinking. And for them, they can easily pick a side in these moral dilemmas, but when I hear friends debating, they can't easily choose, and want to know, do neurodivergent people see or solve moral dilemmas in a different way? Grace Robesho asked about moral OCD, also known as moral scrupulosity, and wondered if there's any sort of research on, maybe on one side, psychopathy, and then on the other side of the spectrum, neurodivergence that has this very, sometimes this really moral bent. Do you look at those kinds of factors that might be innate in terms of how certain brains work?
Starting point is 00:51:36 You're asking a lot of questions. Yeah. So this is not research that I've done. But I can tell you, people have looked at lots of different neurodivergent conditions, right? Some of which would go under the heading of psychopathology, and others not. You mentioned psychopathy. This is something that's been studied. And what you find is that people who have been diagnosed with psychopathy are more likely to say that it's okay to push the guy off the footbridge, that they're more likely to give
Starting point is 00:52:01 that sort of utilitarian response. But we don't think it's because they care more about the greater good. I think it's because they don't have that emotional response that goes, ah, don't push, you know, don't hurt people, don't commit acts of violence, right? As a parallel to that, this is work that was done as an undergrad thesis by a student named Shin Chang, who was interested in this stuff. And she thought, hmm, I wonder how Buddhist monks would respond. Because there were sort of some teachings that would suggest that Buddhist monks might actually kind of make more of a utilitarian response. But if you ask most people, would a Buddhist monk push someone off the footbridge?
Starting point is 00:52:36 They'd say, no, of course not. Buddhist monks are very pious, good people, right? So she went to the mountain city of Lhasa and interviewed, I think, 48 Buddhist monks. And something over 80% of them said that it would be okay to push the guy off the footbridge. 80% of the monks were like, push them off the overpass. Off you go, sucker. And when you asked them why, many of them cited this sutra, this teaching, which describes a sort of advanced or, you know, almost enlightened being
Starting point is 00:53:08 who was in a situation where there was a murderer who was going to cause a lot of harm. And the only way to stop them was to kill them. And the person killed them not out of malice or hatred, but to prevent this harm, and actually with the expectation that it would be karmically harmful for him himself. But because he did it with that pure intention of promoting the greater good, then that was sort of part of his path to being a bodhisattva, an enlightened being on earth. And that's what we found with Buddhist monks. So you've got psychopaths and Buddhist monks both giving the utilitarian judgment. And what that tells you is that the same response can be given for very different reasons.
Starting point is 00:53:44 For the psychopaths, it's because they didn't have the emotional voice in their head saying, don't do that. And for the Buddhist monks, they have that voice, but they can also cultivate a kind of compassion. They say, but what about the other five people? And, you know, I hear both voices, but at least you're saving more lives. And so they would say it's acceptable. They did say about this sutra that this sutra is, like, not for the little kids. Like, you don't want people going around thinking that, you know, they can commit murder if they think it's for the greater good. So you need to kind of have those guardrails. Other people have studied other types of conditions. So there's a condition called alexithymia, which involves people not having good access to their own emotional states. And those people are more likely to give a utilitarian response. I mentioned earlier patients with different types of brain damage. So patients with damage to the basolateral amygdala or the hippocampus are more likely to say that it's wrong to push the guy off the footbridge, etc. I will say anecdotally that a lot of people who take a strong
Starting point is 00:54:42 utilitarian stance in their own lives, that there's a higher incidence of people who are on the autism spectrum. And some people have sort of posthumously diagnosed Jeremy Bentham, the founder of utilitarianism, as having autism spectrum disorder, or being neurodivergent in that way, as you might say. And the thought is that, you know, the utilitarian calculation is available sort of to reasoning. So one notable thing about Bentham: he was one of the first people, sort of in the Western philosophical tradition, to advocate for what we now call gay rights. So he wrote a paper in the late 18th century, so late 1700s,
Starting point is 00:55:23 arguing that, you know, from the principle of utility, from the idea that, like, is this actually causing any harm, that maybe there's nothing wrong with two men, you know, having a sexual relationship. So a Guardian article reviewing Bentham's writings, which were titled Of Sexual Irregularities, notes that Bentham was a big fan, big fan, of consensual sexual expression, saying it's one of life's greatest pleasures, and that, as a utilitarian, no action should be illegal unless it causes harm to others. And The Guardian adds that in one surviving letter to a friend, the philosopher joked that his rereading of the Bible had finally revealed that the sin for which God had
Starting point is 00:56:04 punished the inhabitants of Sodom and Gomorrah was not, in fact, anal intercourse, but the taking of snuff. So he and his secretary had consequently taken a solemn oath to hide their snuff pouches. But in grave seriousness, this was in a time when same-sex relations were punishable by execution in England, which is obviously doing a lot of harm. And currently, at least domestically, for me, we're seeing much more overt racist policies in recent years, lack of due process in deportations, key political figures convicted of fraud and in many cases violent sexual crimes. And yet there is increasing transphobia and homophobia, with the U.S. Supreme Court looking to overturn the constitutional right to same-sex marriage just this week. So it's shocking and gutting
Starting point is 00:56:52 to go so far backwards, so fast. And that was like insane, you know, culturally at his time, right? But he applied his principle in an impartial way and in my view sort of jumped ahead two centuries in moral thinking. Let's suppose he was neurodivergent. I would imagine that that would contribute
Starting point is 00:57:13 to his capacity to see that, because if you're very tuned in to the social world and what other people will think of you, you might be less willing to reach that conclusion and put pen to paper with that. Whereas if you're a bit sort of socially detached, but you've got a good thinking, reasoning brain, then you might get there. So it's a nice case where at least it's possible that his neurodivergence, if that's in fact what he had, was a kind of philosophical strength. Is there a right answer? Is the utilitarian answer the right answer, where you push the guy? Well, so, I mean, there's no agreement about this. Yeah, yeah, I know,
Starting point is 00:57:51 but I don't like the word utilitarian. I prefer to call myself a deep pragmatist, which I think better captures sort of my philosophical orientation. But no, this is highly controversial. And in fact, part of the reason why I and other people have spent so much time on these trolley dilemmas is because these are objections to utilitarianism, right? That I kind of had the thought, yeah, greater good, that makes sense. But then, you know, we have the debater asking me, is it okay to kill one person and give their organs to five other people? Those are very salient objections. And I wanted to understand the objections. A kind of unfortunate side effect of all of this is that, you know, I was studying the footbridge case because it's utilitarianism at its least appealing, right? And then
Starting point is 00:58:32 this work really took off. And then people started associating utilitarianism with pushing people off of footbridges, which is really kind of like addressing the most salient objections to it, but not the heart of it, right? Yeah. So, no, this is not about trolleys or trains or overpasses or footbridges. It's like a metaphor. So I want people to associate utilitarianism with providing opportunities in health care to poor people around the world and making sure that we're not, you know, torturing animals unnecessarily and things like that. To me and to Peter Singer, that's what it's really about. But so much focus partly due to me has been on these kind of horror cases, but it was done out of a kind of intellectual integrity. That is, if this is
Starting point is 00:59:17 going to be your philosophy, you need to defend it at its least appealing. And so that's how we got there, right? So an unfortunate sort of cultural narrative side effect of doing all of this, but I hope at least your listeners will understand that it really is about making the world better and not about shoving people in front of speeding trolleys. Of course. Speaking of culture and pop culture, 23 Skidoo wanted to know if you've played the trial by trolley game. Have you heard of this? I've played. In fact, we have a copy of it in my lab where I'm sitting. Yeah, we've done this at lab parties and other things. It's better than you'd think. Yeah. Yeah, it's pretty fun. Yeah, we have it too. As soon as we booked you to do this, my husband and I got it. And yeah,
Starting point is 01:00:02 it's a blast. But pop culturally, also Tommy McElrath, first time question asker, wanted to know if you've seen the good place. And if so, how much should you enjoy the trolley problem episode? Oh, God. Michael, what did you do? I made the trolley problem real so we can see how the ethics would actually play out. The thing is, I mean, ethically speaking. No time, dude, make a decision. Well, it's tricky. I mean, on the one hand, if you described to a purely utilitarian world huge. Oh, I thought it was great. They did a brilliant, amazing job with, you know, just demonstrate. And I actually did a little bit of consulting for them for later season, but not for the original trolley episode. No, I love the way that they dramatized it. Although it's interesting, and this
Starting point is 01:00:47 is sort of part of the pop culture meme of trolley is really just the switch case. And it's often just kind of a platform for like in the board game. Like, it's just, do you care more about this or more about that? Would you kill your dog in order to save 10 people? What about Hitler? You know, those things, right? But as a cognitive scientist and neuroscientist, the really interesting thing is the contrast between the switch case and the footbridge case. Okay, okay, I can do this. I am choosing to switch tracks, so that way I only kill one person. But they kind of managed to make the switch case very footbridgey
Starting point is 01:01:21 by kind of really sort of dramatizing the horror of running somebody over. And then my other favorite thing about that is like there's a movie theater in the background on the street and the movie that's showing is called Bend It Like Bentham, which I thought was absolutely brilliant. Oh my God. Jeremy Bentham, of course, the founder of utilitarianism philosophy and the aforementioned OG LGBTQ ally. Also, side note, went down a hole on this, but he asked that after his death, his body be dissected in front of an audience for medical research. Then he wanted it preserved to be shown sitting upright as if he were just hanging out in the living room.
Starting point is 01:01:59 And if you were to enter the student center today at University College London, you would run into a, a glass case with what looks like a wax figure of Ben Franklin. But no, that's Jeremy Benton's skeleton dressed in his usual, like mid-1700s, frilled shirt and nice dark jacket. His head is wax, though, because they mummified his actual head and then they put glass eyeballs that he had selected before death. But people thought it was like a little much. So then they decapitated it and they put his real head in a box between his feet. But then sometimes little collegiate rascals would steal the head and they'd hold it for ransom until a fee was paid to their charity of choice. Honestly, I think Bentham would have liked that. But yeah, currently his corpse is in a
Starting point is 01:02:42 lifelike seated position. Bent like Bentham. You know, last list or question, which I thought was such a good one, Aaron Burbridge, Han the Beat, Ninja Squirrel, and Tara King, in Ninja's words asked, how do you see a common connection to someone's identified political party and how they answer these moral dilemmas. You know, you mentioned abortion for the life of the mother and LGBTQ rights, things like that, but also in our country, the right tends to be much more religious, specifically Christian. So how are morals and greater good looked up, you know, through the lens of political parties? So people have looked at this with the trolley stuff. And my recollection is that there's a trend where, like, people who are more politically conservative
Starting point is 01:03:35 are more likely to say that it is wrong to push the guy off the footbridge, etc. But it's not a very strong case. I think the much more salient thing is that there's much more here that we have in common that divide us, that your typical conservative and your typical liberal are going to be feeling the same internal tension about these things, thinking it's better for five people to be alive and one dead than the reverse, and it sure does feel wrong to push somebody off of a footbridge. Like, that's, that's universal, right? And then there's like a little tendency of a trend. And maybe it's because people who are more conservative or more religious, they're more likely to trust their intuitions and less likely to kind of question it, although I do love the story about the monks,
Starting point is 01:04:16 the Buddhist monks. What about some of your recent research about getting people to maybe understand each other a little bit more? Yeah. So this is a This is the big one, and we've got the paper we've been working out for five years that's just coming out in nature, human behavior. We're very excited about this and really sort of building this stuff out. Okay. And this is a paper that Josh just published in June. It's titled, Defusing Political Animosity in the United States with a cooperative online quiz game. So where do I start in this? I'm going to let him start. So we've got to back up a little bit. You mentioned my book, Moral Tribes. That came out of it 10 years ago,
Starting point is 01:04:49 and this was me trying to put together all the philosophy that I've been thinking about and all the science. It was a successful book, but it didn't spark a global philosophical revolution. Bummer. And I was like, well, what? Maybe I had the wrong theory of change.
Starting point is 01:05:04 And so I started asking myself, instead of trying to unite the world's tribes or reduce tensions between different peoples by getting everybody to agree on a philosophical outlook, maybe we can work on people's thoughts and feelings in a more direct kind of way. On the biological side, everything around us is ultimately about mutually beneficial cooperation.
Starting point is 01:05:27 I mean like molecules come together to form cells. Cells come together to form colonies and multicellular organisms. And organisms have organs that cooperate and individuals cooperate with each other in small groups, in tribes, in chiefdoms, in nations, occasionally in United Nations, every level from molecules up to nations in the UN, it's about parts coming together that can accomplish more together than they can accomplish
Starting point is 01:05:53 separately, and that's why they work. Now, it's not all like, you know, unicorns and rainbows that the units are competing with each other at each level. So it's cooperation and competition at increasing levels of complexity. That, in a nutshell, is the story of life on Earth. And so from a biological point of view, the way to bring people together is to have them be on the same team. Social scientists, and I'm partly a social scientist, in addition to being a philosopher, reached the same conclusion. And this goes back to Gordon Alport, who sat in the building that I sit in now in the 1950s, and developed what's called the contact hypothesis, which is the idea that if you want to break down barriers between races,
Starting point is 01:06:32 between people of different religions, they need to be in touch with each other. And it needs to be a kind of in a cooperative sort of way. This is not exactly how he put it. His contemporaries, Sharif and Sharif explicitly talked about, you know, kids at a summer camp who were either put on separate teams and made to fight with each other or they had to team up to pull the truck out of the mud and, you know, came to the same conclusion there, sort of compatible with this idea. So this was the Robbers Cave experiment in the early 1950s, which shipped a group of pretty
Starting point is 01:07:01 homogenous waspy boys, like, around the age of 11, to a figure. summer camp that their parents knew really was a psychological experiment. And there were two groups of campers and they hated each other until they had to help fix the common water supply or, yeah, tug that truck bearing their food rations from a rut, which was all set up ahead of time like a reality show. But then they all became buddies. But the overarching idea being that cooperation toward a common goal increases respect, it reduces tension, and it builds bridges. Probably literally because bridges are really large. So I was like, okay, if the biologists and the social scientists have all known that
Starting point is 01:07:38 cooperation, teamwork, like, this is the heart of it, why haven't we solved this yet? And there have been lots of historical cases where this point has been made. So when people were first, in World War II, we're first talking about integrating the U.S. military, a lot of people, you will never get white people and black people to fight in the same unit. And they had people making these dire predictions of like people turning on each other in the And instead, what they found is that these integrated units were great. And it really changed people's racial attitudes. Because when you put people on the same team and their lives are at stake and there's a job to get done, they not only do the job, but they become like brothers, right?
Starting point is 01:08:15 Or sisters, as the case may be. Or non-binary buds, I gotcha. And in sports and things like that. And in some sense, every modern city where people from different religions and backgrounds and races come together and work together is testament to this idea that people's attitudes shift through working together, through cooperation, either sort of tacitly or very directly, like in the same job. So it's like, okay, so why haven't we solved this in some more systematic way? So we got to thinking, like, what do you have to do? Well, you need something that works, and we think, you know, mutually beneficial cooperation teamwork is the key. It's got to be done in a way that's scalable. And today, that really means digital, right? And it needs to be something that people are motivated to do,
Starting point is 01:08:58 which you can think of as fun, right? And we said, okay, to me, the center of that Venn diagram is a quiz game. And so we created this quiz game. Our first work was on Republicans and Democrats, where we would say, okay, we're going to pair up a Republican and Democrat together. We have them answer these quiz questions, and they're connected by chat, and they're in the same boat. They have the same score.
Starting point is 01:09:22 They both win money together or lose money together, depending on their answers. They have to, you know, agree on an answer and submit it, right? and they get the money if they're right and they lose it if they're wrong and if they give different answers then they're automatically wrong and so we wanted questions that would really promote teamwork so we had sort of two types of questions like this so one are kind of cultural questions where one side is more likely to know the answer than the other right so I don't I'll try you alley what's the name of the family on the show duck dynasty do you know oh oh I know this yeah I can't think of
Starting point is 01:09:59 I love it. I can't think of it. Oh, my God. The Duggers? It's Robertson. Not the Duggers. Not the Duggers. Yeah. Sorry, I didn't mean to jump the gun there. But, you know, a lot of Republicans are more likely to know the answer to that question. Yeah. We sort of figured out, like, what are the foods and movies and TV shows that Republicans are more likely to watch or know about and liberals, right? Questions like that. But ask Democrats who Leslie Knope is or if there are Miranda, and answers just flow much faster. But, yeah, then, of course, there are questions about some political myths, like many people that are left-leaning think that assault weapon deaths outnumber handgun deaths. But handgun deaths are more common in the U.S. many people die by suicide by them. We cover that in a suicidology episode with Dr. Quincy, Mayfren-Lazine. So from pop culture to the most serious of social and economic issues. And then we have questions that are kind of more political in nature. You ask about things like rates of crime among immigrants, conservatives are more likely to think
Starting point is 01:11:00 that it's very high, Democrats are more likely to think or liberals are more likely than it's low. And in that case, the liberals are right. And so we have these questions where everybody gets to be right and everybody gets to be wrong. And not everybody answers always like in keeping with their stereotypes. But on average, people play, they had the experience of, I know some things, and there are some things I don't know, and my partner knows some things that I don't know. They report having higher levels of respect for the other side. They say, you know, those liberals and conservatives can make valid points, more open to leaders who support kind of political compromise. And some of these effects, we test people the next day, the next week, the next month, four months later, some of these effects we see lasting four months from playing the game one.
Starting point is 01:11:44 Wow. So the paper's abstract reports that, yes, gameplay of let's tango.org improves democracy-related attitudes. And it also receives high enjoyability ratings, which may increase motivation to engage with this intervention, they say. And in all the ones we've done so far, we find that when people do this even for 20 minutes, first of all, people enjoy it and we see positive effects, at least immediately. And we're starting to do work with employees at businesses.
Starting point is 01:12:14 We're starting to do work with Jews and Arabs in Israel. We're building a game for Hindu and Muslims in India or people of Catholic and Protestant descent in Northern Ireland. Wherever there's an us versus them kind of conflict or tension, put people on the same team and let them have that cooperative experience. And we are now working on bringing this out into the world. So if you go to let's tango.org, you can sign up, you can put your email in there,
Starting point is 01:12:42 and we'll let you know when there's a game that you can join. And I hope we'll get to the point where we can have games going on all the time. The research suggests that this is a way to bring people together, and there's no limit. I hope that a year or two from now will have had millions of people had this positive experience of cooperating with people with whom they disagree politically. Also, anyone who wants to throw money at this, please let me know we need to build this out. I think it would be a huge change in our politics. there will be people who have conservative worldviews and liberal worldviews and people who are
Starting point is 01:13:15 religious and people who are not. That's always going to be there. But what doesn't have to be there is the sense that those other people are untrustworthy and unworthy of respect and that we can't share power with them because of the terrible things they'll do if they're given the chance, right? And it's that conspiratorial sense that they're an untrustworthy other undermining me and my people and what this country is supposed to be that's what's so toxic and we can have our
Starting point is 01:13:45 disagreements about policy issues deep, important, powerful disagreements but not have that toxicity and that's what I hope that the game when it gets out there will do and I know you've been working on this research for five years on or off the record anything changed
Starting point is 01:14:01 after January 20th were you like oh shit it's worse than I thought strangely not that much changed after January 6th or January 20th of this year we've been a divided country for a while I mean this is this is not new this sense that that we have different conceptions of who counts as us so what's happened in recent years is different and you do see it in the numbers like polarization and distrust it's up and the good news is that the game it's not an overnight fix, but it moves the needle. People play the game once. We see effects four months
Starting point is 01:14:41 later. And there's no reason why you can't play it lots of time. So to try that, gather up your liberal and your conservative associates to sign up at let's tango.org for science, and I don't know, maybe to prevent a civil war and the collapse of democracy and the crumbling of the Constitution. Speaking of horrors, what is the most frustrating thing about being a social scientist and a neuroscientist and a philosopher? What's the most frustrating thing about this trolley problem and about examining ethics? What sucks the most? Impatience, you know? Okay.
Starting point is 01:15:20 I feel like it just takes so long to learn things and to solve problems. Academics are great at slowing things down. I mean, we have internal review boards. We might have the answers, not me personally, but collectively, we might have the answers. And what if we don't get there in time, you know, what if, you know, the complete collapse of American democracy or the nuclear war or whatever it is like comes before we figure out not just the basic science, but I said, it's an engineering problem. Like, we have to figure out how to build these things.
Starting point is 01:15:52 I feel like, to me, it's just, gosh, how can we solve these problems as fast as possible? that's what kind of just drives me nuts. What about the best thing, as long as we're doing superlatives? I mean, it's just an unbelievable privilege to work with the people that I work with. The paper that we're about to publish, this is with a graduate student named Lucas Woodley, who's next door, who's just a joy working with him. And Evan D. Philippus was the grad student who started this work. He's amazing. And Shankar Ravi is our tech guy on this, and he's been great. So working with these people and the people of the global development incubator, I don't listen, Andrew Stern is the head of it
Starting point is 01:16:29 and Therese Semple-Smith has been working on this for a long time and I don't want to go down the whole list but like it's great and also just like the privilege of being able to spend my life like what I get to do for my job is to try to as a scientist
Starting point is 01:16:44 sort of unlock the mysteries of the mind and then as a kind of applied scientist try to bring those lessons out in the world I mean I just feel so lucky that I get to do this and I just like I just don't want to miss any opportunity, you know. So yeah. So I'm just incredibly lucky that way. When I heard that trolleyology was a thing, which the fact that it's an ology works so well for me. I didn't have to
Starting point is 01:17:10 come up with an allergy. I was like, yes. But I heard about it from my nephew, from Mr. Bernardi's class in Easton, Connecticut. So he's, my nephew's 12. And so they're using your work and trolleyology to talk about morals and ethics at that age. So if people are interested in this stuff, if you want to give in a way that you support your favorite charities but also can have huge impact, then go to Giving Multiplier
Starting point is 01:17:40 and use ologies as your code and you get extra matching funds. And if you want to participate in helping unite the United States and sort of foster mutual trust and respect, then please go to let's tango.org. you can sign up to play games and try to bring along people who are different from you so that we get all of America playing this game. I'll send it to all my friends in California and my relatives in
Starting point is 01:18:05 Montana. That's perfect. That's perfect. You should get an equal number. Yeah. This has been great. Thank you so much. Thank you. Thank you. It's such a joke. Okay, so ask academic people ethical questions and steer yourself down the tracks to let's tango.org to sign up for some across-the-isle quiz gaming and givingmultipler.org slash ologies to donate to your cause of choice and one that you may never have heard of. And you can get some matching funds at a special higher rate for ologites at giving multiplier.org slash ologies. So thank you so much, Dr. Joshua Green, for setting that up and for this chat and for all the work
Starting point is 01:18:46 you do to make this place the better one. You can find links in the show notes to his work and you can tell everyone you know to sign up for Let's Tango.org. You can find links to the things we discussed at alleyward.com slash ologies slash trolleyology. We are at Ologies on Instagram and Blue Sky. I'm at Allie Ward on both with 1L. Again, if you need any kid-friendly episodes,
Starting point is 01:19:06 you can check out Smollogies, anywhere you get podcasts. Ologies merch is available at Ologiesmerch.com. Thank you patrons of the show for all your great questions that you submitted via patreon.com slash ologies and for supporting the show since before day one. Aaron Talbert admins our Ologies podcast Facebook group. Aveline Malick makes our professional transcripts. Kelly Ardwire does the website.
Starting point is 01:19:27 Noel Dilworth greases our lovers as scheduling producer. Susan Hale is our managing director who keeps us on track. Surviving our barreling toward the publish button every week are brave editors, Jake Chafy and lead editor, Mercedes Maitland of Maitland Audio. Nick Thorburn is the Whistler who made the theme music. And if you stick around to the very, very end of the episode, I tell you a secret that you may or may not want. want to hear. So last night, I got home from a trip in Chicago to go to a lovely wedding. And we were so
Starting point is 01:19:57 tired, your pod mother, Jarrett, wanted to rewatch a movie. So we put on Fight Club, which I hadn't seen in years. It came out in 1999. And two things struck me, not literally. They didn't strike me that way. But in this episode, we talk about the three factors of violence, right? And the third is that the victim does not want the harm. So I was like, huh, weird. You know, I guess despite watching teeth tinkle around on the concrete and like clots of blood sliding out of lips. Is that not violence? Interesting. Someone has probably written a whole dissertation on that. But speaking of ethical quandaries and impacting the earth and billionaires who could save it, but nah, there's a scene in Fight Club where a character played by Jared Lato is leering at this
Starting point is 01:20:43 newscaster on TV. And I was like, wow. Lauren Sanchez, former newscaster for real on TV, but now Jeff Bezos's new wife in Fight Club, being called Hot by Jared Lato. And that is a whole other origin story and dissertation for you. But here we are, 2025. Okay, bye-bye. Hackaderminthology. Pomeology, cryptozoology, lithology, nanomology, meteorology, meteorology,
Starting point is 01:21:16 meteorology, morphology, nephology. Seriosity. Cellicology. Why don't you just tell me the right answer? Well, that's what's so great about the trolley problem is that there is no right answer. And in banana pants!
