Modern Wisdom - #484 - Cosmic Skeptic - 8 Impossible Thought Experiments

Episode Date: June 9, 2022

Alex O'Connor is a philosopher, podcaster & a YouTuber. Philosophy is hard. Ethics are hard. Working out what is moral is hard. Today we get to put our mental muscles to the test with some of the most... challenging thought experiments in moral philosophy. Expect to learn why brain tumours might be a good way to learn what is actually moral, whether ethics is just an expression of emotion, whether we can kill someone to stop them nuking a city, why it might be best to just not have any more children, whether an expensive education is cheating, what it means to say that someone is morally responsible for their actions, why Alex wore a suit to a boat party and much more...

Sponsors:
Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/
Get 15% discount on the amazing 6 Minute Diary at https://bit.ly/diarywisdom (use code MW15) (USA - search Amazon and use 15MINUTES)
Get 10% discount on your first month from BetterHelp at https://betterhelp.com/modernwisdom (discount automatically applied)
Get 30% discount on your at-home testosterone test at https://trylgc.com/modernwisdom (use code: MODERN30)

Extra Stuff:
Watch Alex on YouTube - https://youtu.be/gcVR2OVxPYw
Subscribe to Alex on Patreon - https://www.patreon.com/CosmicSkeptic
To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

Get in touch:
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
YouTube: https://www.youtube.com/modernwisdompodcast
Email: https://chriswillx.com/contact/

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 What's happening people? Welcome back to the show. My guest today is Alex O'Connor. He's a philosopher, podcaster, and a YouTuber. Philosophy is hard. Ethics are hard. Working out what is moral is hard. Today, we put our mental muscles to the test with some of the most challenging thought experiments in moral philosophy. Expect to learn why brain tumors might be a good way to learn what is actually moral, whether ethics is just an expression of emotion, whether we can kill someone to stop them nuking a city, why it might be best to just not have any more children, whether an expensive education is cheating, what it means to say that someone is morally
Starting point is 00:00:40 responsible for their actions, why Alex wore a suit to a boat party and much more. Don't forget that you might be listening but not subscribed in the only way that you can ensure you will never miss an episode every Monday, Thursday and Saturday when they go up is by pressing the subscribe button on Apple podcasts or Spotify or wherever else you are listening. The back end of this month has some of the biggest guests that we've ever had on the show, and I absolutely can't wait to do it. But you're going to miss it if you've not hit subscribe, so make sure that you go and
Starting point is 00:01:10 press it. I thank you. But now, ladies and gentlemen, please welcome Alex O'Connor. I like so kind of welcome the show Chris Williams and thank you so much for having me again yet again. Thank you for joining me here in Austin. How is a philosophy graduate, YouTuber, podcaster, and now wake surfing extraordinary? Yeah, my wrists feel as though they're about to come off. I don't think I've ever used this particular muscle before.
Starting point is 00:01:57 That's a lie. We both know that that's a lie. Yeah, well certainly not this. Yeah, that's right. That's right. Yes, today you turned up to a boat trip wearing pretty similar outfit to the one that you're in today. Yes, I'm trying to live out the philosophy that there is no situation in which you can't
Starting point is 00:02:19 wear something resembling a suit. Maybe you have to leave the jacket at home, maybe at a push. I was just a little bit warm, but shirt, chinos, spatter casual as it gets. You did good yesterday. I can't believe you've shown up to a philosophical conversation wearing a t-shirt and shorts. Well, I mean, this is my uniform, you know? Yeah, well, well, hopefully by the end of this, we're going to be much more philosophically entwined, and you'll begin to understand how fun it is to be a bit pretentious about these matters. It's always dressed formally. Quite right. Okay.
Starting point is 00:02:51 So, the last time that we spoke on the show, we were talking about some moral quandaries and some ethical dilemmas, and I really enjoyed that. I like the opportunity to do thought experiments. It means that people can have their brains fried at home as well as me. Problem being that I am the one that is publicly the most stupid, right? When these questions get asked and you say, well, what do you mean by kindness? And then I have to try and think of something. So go gentle with me today is my request. I'll try my best, but there's, I mean, the implication that I'm any better is a mistaken one, I think, the point of these questions is in many ways to demonstrate that there is no
Starting point is 00:03:31 answer to these questions, or at least that if you have an answer, there's no way to really settle the question in your favour. We don't really have a better grasp of the good or the just city than did Plato and theents. We haven't really progressed very much. And so it seems a bit futile to be discussing this kind of stuff. But as they say, the wise of every generation discover the same truths. If there is such thing as moral truth, it seems to be something that's out there and graspable by individuals as they go throughout their life. It's not going to put you in a better place than any of your ancestors, but it will put you in a better place than
Starting point is 00:04:09 you were yourself a few years ago. So they're still worth asking and answering to see what you think about these things, but don't expect to become a moral expert. It was a wonderful question on an exam paper for when I was studying philosophy and theology. I can't remember, I think it was on the ethics paper, it must have been, and I can't remember the exact wording, but the question was something like, does studying ethics make you an expert in ethics, or does it make you a better person or something like this? And if not, what's the point? Because of course, you can study ethical theories, you can have an answer to every single ethical query. But is that
Starting point is 00:04:50 going to make you a better person? In many ways, it might make you a worse person. What's your experience? You're more able to rationalize yourself out of moral obligation. You're able to get away with things just by convincing yourself somehow that they're ethical because you've got all of these ethical theories running around your head. My experience is that I have a much better understanding of my own moral intuitions. It feels like I kind of get to know myself when I study what's actually going on in my brain when I think something is wrong.
Starting point is 00:05:18 It's a very distinct feeling in your brain. It's distinct from anxiety or sadness or anger. It's distinct from propositions like this chair exists. It's a very specific feeling. It's like trying to pin down what that is, what it's nature is, what it's grasping at helps you to know yourself a bit better, but I don't think it helps you to act upon them any more strongly. Daniel Kahneman that wrote the thinking fast and slow got asked by Sam Harris on stage after all of these years learning about cognitive biases and how irrational the human mind is has it made you any less prone to falling prey to these things than you'll thought about it for a second.
Starting point is 00:05:52 No, not really. You know, okay, well, it seems like the people that spend a lot of their time thinking about this stuff understand it a lot better, but that it does seem to be very ingrained. Just going back to the the wise of every generation discovered the same truths, which is a great quote. Why do you think it is that we need to rediscover the same truths? Why is it that in the same way as technology, you know, we're not rediscovering the wheel, we're improving upon the wheel, in iterating on top of it? Why is it that it does seem a little bit like the same questions get asked in an
Starting point is 00:06:25 unsightly factory answer comes back? That's an important question. I don't think it's always the case that it's unsatisfactory. Some people are perfectly satisfied in the ethical conclusions that they come to. It's just that they're not going to be universally accepted. I suppose with something like the scientific method, you have this process of learning, you have this building block that you have to teach to the next generation and then they build upon it and that's the idea. And every generation gets taught those building blocks a bit more quickly for that reason. You learn a bit more at high school level
Starting point is 00:07:04 because science has progressed that the people at PhD levels You learn a bit more at high school level because science has progressed that the people at PhD levels are going a bit further and putting a bit more icing on the cake. Ethics can't really do this. I mean, it can. People are doing PhDs all over the world in very specific, as of yet, sort of, undiscussed ethical dilemmas and qualms and areas, but realistically, it's because coming to terms with ethics is very much, in many ways, it's kind of like how everybody in each generation will need to learn how to live with themselves in their lives. And people are kind of coming terms with the person that they are trying to figure out how best to live their life, trying to figure out how to be happy. This isn't something you can just learn and then teach to your children. It's something they have to discover for themselves.
Starting point is 00:07:49 It seems to be something a lot more personal. It's informed by experience. It's informed by intuition. If you're talking to somebody about an ethical issue, you might find that their views on an ethical issue are almost entirely dependent upon the experiences that they've had. At the very least, they'll be heavily affected by it. If you're talking about the ethics of theft, with somebody who used to be homeless and used to steal in order to feed their family, they're probably going
Starting point is 00:08:14 to have a different idea of what it means to the, the ethics of the fevery and this kind of thing. Of course, ideally, it wouldn't be this way, because we'd be able to detach ourselves from experience and think hyper rationally about ethics. But this is one of the reasons I've come to think that I subscribe to a view called ethical emotivism, that ethics is just an expression of emotion, which I think ties in very well with this observation. And so if that's what's coming on when we're doing ethics, then it makes sense that it's not something you can just teach to somebody else in a book. It's something that has to be lived. Why is ethics an expression of emotions?
Starting point is 00:08:52 Well, there are there are lots of reasons for this. I mean the the emotivist position is really not so much a a metaphorical theory of what good itself is, but rather what's going on in someone's head when they say that something is good or when they say that something is bad, it's a famously difficult thing to define. The most famous case for emotivism, it's a theory that's actually kind of gone out of fashion of late. It was originally put forward by AJ Ayer in the 20th century in a groundbreaking book called Language Truth and Logic, which stated that the only things that can be meaningful are those which are either empirically verifiable.
Starting point is 00:09:34 You can prove them by observation or things which are just analytically true, like that there are no married bachelor's. That's just taught a logically true. You don't need to go and observe every single bachelor to know that they're unmarried. These are the only things that can be meaningful. Someone came along and, well, it was A.O. himself preempting the objection comes along and says, well, what about ethical claims? These seem to be meaningful. People seem to mean something when they say murder is wrong, but it's not empirically verifiable that murder is wrong. What are you observing? What is it about murder? Where is this quality of wrongness within an action? That doesn't really make sense.
Starting point is 00:10:09 But it's also not analytically true. It doesn't follow logically from murder that murder is wrong. It's not a topology. So it's meaningless, right? And A, I think, well, maybe there's something else going on. Maybe when someone says murder is wrong, what they're really saying is something like,
Starting point is 00:10:24 boom, murder, crudely. So for A, writing murder is wrong, what they're really saying is something like, boom murder, crudely. So for air, writing murder is wrong, is basically the same as writing murder followed by an exclamation mark in an angry emoji. It doesn't add any propositional content. It's just an expression. And this is emotivism, but there are other forms of non-cognitivism, the idea that ethical statements are not cognitive, that they're not true or false in the way generally thought of, because of course emotions, expressions of emotions can't be true or false. Some people think that rather than being something like
Starting point is 00:10:54 boo murder, it's more like don't murder. But when you say murder is wrong, you're expressing something like don't murder. That can't be true or false. Don't murder isn't true, nor is it false. Likewise, boo, murder is not true or false. It doesn't have true value. It's just an expression.
Starting point is 00:11:14 This is the emotive disposition. And I just think that it offers a better account of what's going on in people's heads if they pay attention to the basis of their ethical intuitions. It seems just to be at root some form of expression. The rationalization that goes on, the kind of, well, premise, premise conclusion, well, if we accept this theory, this entails this conclusion, and this is, you
Starting point is 00:11:34 know, sure, there's all that going on, but it's all based upon an expression of emotion. Look at how much of, look at how so much of ethical decision-making or, as you say, like the solving of dilemmas, if you take the utilitarian position, that the best thing to do is to maximize pleasure. Famous criticism of utilitarianism is, well, would you be in favour of killing one person to harvest their organs? Because you've got five other people who need each organ in order to survive They need an organ transplant, but no one there to give it to them But you've got this one guy who walks in and he's got all the organs
Starting point is 00:12:11 So we kill the one person give it to the five Of course there are answers to this there are answers to why this might be wrong even on utilitarianism But the basic idea is like well you wouldn't do that right? And that would be that would be terrible that would be terrible. That would be horrible. And so the theory must be wrong. But on what grounds are you saying that because utilitarianism commits us to the view that we should be killing one person to save five? So, why is that a criticism of utilitarianism? Well, it's only because when you hear that example, when you hear what it leads to, you have this feeling of kind of like,
Starting point is 00:12:50 no, don't do that, boo that, something, something like that. And you might think, well, no, actually, you know, the reason I don't like that is because it will actually contribute to like fear in society, because people will be scared that they're going to have their organs. It's like, okay, like, so, like, why don't you like that? Ultimately, I think it all breaks down to something. If you pay attention to the nature of the feeling in your head, it's something that belongs in the category of emotion. It's just something like, you, or gross, or no, or boo, something like this. It's not one of those things. Common misconception of emotivism.
Starting point is 00:13:20 Ethical claims don't map on to emotions, like anxiety and sadness and boo, rather they just belong in that category, but it's own unique feeling. I think ethics. Imagine we lived in a world, Chris, where there was no word for anxiety. She didn't have a word for it. It's like, okay, so someone feels this thing, anxiety, and they're trying to describe what it is that they're trying to describe what it is at their feeling. They're like, well, it's kind of like, it's kind of like, it's kind of like excited, but sad and like bad at the same time.
Starting point is 00:13:51 It's like bad, it's like, they'd be kind of dancing around. They're saying, it's not quite this, but it's a bit like this, a bit like this. It's kind of somewhere in the middle. And this is a very unique feeling that we have towards, in the way that we might feel anxious towards public speaking or sad towards the death of a friend. This is a very unique feeling that we have towards seeing somebody steal from a homeless man. What is it? Well, it's kind of like, it's kind of like disgust, it's kind of like boo, it's kind of like don't, it's kind of a bit like that, but it's not any one of those
Starting point is 00:14:22 things. It's somewhere in the middle and we don't have a word for it. Well, I just think that we do have a word for it, and that word is wrong, but that it belongs in that category of thought. It's just within that context, and that's how we should understand ethical statements. But that's my view. The last time we spoke, you seem to think that ethics was essentially a project of minimizing suffering. I wonder if that's still an intuition that you hold to. That was the intuition I've got.
Starting point is 00:14:51 I feel like you're now making me commit to some ridiculous thought experiment that you're about to put in front of me and say, well, that's interesting because if we're going to try and minimize suffering, then dot, dot, dot. The premise of our conversation, Chris, was that you wanted me to bring along some ethical quandaries for us to work through together. And hopefully, finally, after thousands of years put to bed, I just think it's a good way to start thinking about ethics to look at some, some, you know, case examples. That's good.
Starting point is 00:15:23 There's a wonderful, if we take this kind of utilitarian position of minimizing suffering, and the reason it's good to start there is because at least in a secular context, most people start here. Most people think ethics is something about minimizing suffering or maximizing pleasure. That is, the thing that is right is the action which minimizes suffering or maximizing pleasure. That is, the thing that is right is the action which minimizes suffering, or maximizes pleasure. Maybe these are the same thing. This is the utilitarian position,
Starting point is 00:15:52 at least one version of it, the crude utilitarian. So there's a wonderful thought experiment, which comes from Roger Crisp, who has a wonderful review and analysis of John Stuart Mills utilitarian. If you're trying to read utilitarianism by Mill, it's worth reading Roger Crisp alongside him. It's just the best kind of introduction to it. It's the book that was set for all undergraduates to read at Oxford as well. Philosophy undergraduates. He comes up with this example which he calls the rash doctor.
Starting point is 00:16:37 So imagine for a second that there's a doctor and there's a patient that he needs to treat that's in pretty dire need maybe there at the, you know, on the brink of death or something. And the doctor has two options. There's like pill A and pill B, the red pill in the blue. No. Pill A and pill B, the pill A is one where, if administered, it has a 99% chance of failure. Call it a 99.9% chance of failure. It's just going to kill the patient in a terrible agony. But it has a 0.1% chance of restoring him to perfect health. That's pill one, pill B, pill two, whatever, B one, I don't care, is a pill that is basically the reverse such that it has a 99.9% chance of success, but it will only restore the patient to let's say 95 percent health. So, it's all pretty good, you know, like a perfectly livable life, just not quite 100 percent,
Starting point is 00:17:30 but nothing that would be complained about. And it has a 0.1 percent chance of failure and killing them painlessly and agonizingly. Okay, so these are the two options. Now the doctor chooses the first pill. The one that's going to have this overwhelming probability of agonizingly killing the man, but it works and it resourses him to 100% health. Did the doctor do the right thing? Is the question. In retrospect, it depends on whether you could have run the experiment again.
Starting point is 00:18:02 Can we do this again? Can we see what would have happened? Problem being that you're never going to actually get to split us this experiment and work out whether or not you would have killed him with pill B. It ends up being probabilistic utilitarianism. Exactly. And so what we end up with is a situation of, I mean, I think most people would want to say
Starting point is 00:18:21 that the doctor should choose pill B. It seems clear to most people. Again, it might be a clash of intuitions, but if you have these two pills, 99% chance of certain death, 1% chance of 100% health versus 99% chance of like 95, or even like 99% health, and only a tiny slim chance of killing the patient, surely the second one is the right one to do. But of course, if we say that the right thing to do is that which maximizes pleasure, or that which minimizes suffering, then we'd be committed to the view that if he chose pill B, even if it works, if pill A would have worked, he did the wrong thing, which seems weird.
Starting point is 00:19:03 It seems like we want to say that there's some sense in which you should choose the second and as you quite rightly identify We should be probabilistic about this. This is where you can distinguish between actualists utilitarianism and probabilistic utilitarianism So maybe what we should do is not what actually maximizes pleasure But what will probably maximize pleasure? But then that's a little strange, because if the reason why we're trying to maximize pleasure or minimize suffering is because we believe that there's just something about suffering, there's something that's moral truth of the universe that minimizing suffering is the right thing to do.
Starting point is 00:19:37 Like what right do we have to add this probabilistic qualifier to it, except because it's kind of practically difficult to swallow that pill, if you will. It's like, okay, we're just kind of letting our practical considerations what override our considerations about the very nature of good and pleasure itself. That doesn't seem quite right. Can we just say that what is good is what will probably maximize pleasure? That doesn't seem like a very steadfast rule of the abstract. Very precise, is it? So maybe then we need to distinguish between the criterion of the good and
Starting point is 00:20:10 the decision procedure. That is, the criterion of the right, the thing that makes something good is still what actually maximizes pleasure, is just that the best way on the whole to achieve that is to adopt in our decision procedure, that is when deciding how to make ethical choices to take a probabilistic approach. So probably maximizing pleasure isn't the criterion of good, but it is the procedure that we use to make the decision about how to get to the criterion of good, which is still actual maximization of pleasure, but you can see already that it can't be as simple as kind of, or the right
Starting point is 00:20:50 thing to do is that which maximizes pleasure. I mean, there's a whole other problem with this, which is that, of course, if you want to be a utilitarian that kind of crudely decides in any situation, the best thing to do is what's going to maximize pleasure even probably. So, it means that every time you go to make a decision, you have to do this kind of moral calculus, a hedonic calculus and figure out what's going to have the best effect here. But what if it's the case that the act of doing the hedonic calculus is actually more harmful? Well, then you shouldn't do the hedonic calculus,
Starting point is 00:21:26 but that means that you're kind of then not acting like a utilitarian. What would be an example of that? Well, just for instance, I mean if you were kind of, I don't know, if you had to make a quick split ethical decision. Like an EMT? Sure. That's got some people by the side of the road and you need to work out, and by taking the time to do the hedonic calculus more suffering is. Yeah, like you maybe you don't even have time to do it there. It's like if the right thing to do is always to kind of analyze the situation, see what's kind of maximize pleasure. In that situation, by doing the analysis, you run out of time in the patient's
Starting point is 00:21:59 dead. Yes. Also, maybe even in kind of mild cases, because an interesting consequence of utilitarianism is that there's almost no such thing as an amoral action, because everything seems to have some minimal effect on pleasure and suffering. The position that I'm sat in seems to at least minimally affect the pleasure and feeling the suffering that I'm having, that might hone a voice towards you. Every single little minute decision seems to be something that has some minimal effect on pleasure and suffering. And so, sure, you could say that every single time I do anything, I'm going to do a
Starting point is 00:22:32 heredonic calculus, but then you basically become paralyzed, becomes very difficult and slow to do absolutely anything. And so, in that second case, it's not like the first where it's obvious that just doing the calculus makes you run out of time. In this case, there's nothing in principle stopping you from doing it. You could just try to make sure that every single time you do anything, you think carefully, about the effect on pleasure. But that just doesn't seem like a very good way to live.
Starting point is 00:22:58 It seems like it's quite harmful to a person. And overall, it might actually be more annoying to people, might cause more suffering. And so, okay, so what if it's the case that living as a utilitarian is wrong by a utilitarian standard? And should we be utilitarians? Well, if we should, then utilitarian ethic dictates that we don't always live by utilitarianism in this way, so let's not. So it kind of
Starting point is 00:23:25 self defeats. Now, there are more ways around this, of course. So this is where you get something like rule utilitarianism, which says that the way to kind of get to the good, the way to maximize pleasure is to adopt rules, which if generally followed will maximize pleasure. And so you come up with rules like, don't murder, don't thieve. And so even in a situation where it might actually minimize suffering for you to thieve in this particular situation, it's like because we've already abandoned this idea
Starting point is 00:23:59 of judging every situation on its individual merits, we say we don't have time for that, we're just going to have a rule, and if we generally follow this rule, pleasure gets maximized. And so even in a situation where, thevening say would actually minimize suffering, you should still not do it because of your allegiance to this rule. But then that seems weird too, right? Because now you've got a situation where you might well know, because you've done the Hedonic calculus. You've worked out that, yeah, in this situation,
Starting point is 00:24:27 if I were to, you know, a famous example might be, if you're a sheriff of a town, and there's an innocent man, but the whole town thinks that he's guilty, and they're gonna cause an absolute riot and just burn the city down if he's set free. But you know that he's innocent. It's like, well, sending that man to jail, even though he's innocent, will minimize suffering
Starting point is 00:24:48 in this instance, because it prevents the whole city burning down. It's like, okay, what should we do here? What you might think? A degree of injustice that's going on that feels like it's outside of the sort of utilitarian outcome. Yes. So justice is one of the biggest criticisms of utilitarianism. The idea that there seems to be the single justice, you have this right that is
Starting point is 00:25:07 impenetrable even by the persuasive force of great deals of suffering. Even that has its limits, of course. Most people would say that you could kill an innocent person if it was gonna stop a nuclear war from going off that destroys the entire planet. It's like, why did you believe in rights in the first place? If you actually believe that there are these things called rights which are genuinely unviable, then you have to
Starting point is 00:25:31 commit yourself to the view that that right can't be violated even in the situation where it's going to prevent a disaster. And if you say, well, no, no, no, no, then okay, in that situation, it'd be okay to violate the right. There's no right in the first place. It was just this rule that you made up that you think on the whole is going to minimize pleasure, minimize suffering, but you can see in an obvious case where this is definitely going to maximize suffering, you just violate the right. It betrays this idea of a right as nothing more than one of the rules of rule utilitarianism. Is there a way to try and have broad rules that for the majority of cases are useful and then ignore the outliers? So I know that the last time we spoke you explained
Starting point is 00:26:17 is it the reducto-oad absurdum where you try and do something which shows if you take this particular ethical theory to an extreme case, something kind of weird or bizarre happens at the entire town getting burned down to put one man in jail as an example. Is there a way that people have tried in ethics to say, well, look, on average, most cases are going to fall within this bulk of normalcy. That might be some outliers, but that doesn't necessarily disprove the fact that overall, this seems to be an optimal way to do things. Well, yes. Of course, there have been attempts, and the reductio ad absurdum is a useful approach. It just means reduction to absurdity. It shows, well, let's take this logic, see where it goes,
Starting point is 00:27:01 and see if it leads to absurdity. The reason why people do a reductio is because, if you put forward this principle, that the issue with something like utilitarianism is that it's often put forward as an objective moral theory, that is, it is objectively true, that this is what should be done. You could be kind of a subjective utilitarian, that things that pleasure is only subjectively good or something like this. Mill himself certainly thought it was objectively true, in the way that the fact that I can see a table as evidence that it's there, or that rather it's a little more subtle than that. But similarly,
Starting point is 00:27:36 the fact that I desire pleasure proves that it's desirable in the way that the table being visible proves that I can see it. So he thinks that it's objective and the problem is if you have an objective theory, if you're saying this is just objectively true, even if you find a single example, no matter how convoluted that proves it wrong, or that makes us think it's wrong, it must be wrong. Imagine if we did the same thing with mathematics. Imagine if we had a mathematical formula. Indeed in history we've had this, We've got like Newtonian physics, a famous example.
Starting point is 00:28:08 Very good. Newton really had it going for him. Got us to the moon and back. But when it comes to things moving close to the speed of light or something like this, it doesn't work. Okay, so we could just say, well, Newtonian gravity is kind of, it's true enough, or it kind of works, and it does in practice in terms of its practical import. Yes, it's good enough to help us live.
Starting point is 00:28:33 But if we're talking about the actual truth of it, it's not like, yeah, well, I guess it's kind of true. It's like, no, if something proves it wrong, it means it's false. If we had a mathematical formula that we thought was true, but we showed an example of it punching out an obviously false answer, we wouldn't just be like, ah, it's right most of the time. We'd be like, this is how science is done. You try to disprove a theory, and if you find one example of something disproving it, the theory must be wrong. And so if we're going to try to objectify ethics in
Starting point is 00:29:06 this way, we have to hold it to the same standards. It doesn't matter how much of an outlier it is. If you've proven it wrong, you've proven it wrong. Lay it on me. Give me something that's going to make me look stupid. So here's a fun way to try to nail down the intuition as to whether you think like a utilitarian or whether you think like a rights-based deontologist, a person who believes that there are these things called rights that people have that are immune to amounts of suffering and pleasure. And again, we can get into that more because, of course, we've kind of hinted at these
Starting point is 00:29:40 examples, but the idea is that if the thing that matters is suffering and pleasure, then if harming an innocent person to save many people maximizes pleasure, then we should be able to do it. But people want to say, no, there's this thing called rights. So they seem to contradict each other. Mill tries to offer an analysis, a bit like what I just did, of saying that rights are actually kind of a construct that comes out of pleasure, but it's generally thought that he's contradicted himself. So there's a wonderful question that I once approached on my YouTube channel, and I want to read it word for word to give credit
Starting point is 00:30:16 to the place it came from, which is a wonderful quiz called Morality Play. I can't remember the website's name, but I'll send it to you and maybe you can link it in the description. But I think it's the first question. I'll read it word for word and see what you think of this. You are able to help some people, but unfortunately you can only do so by harming other people. The number of people harmed will always be 10% of those helped. When considering whether it is morally justified to help, does the actual number of people involved make any difference? For example, does it make a difference if you are helping 10 people by harming one person rather than helping 100,000 people by harming
Starting point is 00:30:57 10,000 people? In other words, is the moral analysis the same? The proportions are exactly the same. You're always saving 10 times the amount of people that you're harming. But is there a moral difference in, let's say, killing one innocent person to save 10 and killing 10,000 to save 100,000? Even if you think both are wrong, even if you think both are justified, are they exactly equally justified or wrong? So my intuition is that the bigger numbers
Starting point is 00:31:31 feel more wrong. It feels like there's more overall suffering going on. If you were to say you have the choice between suffering for 10,000 people but pleasure for 100,000, versus suffering for one and pleasure for ten, the one for ten, to me, seems more acceptable than the bigger number. You say there's more suffering going on, which there is, but there's also much more pleasure going on. The point of the thought experiment is that these are equally balanced out. So in both cases, the balance of suffering and pleasure is precisely the same. Proportionally. Yeah, it's equally kind of canceled out.
Starting point is 00:32:06 And so, yeah, you increase the suffering a lot, but you also increase the pleasure, so that the balance remains exactly the same. So if what you care about is the minimization of suffering, or getting the best balance of pleasure over suffering, then these two ought to be the same, right? Yes, they should be, but I'm obviously logically inconsistent. Well, maybe you feel like it's worse to harm more people to save more people. I think we've had this discussion before, that the avoidance of suffering rather than the pursuit of pleasure is an interesting question.
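The arithmetic behind the quiz question can be sketched in a few lines of Python. This is a toy model, not anything from the episode, and the +1/−1 weighting of a life helped versus a life harmed is an assumption made purely for illustration:

```python
def tally(helped: int, harmed: int):
    """Naive utilitarian tally: +1 per person helped, -1 per person harmed.
    Returns (net balance, ratio of helped to harmed)."""
    return helped - harmed, helped / harmed

small = tally(10, 1)            # harm 1 to help 10
large = tally(100_000, 10_000)  # harm 10,000 to help 100,000

# The helped-to-harmed ratio is identical in both cases, which is what
# "the balance remains exactly the same" means here...
assert small[1] == large[1] == 10.0
# ...even though the absolute amounts of both suffering and pleasure
# are 10,000 times larger in the second case.
assert large[0] == small[0] * 10_000
```

On this bookkeeping, a consistent utilitarian should judge the two cases identically, which is exactly the intuition the question is designed to probe.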
Starting point is 00:32:40 And also, is the pursuit of pleasure simply the absence of suffering? There seems to be a good bit of evidence that suggests that humans aren't actually pleasure seekers, they're suffering minimizers. Yes, you can kind of reject the grammar of the question by saying that, I mean, the implication in the question is that the proportions remain the same, but is it worse? Maybe you could just say the proportions don't remain the same. One way to establish this conclusion is to say, like David Benatar does, the famous anti-natalist who wrote a wonderful book called
Starting point is 00:33:11 Better Never To Have Been that has a wonderful discussion on the nature of suffering and pleasure and their asymmetry. It's not just an argument as to why you shouldn't have kids. It contains a lot of wonderful reflections on these topics. And he thinks that suffering just counts for more. Would you take five minutes of the worst suffering imaginable if afterwards you got five minutes of the greatest pleasure imaginable? Hard to say, but most people say no, it's kind of not worth it in a way. Do you remember when we went to the Life Lessons Festival a couple of years ago and we were sat down, it was a canteen style thing at some disgustingly ugly building in the middle
Starting point is 00:33:47 of London. The Barbican. That was it. The dreaded Barbican. Yeah, it's some sort of brutalist architectural nightmare. It's actually evil. Correct, yeah, exactly. And we were sat down at the canteen, it's sort of a school tables style thing, long
Starting point is 00:34:02 benches. We were having this exact discussion about the relative amounts of suffering and pleasure. It's before you became a full on nihilist, but I think it was a gateway drug to your nihilism. And people kept on coming and going. People would sit down next to us, recognise you or recognise me or just sit near us. And I don't know whether you noticed, but there was an increasing sort of like no go zone that people sort of came sat down, look, didn't like the conversation and then sort of shuffled away or just left very quickly.
Starting point is 00:34:32 Yeah, well, it can be morbidly fascinating, but it's also quite depressing to think about. Benatar's book, the first chapter or maybe the second, is arguing that even if your life is mostly pleasure and just a little bit of suffering, it's still not worth beginning, it's still not worth bringing a child into existence that's going to have mostly pleasure and only a little bit of suffering. But then the next chapter, after convincing you of that conclusion, or trying to, is basically him saying, but even so, your life definitely is way more suffering than pleasure, and I'm about to prove it to you. It's not the name of the chapter, but the essence of it is why your life
Starting point is 00:35:09 is going a lot worse than even you think it is. And he kind of puts this real emphasis on the suffering. And it does seem like maybe there's an imbalance here. It does seem that there's a difference in the way that we treat them. For example, I can unconsensually inflict suffering upon you if it's going to prevent greater suffering. If you're, like, unable to speak to me or something, there's something going on with your communication, and I need to break your arm because if I don't, maybe you're drowning in some small little cave and your arm's stuck and I need to break your arm to pull you out. I'm justified in doing that, unconsensually, because suffering to prevent more suffering is fine. But if I could
Starting point is 00:35:51 inflict suffering to bring about some great pleasure in your life, if I were to break your arm and by doing so give you, like, an encyclopedic knowledge of philosophy that would just be perfect for your podcast, even if in theory you might actually choose that if given the choice, if it's unconsensual, I don't have the right to do that in the same way. Why? Why can I inflict suffering unconsensually to prevent greater suffering, but I can't inflict suffering unconsensually to grant some really, really great pleasure that might even outweigh the suffering?
Starting point is 00:36:24 It seems to be an imbalance. It seems like this is because of the asymmetry between how we view pleasure and how we view suffering. Quite. But of course, you can just adapt thought experiments. That's the wonderful thing about them. So if you think suffering counts for more, just imagine whatever the proportions would
Starting point is 00:36:40 look like. So just just make it so that the balance is actually the same. So maybe maybe one percent instead of 10 percent. Or maybe maybe yeah maybe like killing one to save 10 is roughly the same as like I don't know killing like 500 to save 10,000. Because maybe you need to slightly adapt it. You get a discount when you start to do it at scale. As you parlor the suffering, its proportional impact needs to be accounted for, but even if that's the case, it's like just do that.
Starting point is 00:37:10 Just say that the proportions are the same. You've got one situation in which you kill X to save, you know, NX and another situation in which you save kind of like a bigger version of that. But the balance is the same. Is it worse? And you can just ask the question again. Or is it like as good?
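Alex's suggestion, that if suffering counts for more you can simply re-tune the numbers, can be made concrete with a small sketch. The 3x weight below is an arbitrary assumption chosen only to illustrate the idea, not a figure from Benatar or the episode:

```python
# Assumption for illustration: one unit of suffering outweighs
# three units of pleasure (a Benatar-style asymmetry).
SUFFERING_WEIGHT = 3.0

def weighted_ratio(helped: int, harmed: int) -> float:
    """Pleasure-to-weighted-suffering ratio once the asymmetry is applied."""
    return helped / (SUFFERING_WEIGHT * harmed)

# Re-tune the thought experiment: pick numbers so the weighted ratio is
# the same at small and large scales, and the original question survives.
assert weighted_ratio(30, 1) == weighted_ratio(30_000, 1_000) == 10.0
```

However much extra weight you give suffering, the numbers can always be adjusted so that the weighted balance is identical across scales, which is why the adapted thought experiment still bites.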
Starting point is 00:37:33 It's like, okay, well, if you're a utilitarian, you might want to think yes, because the balance is the same. But there's something about this which makes us maybe want to say that it would be worse to kill more innocent people, even to save more people from harm. Well, maybe that's because there is this thing called rights. Maybe people actually just do have this property, that they have a right not to be violated. Now, if you have a right that is immune to the amount of suffering, this is what a right
Starting point is 00:38:03 is: it's a claim that's inviolable. It's a claim that you have against other people. It always comes along with a correlative duty. If I have a right, it means someone else has a duty. There are traditionally thought to be four types of rights, but the important one, for this discussion, is a claim right. It's a kind of claim that I have over you. You have a duty not to harm me if I have a right not to be harmed by you. It always comes in pairs. And the point of a right is that it's inviolable, which means that it doesn't matter how much suffering it's going to save, you don't get to do that. I can't kill you to steal your food to donate to charity,
Starting point is 00:38:42 even if that's going to save a bunch of children from starvation. You have a right to your property, you have a right to your life, and so I can't do that. It doesn't matter what suffering's going on here. So if somebody has a right to life that can't be violated, then it doesn't matter how many people are being saved; a violation of that right is a bad thing. And so if I violate your right to save one person or to save 10 people, it doesn't make a difference. You can't violate it. So if I violate one right to save 10 people, it's like, who cares how many people we're saving? You violated a right. It's immune to the suffering that you're saving. And so all that we really care about in this analysis is that you violated one right. Whereas if you kill 10 people to save 100 people, proportions are the same, you violate 10 rights. To save 100? Who cares, it's immune to the suffering; you violate 10
Starting point is 00:39:31 rights. So you violate 10 rights versus violating one right, and so it's worse. So maybe it would actually be worse, because you're violating rights in such a way that's just immune to considerations of how much suffering you're preventing. And sure, you can take that view if you like. The problem with it for me, of course, is that if you think that you just have this thing called a right that can't be violated regardless of the suffering that's being inflicted, you commit yourself to the view that... if I had a choice, if somebody told me that they were about to launch every single nuclear weapon on the planet unless you kill an innocent person, would you kill the innocent person? Yes.
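The contrast Alex draws between the two ways of keeping score can be made explicit. Again this is a toy model for illustration, not from the episode: the utilitarian tally counts everyone, while the rights tally counts only violations and is, as he puts it, immune to how many people are saved.

```python
def utilitarian_score(saved: int, killed: int) -> int:
    """Welfare tally: each life saved counts +1, each life taken counts -1."""
    return saved - killed

def rights_violations(killed: int) -> int:
    """Rights tally: only violated rights count; the number saved is ignored."""
    return killed

# Case A: kill 1 to save 10.  Case B: kill 10 to save 100.
# Utilitarian bookkeeping prefers B (net +90 beats net +9)...
assert utilitarian_score(100, 10) > utilitarian_score(10, 1)
# ...while rights bookkeeping says B is strictly worse (10 violations
# versus 1), no matter how much suffering was prevented.
assert rights_violations(10) > rights_violations(1)
```

The two functions disagree about the ordering of the very same pair of cases, which is the inconsistency the nuclear-weapon example is about to press on.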
Starting point is 00:40:12 If they have an inviolable claim right not to be killed, you can't do that. You have to let the world burn, but you're not going to do that. So maybe the right actually isn't as inviolable as you thought, and maybe it actually is sensitive to the amount of people that's being saved. This is a big difference between doing and allowing, right? Because you could argue that by not killing that person, you're killing all of the other people. Well, I mean, you could argue that, but I think it's always a bit unfair to characterize this kind of situation in this way. If a terrorist tells you that they're going to blow up Manhattan unless you torture your own daughter or something like this, it seems to me that if you believe in rights, one approach is to say that if you torture your daughter,
Starting point is 00:41:05 you have violated her rights, you've done something wrong. Okay, so if you don't torture your daughter, millions of people's rights will be violated. So even if you believe in rights, surely you should favor like one right being violated and a realization of right violation. Exactly. But of course, depends how we're approaching ethics. What's the point of ethics? Maybe it's to figure out what's the right thing for an individual to do. So what's the right thing for you to do? Well,
Starting point is 00:41:30 if you violate your daughter's rights, you violated a right. If you don't, then millions of people's rights get violated, but not by you, by the terrorist. You've allowed somebody else to violate someone else's rights, but that's not on you. You don't get any moral blameworthiness for somebody else violating somebody else's rights. They're the ones who violated the rights. And we kind of accept this line of thinking when we refuse to give in to demands by hostages and this kind of thing. So they end up killing the hostages. The hostages are making the demands?
Starting point is 00:42:02 Yeah. You know, a bloody demanding hostage. Well, I imagine they probably do get quite demanding at some point. If the terrorists kind of say, like... could you imagine being taken hostage by some terrorists? It would make for some sort of fun ethical dilemmas. Maybe I could probe their ethics to the point that they would throw you out and say, please, please take him back. They'd be like, listen, we're going to kill you unless you do that, and I'd be kind of like, fair enough.
Starting point is 00:42:36 Yeah, okay. It was, I kind of understand that. But which, which, I mean, which ethical theory are you kind of, are you, are you abiding by that? Yes, probably. You're a vegetarianism. They were like a religious terrorist. It's kind of like, well, well, which, are you like a divine command theorist? Do you think the evil is like a privation? Get him out, get him out, get him out, get him out, get him out, get him out, get him out, reverse.com syndrome.
Starting point is 00:42:54 Right. What's next? Give me, give me, give me something else. Well, let's see what else we've got, let's see what else we've got going on. Okay, here's, here's just take a totally different turn. I want to get your views and calibrate your intuitions on the nature of the relationship between moral responsibility and the ability to have acted differently. Most people think that generally speaking, if you are to be held morally responsible for something, you need to have been able to act differently. If you couldn't help but commit a particular action, then it's
Starting point is 00:43:32 difficult to hold you morally responsible for that action. There's an interesting example that's often given in the discussion of free will; it's discussed by Sam Harris in his book Free Will, and I know for a fact that this is a real case. There was a man who was basically exhibiting paedophilic tendencies, he was just sexually attracted to children, and I can't remember if he acted upon it or not, I think he may have done, and so he was... He hit on the nurses in the place where he was being held after a little while, after he submitted himself for a psychiatric evaluation. Of course, when the evaluation is done, it's discovered that there's this great tumour
Starting point is 00:44:17 that's pressing against the part of his brain that deals with impulse control. Okay, how does this change your moral assessment of this person? You might actually start feeling sorry for them, because if I were to prod your brain, Chris, in such a way that gave you the same disposition, I'd be doing a great evil to you.
Starting point is 00:44:39 You'd be a victim there. It's not your fault. It's like, okay, cool. And the fact that there isn't a prodder in this situation makes it kind of unique, because there's no first mover that you can point to to say that they're the person that caused this tumour. Yeah, there's no one to blame here, in the way that, you know, somebody might be victimized by cancer, but there's not really someone to blame for them developing it. But, you know, we feel sorry for people who undergo this.
Starting point is 00:45:08 It's like, okay, so this person has a brain tumour that's basically turning them into a paedophile. Okay, do you feel sorry for this person? I mean, I seem to. I would say, yeah, I mean, this is a horrible situation to be in. And they remove the tumour and the disposition goes away. And then a little while later, it comes back, the disposition starts coming back, he starts getting a bit paedo again, a bit nonce-y, and then they discover that the tumour's come back. Okay.
Starting point is 00:45:49 Let's start with the attractive quality, the fact that this person has this sexual attraction. Certainly, it's not their fault that they were this way, because they had this thing in their brain that was causing them to think this, which was through no control of their own. But of course, this is just how sexual attraction works anyway. It's just this thing in your brain that makes you feel a particular way that you can't control. You don't get to choose what you're attracted to. The fact that your inhibitions have been lowered is just a gradation, right? It's not a difference of kind, it's simply a difference of degree. Presumably, everybody has some degree of inhibition and some degree of sexual attraction, and those are pointed in different directions. I feel like it was his stepdaughter that was the young girl that he sort of made some movements
Starting point is 00:46:32 towards. So he annihilated his own marriage. Torpedoed his own marriage to this lady who had this daughter. Maybe twice, in fact. I feel like she took him back after the first tumour, and then it happened again, and then she let him go. So I mean, even that's interesting. Is there a number of times that your non-conscious inhibition reduction should be allowed by a person? Well, this is where I think it's very useful to adopt something like Susan Wolf's real self view. That is, the difference between determining whether somebody is responsible... and we remove the moral element because we remove freedom here. We should get there in a second, because we should make sure everyone's at the same point here. It's like, okay, so we can probably agree that if somebody finds themselves sexually attracted to children, you can't hold them morally
Starting point is 00:47:28 responsible for the sexual attraction. Like, you can't choose what you're attracted to; if anything, you feel sorry for such people. There's a great study by a neuroscientist where he got straight men, straight women, gay men, gay women, people that are attracted to kids, people that are attracted to animals, put them all into an MRI and put an arousal response meter on them, which is basically a cock ring for men, or like a moisture meter or something for women. Put them in there and showed everybody every different type of attractive image and
Starting point is 00:48:02 video that they could. And you would think that especially in a situation like this, that people that were attracted to kids would maybe try and sort of change, they would adapt what was going on, maybe out of embarrassment or something like that. And it turns out that you can show them absolutely everything under the sun. And they don't get any response. And this is just generally a fascinating intuition when it comes to people that are attracted to kids
Starting point is 00:48:27 that I asked him, do people get to choose what they're attracted to? He said no. You go, okay. That makes the moral judgment of people who have the attraction but don't act on it a fascinating thought experiment. One of the most interesting thought experiments, I think.
Starting point is 00:48:46 Yeah, I mean, most people have generally accepted in other contexts that sexual desire is amoral. You can't be held responsible for a mere sexual desire. You can only be held responsible for acting upon it. Most people think this. One of the great arguments in the history of the liberation of homosexuals has been pressing the point that you don't get to choose to be this way. Of course, with homosexuality it's a bit easier, because you could say, well, even if you did, there's no problem with
Starting point is 00:49:23 it. It's like, yeah, even if you did, even if it were possible to choose to be a homosexual, wouldn't be wrong to do so. But one of the great points that's pressed is that it's even worse because you need to not like you get to choose this. So most people have accepted that in most contexts. But thinking along these terms, we begin to think about like inhibition. And so let's think about somebody who commits a moral crime. Let's imagine a similar kind of situation. Let's say that whatever the moral crime may be, take something that you think is a moral. They're committing it because there's a brain tumor that is pressing against the part
Starting point is 00:49:59 of their brain that deals with inhibition. It's now not just affecting something like attraction or desire or whatever, it's affecting their actual ability to suppress it, to suppress the basic sort of pleasure-seeking, crude, animalistic tendencies that they have. It's like, is that person responsible? What if I were to go into your brain personally, on purpose, and remove a part of your brain, or press against a part of your brain, in such a way that I knew would make the part of your brain that holds you morally in check stop working?
Starting point is 00:50:40 Such that next time you want to do something immoral, your brain just literally wouldn't be capable of stopping you, because I just removed that part of your brain. It's just not there. Then you go and commit that immorality. It's my fault. It's not yours. You're a victim there. But again, what if you find someone who just naturally isn't able to control their inhibitions? It's just the way that their brain is. It's the way that they were brought up,
Starting point is 00:51:10 which they had no control over, it's their genetics. Maybe they've got a brain injury that's undiscovered or something, who knows? But whatever it is, if somebody is not hard working, somebody's lazy, it's like, okay, what if just their brain is just designed in such a way that they can't overcome that laziness? Can they be held morally responsible for this kind of stuff? In the same way that we want to say that if there's a tumor pressing against the part
Starting point is 00:51:38 of your brain that dealt with your attractions, we'd say you're not responsible for them. And that gives us a reason to think that, you know, generally speaking, you can't be held responsible for attraction. Likewise, if a tumour pressing against a part of your brain that dealt with your inhibitions made you act in particular ways, then we can say the same thing: the only reason you're acting this way is because of something going on in your brain over which you have no control. It's this tumour that's doing something in your brain. That's how all action works, all the time.
Starting point is 00:52:12 Tuna, tuna, tuna or not. If you've got tuna in your brain, whether you have a tumour or not, at what point does a tumour become a tumour? Exactly. Something, yeah. Morally responsible for the way that your brain is made up, you didn't choose your parents, you didn't choose to be born at the time that you did. You didn't choose any of the upbringing that you have. Yeah, and of course, so this isn't quite a moral dilemma as much as it's just an argument against the existence of free will. It's like any decision that you make seems to be a result of brain activity over which ultimately you have no control. I pressed this quite strongly. I think that there just is no free will in any
Starting point is 00:52:45 actions, but at least in moral, the moral arena, people should be able to see that there's a problem here. Like, nobody strictly chooses to be lazy. They just kind of are, or maybe they kind of choose to be lazy in that they act in particular ways that bring about that character, but why did they choose to act in those particular ways? Their brain just kind of was of a psychological constitution that made them do that. And so if we're going to accept this in the language of desire, why not in the language of action as well? Why not in the arena of action? How can we say that anybody is actually morally responsible for anything at all? Wasn't there a guy who had a... he noticed himself getting very aggressive, went up a bell tower, started shooting people after he'd shot his wife and kids and then finally shot himself,
Starting point is 00:53:35 but shot himself in a way that didn't destroy his brain because he said that he needed the post-mortem to have a look. He could tell that something was wrong. This was Sam Harris as well. And you'd think, well, I mean, that's morally reprehensible. But then you go in and find that there's a tumor the size of a golf ball in his brain, which was pressing on the amygdala and making him incredibly angry and incredibly aggressive and stuff.
Starting point is 00:53:58 There is something very, very odd when you think about it. Was it wrong to kill those people? Yes. Is that person wrong to kill those people? Yes. Is that person
Starting point is 00:54:23 than a version of him that did that without the tumour. We go, okay, well, what if in a different version of this universe, that guy had a worse upbringing or a more aggressive father, still no tumour? Does aggressive father guy get more of a pass? Somehow is he less culpable? Should we mediate the sentences that criminals are given based on the past that they have? Well, well, sometimes people think that we should, right? Like you begin to feel a bit more...
Starting point is 00:54:54 At any rate, you wouldn't be surprised to see a lawyer, a defense attorney, arguing in defense of someone who's committed a crime and pled guilty, saying, you know, this person has had a horrible life, a terrible upbringing, this kind of thing. And people tend to be quite cold towards that these days, I find. It's kind of like, we don't care, you know, they committed a crime and fair enough. But I think what you're getting at is how much we can separate him, this person, from the thing which caused the action. What is the person? Who are you? What makes you you? If I just completely destroyed your brain, are you still you? Maybe not, because there's
Starting point is 00:55:41 kind of nothing to call you. You still got your body, you still got your biceps, you still got your pecs, but you're not really Chris. It's like, okay, well if you have a tumor and I remove the tumor, are you still you? Yeah. Okay, if I removed your upbringing, are you still you? What does that mean? If I gave you my upbringing, gave me yours. Like, who's who? Who who's Chris who's Alex like
Starting point is 00:56:07 It starts to get a little bit unclear. What's that? Paradox of the ship the ship of the seas. Yes, the idea of If you kind of change one plank at a time you got this old ship the ship of the seas and You replace a plank because it gets old and starts molding so you replace the one old ship, the ship of Thesius, and you replace a plank because it gets old. It starts molding, so you replace the one plank. It's still the same ship, it's got a new plank, and then you replace another plank, and then another one, and then another one, and then another one, and every time you do it,
Starting point is 00:56:33 it's still the same ship, but it's just got a new plank on it. And our human bodies do the same thing. Our atoms replace each other about every seven years. There's a lot of people know. Seven years ago, you had no atoms in your body, that are in your body right now. But imagine once we've replaced every single plank on the ship of Thesias, and we've still got this ship, it's still been in constant use every single day, we take all the planks that were discarded, and we build another ship out of it, which one is
Starting point is 00:57:01 the Ship of Theseus? Who knows, right? It's like, what does it mean to say a person is a person? And I think a lot of people would say that your history or upbringing is at least very important in determining who you are as a person in a meaningful sense, in the kind of senses that we need here. And so this is where we get this idea of the real self view of Susan Wolf: that there are certain kinds of things which, regardless of how in control you are of them, seem to be more your real self than not.
Starting point is 00:57:34 So, for example, if you tripped over and pushed me into traffic by accident, that's not your fault, you're not morally responsible. If you were to just push me into traffic because you thought it was funny, well, on this view of there being no free will, you're still not morally responsible. It's not your fault you had the kind of brain that found that funny and couldn't stop the inhibition. But that seems more tied to being an essential property
Starting point is 00:58:00 of you rather than just an accidental one. You know, people trip over, and I've got no reason to think that I'm kind of in danger or that I've got a problem or that I'm gonna suffer when I'm around you if you trip up and push me into traffic, whereas if you do it because you think it's funny, I still don't think you're morally responsible,
Starting point is 00:58:19 but I think that that kind of behavior is more essentially springing from your nature. And so it matters when deciding whether to be your friend, which crudely might be thought of as some kind of social contract that we enter into to put your welfare over that of a stranger's and this kind of thing. It's a bit unromantic to describe friendship in that way, but that's essentially what we're doing. It's like, these are the kinds of things which inform it. Take somebody who is criminally predisposed because of some well-recognized mental disorder. Everybody agrees it's not their fault
Starting point is 00:59:00 that they have a proneness to committing evils, but we know that they're going to commit a bunch of evils. We probably still want to confine them. We probably still want to put them in some kind of jail or asylum or something, maybe something a bit nicer. Maybe we put them somewhere where they're kind of confined, they're separated from society, they can't harm people, but they're treated nicely. They're treated comfortably, because it's not quite their fault that they're there, but we still separate them out. And if there's any chance that we can sort of change them in such a way that they might be able to re-enter society, if we can kind of teach them, if we can solve the problem that's going on in their mind, then maybe we should do that.
Starting point is 00:59:41 It's like, this seems plausible. This seems to make sense. Okay. And in situations where it's obviously not someone's fault, something going on in their brain, maybe the kind of person who has one of those tumors. If someone's got a tumor that's causing them to, like, sexually assault children, yeah, we should lock them up, but we probably shouldn't put them in some concrete cell and restrict their meals and treat them horribly, because we think it's not really their fault. So we put them in a bit of a nicer environment, and if there's a way to get rid of the tumor, we try to do it.
Starting point is 01:00:11 Okay, so people often want to ask, if there's no free will, then how do you deal with responsibility? Well, take this real self view and just imagine that every time, like when you push me into traffic because you think it's funny, whatever it is in your brain that's making you do that, I kind of imagine it as being analogous to the tumor. So I think, well, if you're going around pushing people into traffic, I should put you in some form of jail, but maybe not a really horrible jail with concrete walls, but something a bit nicer.
Starting point is 01:00:40 And if I can remove that tumor, that is, if I can rehabilitate you, then we should do so. And so what you land upon is a rehabilitative view of justice, which is quite popular, even among people who do believe in free will. The idea that even the worst kinds of criminals should be put into rehabilitative centers where they're not treated horribly, where they have a level of comfort and if possible, an ability to rehabilitate them. So if people are sympathetic to that view of rehabilitative justice, I think they're already getting towards the intuition
Starting point is 01:01:14 that I have about a moral assessment of responsibility. It's like, yes, you are responsible, but you're responsible in the way that a tornado is responsible for knocking down a city. It is responsible for it; it's not morally responsible for it. Does this not undermine a lot of people's intuitions around morality and where good and bad come from? Yes, it completely destroys it. I mean, moral responsibility, as traditionally thought of, just disappears if there's no
Starting point is 01:01:41 free will, meaningfully. You can't be held morally responsible. You can only be held sort of descriptively responsible. And we can only react to your behaviors based on our assessment of whether that behavior is part of your real self, or some accidental quality that's unlikely to repeat itself. And it's that that we use to inform whether we want to put you in jail or not. This has got nothing to do with retribution.
Starting point is 01:02:05 But if you look at a lot of the debate around the nature of the criminal justice system, a lot of people are beginning to see a lot of problems with retributive justice anyway. I think it doesn't work. I think it's not morally right. It depends on what work means, right? What are you trying to optimize for?
Starting point is 01:02:24 Are you trying to send a signal to other criminals that they shouldn't do this, because it is more likely to dissuade a future potential criminal if they know that the punishment is going to be harsh than if it's not going to be harsh? Are you trying to optimize for closure or fairness for the friends and family of the victim? You know, we recently had this big school shooting in the state where we are now, Texas. Let's say that the perpetrator hadn't been killed, but he was still alive. It would be very, very difficult to go to those families and say, well, the thing here is that retributive justice kind of doesn't really make any sense. This person, although he may have been in conscious control of the things that he was doing,
Starting point is 01:03:18 didn't really have any free will. He could not have done otherwise. Therefore, we think that it's a good idea to put him into a nice jail. And I think that people's emotions and their sense of the situation demand that something should be done, that this person deserves to pay. You know, you hear this sort of language used a lot. Yes, but of course, it would be equally difficult to explain that to people if it was discovered that he had a tumor that was causing him to act in the way that he did. It would still be very difficult to explain that. It would be difficult to swallow, but I think if you did that, if you said, look, this person had a tumor in their brain that was causing them to act this way, some people might still say, well, you know, screw it,
Starting point is 01:04:06 he still deserves punishment, but I think at least some contingent of those people would be more likely to come around. How does this affect your moral assessment? And crucially, why? Why does that make a difference? Well, this is the basis of the insanity plea, right? That there is a lower degree of culpability for something that someone's done because of a clinical diagnosis that makes their standards for behavior different to your standards for behavior. Basically highlighting that that's a person that you can't extrapolate out what you think good and bad behavior is for yourself onto, that they're in a different category somehow.
Starting point is 01:04:47 But of course, the only thing that that should affect is the retributive part of the justice. Of course, by pleading insanity, what you're trying to do is separate the person from their insanity. You're saying there's this real person, this real self that wouldn't do this. And the insanity is one of their accidental properties. It's not an essential property, but an accidental property. And it's the insanity that's caused the crime. So you attribute the crime to the accidental property and not to the real self. And so you say the real self isn't responsible. And so the real
Starting point is 01:05:19 self shouldn't get punished. It's like, okay, that seems to make sense, but it wouldn't affect how much we want to separate them from society. It wouldn't affect how long we keep them locked up. It wouldn't affect that we keep them under lock and key in the first place. We would just think, well, we're going to do that, but now we're thinking of it in terms of prevention rather than retribution. And I think that any time you closely analyze what's going on with behaviors, it always makes sense to think in terms of prevention and deterrence rather than retribution. And this is something people are beginning to see, in that retributive justice is very much
Starting point is 01:05:58 going out of fashion. But of course, there's this human intuition that says, if somebody commits a horrible crime, they deserve to be punished. But we've got to think about why it is that people think that, why that's developed. I mean, that itself, evolutionarily, is probably a deterrence tool, right? Because if people who commit bad actions get punished, then other people are less likely to do them. And before we had a criminal justice system, the way that people got punished was by vigilantism. And vigilantism only exists if you have people
Starting point is 01:06:28 who are just incensed enough to harm this person in response to them being harmful. And so you have this very natural, pre-rational, possibly even pre-human disposition to think people are morally responsible and to want them to suffer for suffering's sake because they caused suffering. But I think that if we're going to try to be consistent with our view of how the brain works, at least, and of course you can just reject this entire view, you can say, no, free will does exist and there's an improper analogy being given here. That's fine. But if you accept the view that we're playing with here, I think you have to be consistent
Starting point is 01:07:09 and say that retribution is a lost cause. What's next? What is next? Okay. There's something else we can ask that's along the same lines. Let's talk a bit more about this idea of things that you have that you aren't really responsible for. Do you think it's unfair that people will get admitted to good colleges like Harvard or Oxford and Cambridge because they have a lot of money in their family? Yes. Yeah, they're able to pay for top tier tuition that other people don't get.
Starting point is 01:07:52 They're able to pay to go to private schools. They basically pay their way into these colleges. So you're talking about money being used to give the person that is applying a step ahead, not money that is being used to get in the side door by backhanding it to some administrator. That's right. Yeah. Still, my intuition is yes. Seems unfair.
Starting point is 01:08:14 It seems like maybe it's not the kind of unfairness that we could, like, legally rectify, because that would have a wealth of implications for individual liberty and property and spending your money and this kind of thing, but it seems at least sort of undesirable that people should be able to do that, because of course they didn't choose to be born with a lot of money. And so yeah, they get this wonderful tutoring, they get a big house with no worries and a stable family, and they're able to study in peace and calm while their maid
Starting point is 01:08:45 just does the washing up for them, whatever. So it's unfair that for that reason they end up going to Harvard, for that reason they get a high paying job, and the cycle just continues. Okay, why then is it more fair for somebody to get into Harvard or to Oxford or to Cambridge because they're clever? That person didn't choose to be born with a high IQ. They didn't choose to be born in a situation with the kind of upbringing that made them interested, that made them want to read. They didn't choose the desires they had and the interests.
Starting point is 01:09:18 They didn't choose to take an interest in physics when they were, like, seven; they just did. But by doing so, they end up going to Harvard, and by doing so, they end up getting a well-paid job. And so why is one more fair than the other? We want to say that paying your way into college, this way, is unfair because they're getting in because of the amount of money they have, not because of their merit, not because of how intelligent they are. But why is it any more fair to let someone in based on their merit or intelligence? Well, when you realize that merit and intelligence is an endowment genetically as opposed to an
Starting point is 01:09:49 endowment financially, it feels like, I think the intuition is, there's something more self-y about the brains and the conscientiousness, which, again, is also wildly heritable. I had this discussion a little while ago. It's very interesting. I was talking with a behavioral geneticist who is quite progressive, so she's on the left side of the aisle; behavioral genetics has been very much adopted by people that are on the right. And I was asking, how do you square this circle between wanting equal access to opportunity, even perhaps a little bit of equity as well in terms of equitable outcomes, when you know
Starting point is 01:10:32 that people are entering this race in different ways? If you were to flatten out all of the opportunity inequalities, if you were to get to a stage where everybody's level of preparedness was exactly the same, what you're left with, the differences that occur, are now genetic. I mean, that's even more brutal. You're saying to somebody, the reason that you didn't get into college is exclusively because your parents have shitty genetics. Quite. They had the same diet growing up, they had the same access to sunlight, they were given
Starting point is 01:11:05 the same primary education, all of that stuff. So, okay, we flatten society down to give everybody equal opportunity of learning and food and nutrition and stuff. Okay. That leaves all differences now down to nature. It's a bleak thing to look at. That doesn't seem like a very fair world. Yeah, it kind of makes it almost even worse. I mean, if you don't get into college and someone says, well, hey man, like, you know, it's your upbringing, it's because you didn't have the money. Like, if you'd have had a bit more money,
Starting point is 01:11:35 better resources, maybe you could have made it. It's like, man, that sucks. But okay, if someone says, like, hey man, you didn't get into Harvard, and there's nothing you could have done about that, you are just too stupid. That seems so much worse.
Starting point is 01:11:48 You're not wired for this. Yeah. And so like, in other words, when somebody says, you know, this person paid their way in, you know, they didn't get in because of their merit because of their intelligence. It's like, why are you saying that with an angry face? Why is that any better or worse?
Starting point is 01:12:05 It's like, well, okay, maybe a college's job is just to pick those who are actually best suited for study, because we want people in the high paying jobs that are going to be efficient, because it's going to help the economy or whatever. That's fine. People who pay their way into good tuition are, by the time they get to college age, better suited to study a course. They're going to get better grades. They're going to do the job better. It's like, but there seems to be an unfairness about this. Sure, there's an unfairness about this, but there's an unfairness too about the intelligence that you're born with, if it is genetic, or if it's
Starting point is 01:12:41 not genetic, even the kind of upbringing and surroundings that you have and the interests that you develop, you don't get to control this kind of stuff. So how is that any more fair? So in other words, if we're gonna criticize one, why don't we criticize the other? It's another slight ethical quam. And there are different interpretations
Starting point is 01:12:56 you can take. You could either say, okay, then maybe it just is unfair to let people into college based on their level of aptitude, and so we should just be basically letting anybody into college for any reason, which is kind of the approach that's being taken now. Not by most universities individually, like the Harvards and the Oxfords
Starting point is 01:13:15 of the world; they still have interview processes and they're still selective. But there are so many universities now that basically anybody can go to a university if they want to. And this, most people think, is a good thing. It's like, yes, it kind of doesn't matter where you're at, doesn't matter what you want to study, doesn't matter how intelligent you are, doesn't matter how apt you are; there's going to be a course that's going to be suitable for you, and you're going to
Starting point is 01:13:36 be able to go and have this experience. This is great. And it seems kind of more fair. So you can take this approach of saying, well, yeah, maybe we should just extend that logic and say that the Harvards and the Oxfords of the world need to abolish the interview process and basically just select people at random or something. Tyler Cowen literally said this the other day. He said that he feels like the selection process isn't necessarily selecting for things that are useful long term, that you're selecting for people who have got great rote memorization, but also are incredibly orderly, are also the sort of people that will stick to rules,
Starting point is 01:14:11 and he is a disruptor, and he wants to see more people that would break the rules, that would probably struggle with homework, that would not turn up on time, that would do a lot of the things that universities don't want to see. But the problem being that that undermines the sort of personality that performs in a university, and what you end up with is trying to change the definition of a university. It's like, what are the outcomes that we want here? He wants to have people that are prepared to move humanity and businesses forward in interesting and novel ways. Or maybe that's not the job of the university.
Starting point is 01:14:48 So maybe you need something which is not a university to prepare people to do that. Well, maybe you're trying to retrofit an existing establishment to now produce people in a different way to the way that you actually want them to be. Crucially, I mean, the defining factor here is going to be: what is a university, like, what's it for?
Starting point is 01:15:10 Is it like a societal tool to make people apt for jobs? Is it a tool to give people a life experience? Is it a qualification machine? Like, what is it? Is it something for private individuals who are interested in academia to go and find somebody to study under for the sheer love of knowledge? Well, that seems to be how they started,
Starting point is 01:15:33 but that's not what they are anymore. They seem a lot more kind of institutionalized, embedded into society as a whole. Like, that's why we have governmental student loan schemes, because the government sees universities as like an important part of society when they were founded originally as a way for people to learn stuff just for the sake of it. The nature of universities is always changing. Maybe we just begin to see that change. But of course, that's only one response to this
Starting point is 01:15:59 problem. You could just go the other way as well and say, well, then yeah, if it's fair to admit people based on their merit, it's fair to admit people based on how much money they've had to put into private tuition and things like this. And put in those terms, it doesn't seem so bad. It's like, yeah, okay, a college should have the right to choose someone who's most apt, and sure, it's a bit unfair that someone's paid for tuition, but they've still, you know, they've still had the
Starting point is 01:16:28 teaching and they are actually better now. So maybe it makes sense for the college to accept them. The problem with that view is that, of course, the amount of money that somebody has can become a reliable statistical indicator of how apt somebody is. If you've got a bunch of applicants and you don't have time to interview everybody, you could think, well, those with more money probably paid for private tuition, probably going to be more apt. And so a college would have warrant to start accepting people without even interviewing them just on the basis of their bank account. That seems problematic. But why? If you can't do it on the basis of bank account, why can you do it on the basis of intelligence? If we get into this idea of saying that you're actually equally responsible for either, you
Starting point is 01:17:11 are as responsible for the amount of money that you have in your bank account when you're born as you are for the intelligence that you have when you're born, there seems to be a bit of a problem here. There's a book by the Harvard professor Michael Sandel, called The Tyranny of Merit, which seeks to undermine the entire idea of meritocracy on these grounds. It says, like, yeah, sure, we don't live in a meritocracy, but so much of politics is basically saying, you know, this is aristocracy, or this is people buying their way into politics, and the implication is, we want meritocracy.
Starting point is 01:17:42 You know, this isn't meritocratic, it's aristocratic. And Michael Sandel's like, okay, cool. So let's imagine a perfectly meritocratic society that everyone wants. It's actually still got a bunch of problems. It's still completely unfair. It's still just as morally arbitrary as aristocracy is. There's something wrong with aristocracy.
Starting point is 01:18:00 There's something wrong with meritocracy as well. It's a pretty depressing revelation, but it's a book that's worth reading if people are interested in this line of thought, because we haven't really gone deep enough. There are probably people listening thinking, like, what are you talking about? Are you saying that the amount of money that you have carries the same justification as how intelligent you are when it comes to getting into college? I hope most people can understand what we're getting at there.
Starting point is 01:18:27 But if you want to go further, The Tyranny of Merit will help to explain it. I also had him on my podcast, so if people want to go and listen to that, then they're welcome to do so as well. What's next? What is next, indeed. Okay, here's a question for you. Do you think that if there's, like, a shortage of treatment, if there's limited funds in, like, a hospital,
Starting point is 01:18:50 should non-smokers get priority for lung cancer treatment? No. Why? Non-smokers over smokers? Non-smokers, that is. There's limited resources, limited money. You've got a bunch of people who need lung cancer treatments.
Starting point is 01:19:06 Some people have lung cancer because they've smoked cigarettes all their lives knowing that there was a great risk of getting lung cancer. Some people have never smoked cigarettes in their life, probably due to the fact that they wanted to avoid getting lung cancer, but they got it anyway. Is one more deserving of the treatment? Well, I think yes. The question is, should we prioritize treatment for those who did not smoke? My visceral intuition is yes. Sure, but probably not. Why'd you say probably not? Because the judgment that the people who smoked made,
Starting point is 01:19:48 you don't know if they perhaps had a genetic predisposition; perhaps the smoking contributed nothing to the fact that they have lung cancer. Perhaps every single person that didn't smoke also decided to sit on the couch and eat, and every smoker was an endurance athlete that had dedicated their lives to being fit and healthy, apart from the smoking. It seems like there is a culpability of people who do a thing, which then causes a thing to happen to themselves, that we're sort
Starting point is 01:20:19 of playing off the back of here. Yes, of course. I mean, imagine, for example, that we could prove somehow, through some developed technology, that yes, this person got lung cancer because they smoked. Fine. Give it to the non-smoker; they get all the treatment. Do you really think that? No. Okay, well, let's entertain the idea. A lot of people intuitively think that that might be a good idea. It seems unfair. It seems unfair that somebody should have to wait in line for their lung cancer treatment when the people in front of them in the line are people who are there
Starting point is 01:20:52 because they smoked cigarettes all their lives. It would be kind of rage-inducing. And so we might think that we should do that. Of course, there are problems with this, because you have to draw a line of, as you say, you have to kind of draw this line of culpability. I mean, there are all kinds of behaviors that can contribute towards a condition. Maybe if you don't drink enough milk, or indeed don't eat the requisite amount of plant-based alternatives, you don't have sufficient calcium.
Starting point is 01:21:25 And so your bones are more likely to break; they're a bit more brittle. And so you end up breaking your ankle one day or something, and you go to the hospital and there are lots of people who have broken ankles. You know, should the doctors kind of take a survey and say, well, this person has looked after their calcium intake all their life, and they just fell down a hole and broke their leg; you've had a calcium deficiency that you've known about and haven't bothered to solve, and you've broken your ankle, but you made poor life choices that have led to this,
Starting point is 01:21:53 and so we should prioritize treatment for others. It's like, still, maybe that kind of makes sense. We still think, yeah, maybe that's fair, but it's less obviously fair that somebody with a calcium deficiency should not be prioritized in getting treatment compared to somebody without a calcium deficiency. Whether the person who had the calcium deficiency knew that by not drinking milk it would contribute... so this culpability goes
Starting point is 01:22:24 a few layers down. Yeah. And of course, we know that sitting down for too long is bad for us. We know that living a sedentary lifestyle is bad for us. But it's like, okay, maybe I sit down more than most people. You know, I'm a podcaster, I'm a YouTuber. I spend a lot of time behind a desk. I spend a lot of time in front of a camera. It's like, I kind of know this is probably quite bad for me. It's probably quite bad for my back. Okay. So I get a back injury and I go to the hospital and someone says, well, how much time do you spend sat down every day? It's like, well, this is my job, I sit down all the time. It's like, could you have stood up more and stretched your back? It's like, yeah, I probably could have done. Then it's kind of your fault. I've got another person here who's got a back injury, but they're
Starting point is 01:23:00 really healthy. They stretch every day and do yoga, and they still got a back injury. So they need to, you know, skip the queue in front of you. It's starting to get a bit murky here. And that's before you begin to take into consideration the fact that we know that all kinds of our behaviors mildly contribute to various things. Like, you know, if cracking your knuckles contributes to arthritis, which we don't know whether it does or not, but, you know, maybe it does. It was that guy that did it. On one hand. Yeah.
Starting point is 01:23:29 One hand, for his entire life. A lot of people think that this kind of disproved the thesis, because he cracked the knuckles of one hand and, I can't remember exactly what happened, but I think it's possible that he just didn't develop arthritis at all, which of course doesn't settle the question. Also, it's just, like, one guy. But, you know, credit to him, that's quite a commitment for one life. But it's hard to say whether it does or not. But if it does, and someone just does it, you know,
Starting point is 01:23:51 naturally just has a kind of habitual click, you know, should they be not prioritized for arthritis treatment? It's like, what, because they kind of sit around clicking them? Well, everyone clicks their fingers, you know, everyone does that. Well, not everyone does. There'll be someone who doesn't, and maybe they should be brought to the front of the queue.
Starting point is 01:24:10 If you start administering medical treatment on the basis of desert, you're going to end up in this whirlwind. Of course, there's an epistemic problem of trying to work out who has certain issues because of particular lifestyle choices, but supposing we could just know all of that, there still just seems to be something callous and wrong about it. It seems to restrict your freedom too much, because there's a point at which you could just say that basically any behavior you partake in that isn't perfectly safe is contributing to this. I mean, if you get hit by a car because you crossed the road,
Starting point is 01:24:48 and another person gets hit by a car because the car swerved onto the pavement, it's like, sure, crossing the road is more dangerous. You chose to do that. You could have not crossed the road. It's like, yeah, but come on. Clearly, this isn't the way that we should be administering our medical priority, but then this should apply writ large. So when you have the smoker and the non-smoker,
Starting point is 01:25:11 and they both need lung cancer treatment, it's a kind of difficult pill to swallow, but it seems like we don't have sufficient warrant to prioritize one over the other. It's not to say that one doesn't deserve it more than the other, but it's not always the case that what a person deserves is what they should get in practice. We saw this during COVID, right? There were different ethnic groups or different-sized groups that were being put to the front. Is there something different between saying that fat people should get COVID treatment ahead of black people, or black people ahead of white people? And there's an interesting sort of distinction here, because the skin color is more innate, whereas, although there are heritable
Starting point is 01:25:54 characteristics when it comes to your body weight, people feel like that's more of a lifestyle choice. Yeah. Notice how people were saying, like, well, you know, there aren't that many kind of COVID casualties. People say, well, there actually are. Look at all these examples. People said, yeah, but they all had pre-existing conditions. Co-morbidities. A lot of them were fat. And that's kind of like, they shouldn't have been fat. It's like, can we really say that? Sure, not looking after your weight is going to bring on a lot of health issues, but you certainly didn't know it was going to give you COVID-19, because that wasn't even a thing a few years ago. And it does seem callous to me to say something
Starting point is 01:26:37 like, okay, yeah, people are dying of COVID, but it's only people who are overweight and they basically chose that lifestyle. It's like, this isn't how we should be doing medicine. Leo Kearse said that COVID was basically enacting Tory policy by killing the old and the fat and the poor, and at the next general election he was going to vote for COVID. But yeah, I think that it was very interesting. And people were up in arms about that. People were up in arms about the fact that in America, they were talking about racially prioritizing people. But you think, well, black people did have,
Starting point is 01:27:17 I think it's a lower levels of beavered amines, deved amines, and then they have a genetic marker that changes the way that their lungs actually absorb viruses. So it meant that they were able to take on a higher viral load. Are you okay? I mean, that doesn't feel like it was their choice. That feels like they're a vulnerable population in a way that they didn't choose, and that feels different to the fat people somehow. I mean, I don't know if that's true, I can't verify it because I actually haven't had that discussion, but yeah, say it is. Yeah, the idea that, well, for people who are overweight overweight it's kind of more their fault and so we should feel less sorry for them or something like this or we should feel less bad about the fact that COVID
Starting point is 01:28:12 is is making them into casualties. I've heard this kind of line thought a lot during the COVID pandemic and I kind of I understand why people are saying it. They're saying, look, I have to stay inside now because of a virus that isn't going to harm me. I'd be fine if I got COVID, but I'm staying inside to protect wider society, which means the vulnerable, which includes some groups who are vulnerable through their own life choices. That's not fair. But this is a similar line of thought of being stuck in the queue for lung cancer treatment and saying, I now have to wait. I'm now being like non-prioritized because I'm behind somebody in a queue who also has lung
Starting point is 01:28:51 cancer, but they've got lung cancer because of their life choices. That's not fair. It's the same kind of thought that's going on. And I understand it because if I was stood in that queue with lung cancer, I'd be annoyed too. But what's the solution here? Oh, we jump us up to the front. Well, we allow people to come out of their homes. Well, we jump people up to the front. We allow people to come out of their homes as the analogy here, which allow people to come out of their homes
Starting point is 01:29:23 and, you know, if the fat die, the fat dies for their own life choices. And if the people with lung cancer die, and I don't, well, that's because they smoked and I didn't, and that's fine, they deserve it. But as I hope you can see from our previous discussion here, like, again, by kind of reductio ad absurdum, it's like, okay, well, let's take that line of logic and apply it to all medical procedures across the board. Not just the obvious ones, but the more subtle ones. Not just the ones that involve smoking or becoming obese, but ones that involve sitting
Starting point is 01:29:53 down for too much per day or participating in too many extreme sports. So this kind of thing, it doesn't seem to be as intuitively obvious. And so I think, look, I get it. I get why you're saying that. I get why you're saying that, I get why you're frustrating. It is frustrating. But this is not an appropriate response to the situation. It's not appropriate to then say, well, sure, okay, let's just like let the fat die or let's let the lung cancer patients who got their virus smoking keep their lung cancer. I just don't think this is the way that the medicine should be administered.
Starting point is 01:30:24 Of course, you might disagree with me here, but this is the nature of an ethical dilemma. It's not supposed to be an ethical monologue, but a dilemma indeed, and I'm interested to hear we abuse, I think, as well. You got any left? You got one more? The only thing that I think should get a mention, perhaps, here, is that we've kind of been assuming, we've essentially been assuming atheism here. We've been assuming ethical theories that don't involve the Numinus. But there are, you know, I'm trying to the Numinus or the, I'm trying to kind of find a word that pins down this idea of God
Starting point is 01:30:57 that doesn't pin us to a particular God or religion. So something about there being a supernatural element. Yeah, it's a wonderful word that's often used to describe this kind of untouchable supernatural element, like when you listen to music and you kind of feel something a bit above you, a bit above material, maybe that's the luminous. If you see it, if you're looking at a piece of art and you begin to see the artist through the artwork, maybe that's the numinous. Rudolph Otto, the famous theologian, could find no better word to describe the religious experience than the Latin formulation of Mysterium, Tremendum, Ed Fassanand, a mystery that at once makes you tremble but is also fascinating. It kind of pushes you away, it tremble, but is also fascinating. It kind of pushes you away.
Starting point is 01:31:46 It's scary, but it also pulls you in at the same time. It's actually famously difficult to pin down the nature of religious experience. But the most famous, just to give it some, you know, air time, because there are a lot of religious people in the world after all, the biggest ethical dilemma that somebody who thinks that ethics is grounded in God has is known as the euthefro dilemma, comes from Plato's euthefro, a dialogue in which the question is basically asked, we'll reword it slightly, but we say, okay, things are good and they're somehow grounded in God. The things that are good, are they good because God commands them? Or does God command them because they're good? Well, let's consider either option. Let's say God commands them because they're good.
Starting point is 01:32:35 This means that there's a standard of good that exists outside of God, which God is subject to. And so if God is supposed to be the grounding of ethics, that's not going to work. Okay, let's say that they're good because God commands them. Fine. Then what if God commanded that rape was ethical? What if God commanded you to shoot up a school? What if God commands you to sacrifice your own child? Well, we know what Christianity teaches and indeed Islam that one should do, because we have the story of Abraham who was told sacrifice his child, and of course his hand is steady at the last moment. But the fact that he was willing to do the sacrifice is something that's celebrated. In fact, the Islamic Festival of Edal Arda is celebrating that particular event. Abraham's willingness to sacrifice his own child
Starting point is 01:33:25 because God told him to. But this seems like a pretty difficult pill to swallow. Okay, things are good because God commands them. You're committed to the view that if God decided to command that rape was ethical, it would just become ethical. And so it becomes trivially true that no matter what God commanded you to do, you ought to do it.
Starting point is 01:33:43 There's nothing wrong with this, per se. There's nothing inconsistent or contradictory about it, but it seems troubling. And so the Thiford d'Alamah has been kind of discussed to no end. And there are some people who think it's a false dichotomy. Some people say, well, God just is good, which I'm never quite sure exactly how that solves the problem, but this is a dilemma that needs to be discussed as well, because even as a theist, even thinking that there's an all-powerful God, you might be troubled at the fact that the objectivity that you seek to find in religious morality collapses into something like divine subjectivity. It's just kind of still just an agent deciding on their own whim essentially,
Starting point is 01:34:28 what's good and what's bad. You've just moved the subjectivity from within yourself to within this God. And if you want to retain some objectivity that even that God is bound by, well then God isn't the ground of ethics in the first place. As I say, this is an ancient dilemma and it's not seriously troubling to most people who thought about it deeply, who are religious because
Starting point is 01:34:48 they've got an answer to it. But it's one of the first dilemmas that I think needs to be thought about if there are any religious listeners, or indeed if there are just people who are not religious but interested in dilemmas for religious ethics too. So there are dilemmas across the board. Alex Okana, ladies and gentlemen, if people want to follow the stuff that you do, if they've enjoyed what you've been talking about today, where should they go? They should go to patreon.com for slash Cosmic Skeptic and give me all their money. No, they should find me a Cosmic Skeptic across the board. It's quite a unique name, so YouTube, Twitter, Facebook. Why should they start on YouTube? What video on
Starting point is 01:35:22 YouTube would you get them to start with? Depends what they're interested in. I do a lot of vegan advocacy. People are interested in that. My favorite video on that that I've ever made is the worst of cognitive dissonance. But apart from that, I'm quite proud of some of the podcasts that I have that I put together for philosophy of religion. I did a podcast with William Lane Craig where we discussed the colarm cosm argument. Alex, my favorite. Alex, that argument goes into the list of things so stupid that I couldn't even believe that it was said by somebody.
Starting point is 01:35:51 Yeah, that's a real genuine quote. I was quite proud to defend the argument that he said was part of his list of arguments so bad he couldn't have made them up. And I think that I actually still defended it. I think he was being a little bit unfair there, but that was a wonderful that was one of my favorite podcasts. So yeah, it turns out you're interested in, but I certainly recommend starting on YouTube.
Starting point is 01:36:12 I appreciate you. Thank you for coming to see me in Austin, and I'm looking forward to the next one already. Of course, offends.
