The Joe Walker Podcast - Peter Singer — Moral Truths and Moral Secrets

Episode Date: September 19, 2023

Peter Singer is the Ira W. DeCamp Professor of Bioethics at Princeton University. He is widely regarded as the world's most influential living philosopher. Full transcript available at: jnwpod.com. Episode recorded on 26 April 2023. See omnystudio.com/listener for privacy information.

Transcript
Starting point is 00:00:00 Hello and welcome back to the show. I'm Joe Walker and my guest today is Peter Singer. Peter is widely regarded as the most influential philosopher alive, and he is the Ira W. DeCamp Professor of Bioethics at Princeton University, a position he's held since 1999, though it was recently announced that he'll be retiring from Princeton at the end of 2023. This marks Peter's third appearance on the show, and I traveled down to Melbourne to record the conversation with Peter in person. We're celebrating my 150th episode and more importantly, the publication of Peter's book, Animal Liberation Now, a significantly revised and updated edition of his 1975 classic, Animal Liberation. Now, before we dive into the conversation, let me say that the first 30 minutes
Starting point is 00:01:06 delve into metaethics, arguably the thorniest subfield of moral philosophy. But I implore you to listen through, even if you find it too dry for your tastes, for three reasons. First, the rest of the conversation is far from dry. Both Peter and I had a lot of fun and we cover a large variety of topics from AI and long-termism to esoteric morality and Australian utilitarianism. Second, some of those later topics refer back to concepts discussed in the first 30 minutes. And third, metaethics can be genuinely fascinating. Here's my 60 second elevator pitch. So you can divide moral philosophy up into three broad subfields. First, there's normative ethics, that is the study of frameworks for determining the rightness or wrongness of actions, with the principal
Starting point is 00:01:57 frameworks being virtue ethics, deontology, and consequentialism. Second, applied ethics, the application of those ethical theories to specific issues like abortion. And finally, sitting underneath those two subfields is metaethics, the study of the nature of ethical theory itself. One big question within metaethics is, assuming there can be moral truths, what makes them true? Is morality just a subjective thing, a matter of opinion? Or is morality a true fact about the universe, something we can, through the application of reason, discover, if not directly observe, something akin to a mathematical truth like 2 plus 2 equals 4? And does the answer to this question even matter? Well, Derek Parfit, arguably the
Starting point is 00:02:46 greatest philosopher of the late 20th century, and a friend of Peter Singer's, thought it did. He spent the second half of his life, until his death in 2017, obsessed with the issue. And Parfit's obsession is in fact how we begin this conversation. Enjoy. Peter Singer, welcome back to the show. Thank you. It's great to be with you again. It's nice to see you again. I'd like to structure our conversation by starting with meta-ethics, then move to normative ethics, and then finally talk about social movements and some other specific issues. So I want to start with whether there are moral truths. And I've been reading the new Derek Parfit biography by your friend,
Starting point is 00:03:30 David Edmonds. Yes. And towards the end of the book, he describes a thought that kind of seizes Parfit. And I'll quote from the book. The thought was this, everything Parfit had written to date, every philosophical argument he had ever made, every conclusion he had ever reached was pointless, worthless, and illusory, unless moral reasoning could be moored to solid ground. The solid ground had to be moral objectivity. If morality was not objective, then it was a waste of time debating it. If morality was not objective, there was no reason to act in one way rather than another. He went further.
Starting point is 00:04:09 If morality was not objective, life was meaningless. His own life was meaningless and every human and animal life was meaningless. End quote. So my first question is, must anti-realism, put simply the view there are no objective moral truths, entail nihilism? I don't think that moral relativity necessarily entails nihilism, as that term nihilism is usually understood, and as Parfit was intending it in that passage you read, I think there's, if you like, there's a strong sense in which everything might be meaningless and there's a weaker sense. So I think what Parfitt was getting at there was the idea that if there is no objective truth, then you can't say that it was good,
Starting point is 00:05:06 sort of good period, good in all things considered, that something happened or didn't happen. And in that sense, Parfit is saying you can't say that it was good that I made this contribution to philosophy or anything else that anyone does. And if the contribution to philosophy seemed a little bit esoteric, you also can't say it was good that Hitler was defeated and that the Nazis and their descendants are not ruling the world today.
Starting point is 00:05:37 So there's a sense in which that is true, that you need to think that there is an objective truth to really say in the full sense that's a better universe without the Nazis ruling this planet than it would have been if they had ruled it. But nihilism as popularly understood basically would imply that it doesn't matter what anyone does, it doesn't matter what you do, it doesn't matter in any sense.
Starting point is 00:06:13 And there clearly are senses in which it does matter. So, for example, from our own personal perspectives, we don't want to have miserable lives in which, let's say, you know, we're being tortured by ruthless, brutal Nazis or someone like that. So that matters to us. And also, we may care about other people. We may feel sympathy with them. And we may hate the fact that we know that somewhere in the world, lots of people are being tortured. So in that sense, you could say you don't need to be an
Starting point is 00:06:52 objectivist about morality to have those feelings. And you could say, I care about these people, so it's not meaningless to me. And I could bring you into that circle of concern and say, surely you as a benevolent person, you care about people too, so it matters to us, not just to me. And we might say, you know, in general it matters to most of the people that we know or to decent people in the world and, you know, we care about that. So I think that's the sense in which you could avoid nihilism
Starting point is 00:07:26 and say things matter even though there's no objective truth. So that's why I think Parfit is taking a very strong sense of mattering and saying in that strong sense they don't matter if there's no objective truth. But many of us would say things matter in a somewhat weaker sense. They matter for me, for us, for some group, and therefore I'm not a nihilist. I see. So do you personally subscribe to the strong sense?
Starting point is 00:07:58 I actually do think that there are objective truths. So I'm prepared to acknowledge that both senses exist, and I think that Parfit has a point about the stronger sense. But I think he stated it in a very dramatic fashion in the passage that you read. And he believed it, you know, he believed it very deeply in that sense. But he was seeing things, and in a way, this is something about Parfitt, he was seeing things from this universal objective point of view. That was the way he viewed things. And that's why he said, you know, that statement in there, we don't have reasons for doing something unless there are objective truths. Now,
Starting point is 00:08:36 that goes completely against the sense of having reasons for doing something that we associate with David Hume, the 18th century Scottish philosopher, and that, in fact, economists today use all the time because they see reasons as being instrumental. If you want something, you know, you want oranges, so you have a reason to go to the supermarket which has oranges and buy them. And, you know, for Parfitt we'd say, well, is it good for you to get oranges, you know?
Starting point is 00:09:11 Are you going to get pleasure from oranges? Maybe pleasure is one of the things that are objectively good, so then you have a reason for it. But if it's just the fact that you happen to want an orange, that doesn't give you a reason for actually having oranges. So Parfitt is taking this, as I say, this universal side of the division between people who are objectivists about reasons for action. Kant was one. Henry Sedgwick, whom Parfitt greatly admired, the 19th century utilitarian philosopher was another, and against
Starting point is 00:09:45 people like David Hume and a long tradition of other philosophers who follow Hume, John Mackey, who was an Australian philosopher who I knew when he was at Oxford. A.J. Eyre would be another who is very much in Humean tradition. So, you know, that's the issue that you would need to discuss if you want to really say is parfit right when he says that if there's no objective truths about morality we don't have any reasons for doing anything right so let me come to that so bernard williams contended that external reasons don't exist what was parfit's response to that? And do you agree with Parfit's response? And perhaps you could explain what internal and external reasons are. Right. So Bernard Williams is another I could have mentioned alongside Mackie and AJ Eyre in
Starting point is 00:10:38 that Humean tradition. And Williams says that we have reasons because we have projects. He used this term project for things that we want to do, basically, and they may be simple things. I just gave the example of wanting to eat an orange. Okay, so that could be in some sense a project, but we also may have life projects like, you know, I want to write a book, I want to be a big wave surfer, whatever those projects might be.
Starting point is 00:11:09 They're things that we aim at, that we choose, and they give us reasons in one sense, and this is what Williams would emphasise. They give us reasons to, you know, start writing my book, think about what I'm going to write on or, you know, practice on the smaller waves so I can work up to the bigger waves, whatever it is. I have reasons for doing those because of my aims and projects. And those are internal reasons. Internal to me, they don't give you a reason to do those things if you don't want to write a book or don't want to surf. Whereas external
Starting point is 00:11:45 reasons are reasons that exist for anyone. So I would say you have a reason to reduce suffering, whether it's somebody that you love and care for or whether it's the suffering of a complete stranger or the suffering of a non-human animal. We all have reasons to reduce suffering because suffering of a complete stranger or the suffering of a non-human animal we all have reasons to reduce suffering because suffering is a bad thing the world is a better place if there's less suffering other things being equal um and so that's an external reason and obviously there are a wide range of moral views which would include other things that we have reasons to do or not to do. So Williams was saying all reasons are internal, and Parfit, who greatly admired Williams
Starting point is 00:12:34 and recognised that he was a highly intelligent person and somebody who was very good at making philosophical arguments, he was really perplexed and baffled that Williams didn't see the external concept of a reason, that he couldn't understand that there are external reasons, there are facts about the world which whether you have this project or that project, whether you care about this or that, they give you reasons for action. And Parfitt would say this, you know, he would say,
Starting point is 00:13:10 I don't understand how Williams cannot see this. But that was the way it was. He never got him to see it. Why was he so obsessed with getting Williams in particular to see it, like to the point of literally cry about it. I think what underlay that was the sense, which is very present in On What Matters, that if people who are thoughtful and reflective and intelligent
Starting point is 00:13:42 and knowledgeable about the subject, think about basic questions in ethics and disagree fundamentally about those basic questions of ethics. That casts into doubt the claim that there are truths on these basic questions. And as your opening quote indicated, Parfit was very concerned about that there shouldn't be doubt about the idea that there are basic truths in ethics.
Starting point is 00:14:16 And, you know, the idea of On What Matters was to show that people don't disagree as often or as fundamentally as many people think. The original working title for On What Matters was Climbing the Mountain. And the claim behind that title was that philosophers from three major and apparently disagreeing theories about what we ought to do ethically, were actually like three mountain climbers climbing a mountain from different sides who then meet at the summit and realise they've climbed the same mountain.
Starting point is 00:14:56 So that's what Parvath was trying to do, to show that all of these different philosophers disagreeing about things are really climbing the same mountain in the sense that they end up with a theory that is compatible with the other theories. So I want to ask you about what Henry Sidgwick called the profoundest problem in ethics. And that problem is the dualism of practical reason. So put very crudely, the idea that rational self-interest and utilitarian impartiality are both supported by reason but can be intention so why is the dualism of practical
Starting point is 00:15:34 reason the profoundest problem and how do you differ from parfit on this question let's let me say why sidrick thought it was the profoundest problem in ethics to start with. Sidgwick thought that the way to find truth in ethics is to look for self-evident axioms, as he called them, basic truths. And he found some axioms, an axiom of prudence, an axiom of justice, and an axiom of universal benevolence. And he thought that these were self-evident, and on the axiom of universal benevolence,
Starting point is 00:16:19 he thought that that is a grounding for utilitarianism, that wanting the best for everyone, and he actually did include non-human animals in that everyone, wanting them to have the best possible lives, the greatest possible surplus of happiness over misery, that when you reflect on these basic truths, you can see that they are self-evident and that utilitarianism can be derived from them.
Starting point is 00:16:50 And Sidgwick was trying to show that you can put ethics on a rational basis. That was the aim of his masterpiece, The Methods of Ethics. But he couldn't reject the idea that egoism, the idea that what I should do is what is in my own interests, not the universal interests, but my own interests, that that also has some kind of self-evidence. He didn't actually say it was a self-evident axiom, but he found it hard to deny that I have reasons for doing what is in my interest, what will make me happier or avoid suffering for me, that I have reasons to do that, which are different from the reasons that I have
Starting point is 00:17:38 to increase the happiness and reduce the misery of strangers, that my interests, because they're mine, have some special weight for me. So because he couldn't really reject that view, or he felt he couldn't reject that view, he ends up with this dualism of practical reason. So practical reason doesn't just tell us to do one thing, it tells us to do two different things. It tells us to promote the universal good, the universal interests of everyone, and it tells me to promote my good. And that fact that he couldn't reconcile the two dismayed him in the first edition of The Methods of Ethics. He has a very dramatic ending saying that this shows that the attempt to put the cosmos of duty on a rational basis has failed
Starting point is 00:18:33 and I think he talks about despair and so on. By the time he got to the seventh edition of the book, he'd somewhat calmed down and the language was no longer quite as dramatic, but he still accepted that this meant that reason doesn't really give us clear directions as to how we ought to live our life. And Parfit, to some extent, accepted that conclusion and did think that this was a profound problem, although clearly he was on the side of there being objective reasons,
Starting point is 00:19:08 but the objective reasons are not necessarily just the universal reasons. So he thought, for example, that the most rational thing to do maybe normally would be to promote the greatest good of all, but suppose that you could produce just slightly more good in a stranger at some harm to yourself, more good in the stranger than the harm to you would outweigh, but it was a significant harm to you. So then he said, well, maybe it's not irrational to prefer your own harm, sorry, to prefer to avoid your own harm in those circumstances. So you don't always have an obligation to maximize good impartially
Starting point is 00:19:54 considered on Parfit's view. Now, what is the evolutionary debunking argument and how does Parfit avoid it? So let's talk about evolutionary debunking arguments in a simple case and then we'll get on to its relevance to this particular question of the dualism practical reason. So, for example, suppose this is an example that comes from Jonathan Haidt. Suppose that there's an adult brother and sister and they're staying somewhere by themselves in a cabin
Starting point is 00:20:38 in a remote country place and they think that it would be interesting to have sex. So they decide to have sex, they enjoy it. But they decide that they won't do it again. And there are no further consequences from it. Oh, and by the way, in case you were worried that the sister would get pregnant, she was already on the pill, but just to be safe, the brother used a condom anyway, so there was no chance of any conceptions taking place. Now, Jonathan Haidt put this example to a number of students, and the general reaction
Starting point is 00:21:15 was that this was wrong. But when you asked them why it was wrong, they couldn't give any clear answer, and often they gave answers which actually were in conflict with the description of the example. You know, like, well, it's wrong because, you know, if siblings have sex then the children might be disabled in some way. But they were told that there was no offspring and there was no chance of any offspring.
Starting point is 00:21:43 So Haidt refers to this as moral dumbfounding, as we have these moral intuitions that we can't really explain. And there might be an evolutionary explanation for this. The evolutionary explanation might be that for all of our past evolutionary history, even before we were humans perhaps, if siblings did have sex, then they would conceive and then there could be, you know, there's a higher probability of abnormalities and that was therefore something that they developed an inhibition against because
Starting point is 00:22:21 that helped them to survive and have surviving offspring. So, Haidt's explanation of this moral dumbfounding in this particular case is it's a biologically evolved reaction, negative reaction, a yuck reaction, if you like. So, it's not really that they're thinking about the rights and wrongs of what the brother and sister did in this case. It's rather that we are biologically programmed to say, no, wrong, can't do that. And I think that that's a plausible story for what's going on in that particular example. But it's clear that there could be many other
Starting point is 00:23:07 examples where you have something similar. And if we apply this now to the dualism of practical reason, then what we have is on the one hand, a response, the axiom of universal benevolence, a response that clearly would not be likely to have been selected by evolution, because to help unrelated strangers, even at some disadvantage to yourself, where there's a greater benefit to the unrelated strangers, is not a tray that is likely to lead to your improved survival or the improved survival of your offspring. It's rather going to benefit these unrelated strangers who therefore are more likely to survive
Starting point is 00:23:52 and whose offspring are more likely to survive. So that doesn't seem like it would have been selected for by evolution, which suggests that maybe it is a judgment of our reasoning capacities in some way. We are seeing something through reason. Now, if we compare that with the egoistic judgment that I have special reasons to prefer my own interests to those of strangers, it's more plausible to think that that would have been selected by evolution
Starting point is 00:24:22 because, after all, that does give you preference for yourself and your offspring if you love your offspring and care for them. So if we have these two conflicting judgments, then maybe we can choose which one by saying, just as in the case of adult sibling incest, we debunk the intuition by saying, well, that's just something that evolved in our past and that doesn't really give us reasons for thinking the same thing today. Maybe we can say that also about the intuition behind egoism, but not about the intuition behind universal benevolence,
Starting point is 00:25:07 which therefore gives us a reason, not a conclusive or overriding reason, for thinking that it's the axiom of universal benevolence that is the one that is most supported by reason. I see. So you look for the reasons that may have evolved or may have been the reasons that may have been selected upon and those kind of are eliminated and then what's left, you say, it's likely that that must be able to be supported by reason because it's not something that could have evolved. Yes, that's right. I suppose you could say it puts the onus of proof on the person who wants to maintain that the judgment that is likely
Starting point is 00:25:56 to have evolved is also a judgment that reason supports independently of its possible evolutionary history. Whereas the person who's supporting a judgment that doesn't have a plausible evolutionary history doesn't have that burden of proof. So you have a paper with Katarzyna Dilizari-Radik in the collection of essays, Does Anything Really Matter?, where you claimratic in the collection of essays does anything really matter where you claim to resolve the problem of dualism by sort of using that argument that you've just outlined i guess i'd just like to test one objection on you and get your reaction i read this paper yesterday and this objection is highly sketchy, but this is a podcast, so.
Starting point is 00:26:46 Right, okay. I should mention that we, that paper is somewhat related to a book that Katarzyna Lazari-Rudik and I wrote called The Point of View of the Universe, which is a phrase from Sidgwick and which is defending sort of Sidgwickian ethics here. Yeah, yeah. Okay, so I'd like to basically take up the challenge of providing an evolutionary explanation for impartiality okay so evolution by natural
Starting point is 00:27:13 selection applies not only to genes as you know it applies to any process that combines variation selection and replication and culture is such a process and cultural evolution can occur at different scales depending on the balance of selection pressures it doesn't just have to occur on the level of tribes or nations we could actually think of humanity as a superorganism to the extent that selection pressures apply at the level of the whole planet maybe some examples of those might be like existential risks, nuclear war, climate change, things that force humans to cooperate globally. And in my view, universal altruism is a cultural innovation that's spread because it
Starting point is 00:27:59 helps us cooperate at a global level. So universal altruism is a cultural value, albeit one operating at a higher level than things we normally consider cultural values. And indeed, for me, this is kind of strongly implied in Josh Green's book, Moral Tribes, where he argues that utilitarianism provides like a common currency for adjudicating and negotiating between parochial common sense moralities. So impartiality doesn't escape the evolutionary debunking argument and we need to reject egoism on some other grounds. Okay. Thanks for the objection.
Starting point is 00:28:43 Slap it down. So I think the weakness of the objection is that evolution is only going to occur if beings, so at the level that we're talking about, as you said, there's evolution of gene level and you can argue about whether there's evolution at the level of individual organisms and you can argue about whether there's evolution at the level of individual organisms and you can argue about whether it's evolution at the level of larger groups and how large those groups can be and then you're going to the level of all of humanity. But the evolution is only going to happen if these units
Starting point is 00:29:24 at whatever level you're talking about get selected for and against and you know basically survive or don't survive and that clearly happens with genes all the time it happens with individuals it happens with groups but somewhat less frequently because if you're talking about depending on what groups you're talking about, if you're talking about ethnic groups, they may live for hundreds or thousands of years. And if you're talking about the level of humanity, it hasn't happened, right?
Starting point is 00:30:01 Humanity has survived. We're still here. They've become extinct. Yeah. happened right humanity well but we survived we're still here they become extinct yeah so um what is what is being what where are the variants that disappeared here well i i guess like other possible civilizations on on different planets um i mean we haven't we haven't blown ourselves up and and perhaps a reason for that is that we have this sort of cultural innovation known as impartiality known as utilitarianism more broadly um it's a very short time in which we've actually had the ability to blow up blow ourselves up
Starting point is 00:30:38 right i suppose basically since 19 well not even in 1945 because although we had atomic bombs they weren't powerful enough to blow. Maybe sometime in the 60s, there were enough nuclear weapons around for us to become extinct. But it's hard to be confident that we're not going to blow each other up, actually. And if you look at what goes on much more frequently
Starting point is 00:31:05 at the level of conflict, you see non-impartial reasoning. I mean, it's not impartial reasoning that led Putin to invade Ukraine, and it's debatable whether it's impartial reasoning that leads the West to defend Ukraine. I would argue that it is somewhat more impartial reasoning that leads the West to defend Ukraine. I would argue that it is somewhat more impartial reasoning to uphold the rule of law in terms of respecting national sovereignty and territorial boundaries and changing them only by negotiation and peaceful means.
Starting point is 00:31:39 But there's an awful lot of conflict going on here. We're just seeing it right now as we're talking in Sudan, for example, and there's no impartiality going on there, lots of other conflict. So I'm not persuaded that this is actually the idea of impartial reasoning has actually taken hold throughout humanity. It seems to me extremely tenuous. I wish it wasn't so, but I don't see it as actually doing the kind of work as yet that would be necessary for us to say this is something
Starting point is 00:32:15 that has evolved and helped us survive. I don't disagree that cultural selection happens on multiple levels, but I think it's no coincidence that, like, if you look at all of human history, these kind of ideas of impartiality and utilitarianism have kind of coincided with the era of globalization and increased interconnectedness. I agree with that. I don't think that's a coincidence. I think that may have something to do with a greater understanding of other peoples and seeing them as more like ourselves, and I think that's a very good thing. But to say that therefore the impartial idea is as debunkable
Starting point is 00:33:07 as the egoistic idea still seems to me to be putting two very different things in the same footing. Okay, fair enough. I don't necessarily believe that argument myself either, but it's fun to play devil's advocate. Absolutely. It's a good try and it's certainly something that needs to be thought about and answered. So let's move to normative ethics. And I have a bunch of different questions,
Starting point is 00:33:33 but I also want to talk about esoteric morality. So before we get to that, a few kind of miscellaneous questions. From a consequentialist perspective should Derek Parfitt have lowered his standards for good work and published more and more often so I'm not sure that Derek Parfitt was actually capable of publishing work that he didn't think was as close to perfect as he could possibly make it. You know, he was notorious for actually not submitting work to publishers. And in his early life, he published extremely little. He published one very well-known, famous article on personal identity.
Starting point is 00:34:23 But he probably would never have published Reasons and Persons had he not been told by All Souls College, where he was a fellow and which was the ideal environment for him because you didn't have to do any teaching. You could just spend all your time doing your research and writing. And he had one seven-year fellowship at All Souls that was then renewed. But he was told towards the end of that second seven-year fellowship at All Souls. That was then renewed. But he was told towards the end of that second seven-year fellowship that he would not be made a permanent fellow unless he published something more substantial than he had. So that was the pressure that led him to write Reasons and Persons.
Starting point is 00:35:00 And apparently he was sort of constantly going back and pulling it out of the press when he'd already submitted it to Oxford University Press saying, I need to change this, I need to change that, this is wrong. So he was kind of a somewhat obsessive personality and therefore I don't think it was possible for him to say, I'll do more good by writing more things that are you know will do good uh but you know would he have actually had better consequences if he had you know maybe i don't know um i think he saw that there were
Starting point is 00:35:36 other philosophers writing things that were having an effect who were being influenced by him in various ways and perhaps i was one of them. Jonathan Glover would be another one, writing books that, if you like, were at a somewhat more popular level. They were still philosophical works, but they were not written only for other philosophers, as I think Paffett's works generally were, although it's great that now other people are reading them. Yeah, so I think he saw himself as contributing to a larger field of discourse, philosophy in general, and as making a distinctive
Starting point is 00:36:15 contribution to that, which he very certainly did, and raising new questions and problems, and was aware that there would be other philosophers who were not up to his level in terms of the original powerful arguments on new topics that he was producing but who was still going to be able to do something that was that he might have done if he wanted to but that he was perhaps better suited for doing what he was doing, which was to try to produce the closest to perfection in philosophy that he could.
Starting point is 00:36:53 Yeah. I guess it's just an interesting question more broadly, like empirically, when is perfectionism the right strategy? Yeah, it certainly is and and um it's definitely not always the right strategy and and very often you know given you're trying to influence human beings who are certainly not perfect um then it's often important that you've shaped what you're doing to suit them and to lead to the best consequences that they can bring out rather than to produce perfection. Are there any aspects of Bernard Williams' ethical approach that you find particularly valuable or useful?
Starting point is 00:37:38 And what do you think his best critique of utilitarianism was? So Williams did a lot of work on different topics, but clearly I have studied most his critique of utilitarianism, in particular in the little book Utilitarianism For and Against, where he was responding to the Australian philosopher J.J.C. Smart. And, you know, that's a good work to give to students because it's fairly short, it's brief. Smart is a very plain writer, very straightforward,
Starting point is 00:38:12 defensive utilitarianism. And then you have Bernard Williams, whose critique often uses interesting examples, and I think that's one of the best things that Williams did. So particularly in that he has two quite famous examples where he's arguing against the idea that it's pretty straightforward that the right thing to do is the thing that will have the best consequences, produce the highest levels of welfare. One of them is called Jim and the Indians. This is an example of Jim is a botanist who is looking for rare species in somewhere you imagine in the Amazon. And he
Starting point is 00:38:53 then walks out into a village where there's a clearing and where he sees the 20 villagers lined up against the wall and there are men with rifles apparently about to shoot them all. And he walks into this clearing. The officer in charge says, who are you? He explains who he is. So it happens that the officer is an admirer of botany and knows who he is. He says, oh, such a famous botanist come to our region.
Starting point is 00:39:24 You're very welcome. And the botanist says, oh, such a famous botanist, come to our region. You're very welcome. And the botanist says, well, what's going on here? Jim says that. And the man says, well, the officer says, we're about to shoot these people who are subversives or whatever, done something wrong, and we're going to shoot all 20 of them. But in honour of your visit to this area, if you would like to take up this gun and just shoot one of them, we'll let the other 19 go.
Starting point is 00:39:51 So for a utilitarian, this is a clear case, right? We assume that there's no possibility of doing anything else. You can't use the rifle to shoot the officer or all of them because there's lots of other men with who will then shoot you maybe immediately and everybody will get get shot including the villagers so the only thing you can do is to shoot one person and save 19 or to say no i cannot stain my hands by shooting a man who may well be innocent um so go ahead and shoot all 20 um so as i say the utilitarian will say you ought to take this offer and and save 19 lives William says well not so fast you know there is still something wrong with participating in this
Starting point is 00:40:33 terrible act that is going on you'll be complicit in some way so Williams doesn't say that you shouldn't shoot one but he does say it's not as simple as the utilitarian says. And then his other example is about George and George is a chemist or biochemist, I think, well, anyway, chemist, let's say, who's looking for employment, needs a job, sees a job advertised at a factory that is making chemical weapons or a research lab that is making chemical weapons. George is opposed to chemical weapons, but he learns that if he doesn't take the job, then somebody who's very zealous about actually promoting bigger and more deadly chemical
Starting point is 00:41:26 weapons will take the job. So again, the utilitarian would say, well, it's pretty tough for George. He's going to have to do this work he doesn't like, but he can slow down the process of making chemical weapons. He can pretend to be designing or researching new weapons without doing very much and that will prevent the great harm that would come from more deadly chemical weapons being developed which will certainly happen if he doesn't take the job and this other person does and that's actually we talked about Bernard Williams in his sense of projects that one is is clearly a project if you like because George's project has nothing to do with producing chemical weapons, just the opposite.
Starting point is 00:42:09 He'd, let's say, I don't know, rather produce fertilisers that you can grow better crops with or who knows, do something good anyway. So William seems to think that George actually shouldn't take the job in this case and that it would be, you know, he has more reasons against taking it than taking it. Whereas for the utilitarian, George has most reasons for taking the job. And those are challenging examples that, you know,
Starting point is 00:42:37 students like to argue about. So I think they're good points that Williams has introduced into that debate. But they don't keep you up at night as a utilitarian? No, they don't, certainly not anymore. When did that book come out? It's 73, I think. So probably already then I was sufficiently committed utilitarian
Starting point is 00:42:58 to not be kept up at night. Maybe if I'd come across them at an earlier stage when I was less confident about utilitarianism, they would have troubled me more. If Everett's interpretation of quantum mechanics is true, and each time the universe is faced with a quantum choice, it splits into different worlds, how do you aggregate the branches under a utilitarian calculus?
Starting point is 00:43:23 Have you thought about the implications of the many worlds theory for utilitarianism and ethics in general? I have to admit I have not. It's, I don't know, you know, how one would know whether that hypothesis is true. And I suppose, you know, if you ask me now, just off the top of my head, somehow you have to know what's going on in all of these worlds and whether the consequences of choices that you make are better or worse in all of these worlds. And I don't know how you could possibly do that since you're only going
Starting point is 00:43:56 to be in one of them, right? So, no, I think the answer is I don't know what utilitarianism or really any ethical view would tell you to do in those circumstances. Fair enough. Okay, some questions about esoteric morality. So you have this really interesting paper with Dilazari Raddick called Secrecy and Consequentialism, a Defense of Esoteric Morality, which actually Brian Kaplan brought to my attention
Starting point is 00:44:23 after you had a recent debate with him. Right. But you're not promoting our book, The Point of View of the Universe, because again, that's a paper that we developed into a chapter in that book. Yeah, of course. So those who want to read all of these interesting things you talked about, please order The Point of View of the Universe. Yeah, yeah, exactly.
Starting point is 00:44:41 That came out in 2010, was it? I think a little later than that. Okay. 2014, maybe. That came out in 2010, was it? I think a little later than that. Okay. 2014 maybe. Yeah. Great book for anyone interested in these issues. Thank you. Could you briefly outline the broad argument of the paper
Starting point is 00:44:56 and then I'll ask some specific questions? Sure. Sure, and this also takes its lead from something that Sidgwick wrote in The Methods of Ethics. And the question here is to what extent should a utilitarian follow generally accepted moral rules? And that's a large debate that's been going on for some time between utilitarians and opponents who say that there are moral rules that we ought to keep.
Starting point is 00:45:37 And utilitarians like Sidgwick want to say, no, you shouldn't stick to a moral rule no matter what the circumstances. There could be cases where you should break even generally accepted moral rules. But moral rules do in general tend to lead us to make sound decisions. So utilitarians don't think that in absolutely every decision you make, you should always try and calculate the consequences from scratch. They would say, let's say, I don't know, you're walking down the street near your home and a stranger comes up to you and says, can you tell me where the nearest train station is?
Starting point is 00:46:26 You know this very well. So you should tell the stranger where the nearest train station is. That will normally be a good thing to do. You could, of course, lie and you could say the train station is that-a-way when you know that's the opposite direction. But why would you do that? Generally speaking, helping strangers who ask for information does good. So you don't have to do those, try and do those calculations. But there are some circumstances in which you might produce better consequences by not following the rule. The problem with saying to a utilitarian,
Starting point is 00:47:07 don't follow the rule in these circumstances, is that it might weaken trust in the rule or it might weaken respect for the rule. So if other people know that utilitarians are going around breaking rules all the time, or people do, then maybe that will lead to a worse state overall because people will break rules when they really shouldn't be breaking rules. They'll break rules for their own convenience or because of some irrelevant emotion that
Starting point is 00:47:40 they have at the time, and that won't be a good thing. So, Sidgwick then raises the question, so what should utilitarians do in circumstances where you could do more good if you break the rule, except for the fact that you'll weaken support for the rule and that will be a larger bad consequence than the good consequence that you'd achieve by breaking the rule. And Sidgwick then says, well, sometimes it may be the case that you can only do good if you can keep what you're doing secret. So this is what's known as esoteric morality. The idea that there is, sometimes you should do something
Starting point is 00:48:33 and the fact that it's the right thing to do will be true if you can keep it secret, but if you can't keep it secret, it won't be the right thing to do. So that's essentially sense of keeping morality esoteric. And that's been a controversial doctrine for Sidgwick. And it's another point in which utilitarians and Sidgwick in particular were attacked by Bernard Williams, because Bernard Williams refers to this as government has morality.
Starting point is 00:49:06 What he means by that is government has in the heyday, let's say, of the British Empire, where the British colonized various peoples in other parts of the world. And you imagine them living in their nice white-painted Victorian-style government house building, making rules for the betterment of the natives, in inverted commas, of those people, and saying, well, of course, they're rather simple people. They don't really know, you know, what's the best thing to do. So we need to make some rules which apply to them, and we'll educate them or bring them up, if you like, indoctrinate them in believing that these are the right thing to do. But of course, for us sophisticated government bureaucrats, we will know that actually it's not always the right thing to do, and we will sometimes break those rules ourselves in the general good, where we wouldn't actually tell the local people
Starting point is 00:50:07 that we're breaking those rules because then they would not keep the rules that would be best if they do keep. So, you know, essentially Williams was saying this idea of esoteric morality divides people into the uneducated masses who have to be brought up with simple rules and the more powerful elite who think that the rules don't apply to them. And that's obviously an unpleasant way to view morality. In the article that you mentioned and then also
Starting point is 00:50:47 in the chapter, A Point of View of the Universe, Katarzyna and I defend Sidgwick and say that, of course, the whole attitude that Williams is talking about of the idea that our nation, white people about, of the idea that our nation, white people presumably, have the right to rule over others and are wiser than they and know more about their situation than they do, that is objectionable. But that's not an inherent part of esoteric morality.
Starting point is 00:51:21 There may be many circumstances in which you don't have those assumptions, but it's still the case that generally you ought to breach some rule where it would still be better if other people did not know about the breach of the rule and therefore the confidence in the rule was weakened. Thank you. So some specific questions. In the paper you consider the standard originally proposed in your famous paper, Famine, Affluence and Morality,
Starting point is 00:51:56 the standard that people should give everything they can spare to the global poor. But you and Katarzyna write, quote, perhaps advocating so demanding a standard will just make people cynical about morality as a whole. If that is what it takes to live ethically, they may say, let's just forget about ethics and just have fun. If, however, we were to promote the idea that living ethically involves donating, say, 10% of your income to the poor, we may get better results, end quote. So am I correct in thinking that the 10% recommendation is just a straight up example of esoteric morality? And because this is only audio and we're not doing video,
Starting point is 00:52:37 if you want to imply the opposite meaning of your verbal answer, just give me a wink. Okay, no, no, no winks. I don't need to give winks here um and because this has been a fairly sophisticated philosophical discussion i'm going to assume that the people who have listened to this point um and i'm relying on you not to put this up front as the very first thing in the program i'm assuming that people who have listened to this point can follow the idea that, yes, we may want to promote a standard in general that is a reasonably simple standard, that is one that's easy to remember, that also picks up various religious traditions about the tithe of 10% of your income donated to the poor,
Starting point is 00:53:25 and that encourages people to do that rather than produce a more demanding standard, which will, as in the quote you read, mean that fewer people actually follow it. And even though there are some people who then give significantly more than 10%, the total amount raised for people in extreme need is less than it would be if we'd promoted the 10% standard. is the case, then I do think that that's an example of esoteric morality that, yeah, we will say give 10% and many people will do that. But if people really want to inquire and think about this and challenge us and say, well, why 10%? I could give 20% and that would do more good. I could give 40% and that would do still more good. Then we'll be prepared to say, all right, so since you have
Starting point is 00:54:26 thought through this and not just accepted the 10% guideline, then we'll acknowledge to you that this guideline was done for general acceptance to produce the most good. But if you understand the situation and you are prepared to do more than 10%, great, then you should do more than 10%. Let me say, by the way, that I haven't myself, certainly not in the last few years, endorsed the 10% guideline. I did talk about it, I think, in one stage many years ago. But in the book The Life You Can Save, which your listeners can download free from the website
Starting point is 00:55:04 of thelifeyoucansave. from free from the website of the life you can save dot org at the back of the book i have a kind of a progressive table that's more like an income tax table that starts with something much lower than 10 for people who are really on fairly low incomes but still have a little bit more than they need and goes up to 33 and a third percent, a third of your income for people who are really earning a lot. And essentially, you know, even that 33 and a third, I think people who are very wealthy ought to be giving more than that. But, you know, that's what I'm doing is anyway a step towards
Starting point is 00:55:44 what I think people should really be doing, a step closer to that than just the 10% figure. So my next question is an empirical question. And I see a possible tension between your approach to giving to the global poor and your approach to the treatment of animals. So with respect to animals, one could argue that the meat boycott you called for in animal liberation was a very demanding standard. And maybe it was better to just encourage people into pescatarianism or something else to avoid the greater evil of
Starting point is 00:56:19 intensive farming. But instead, you went pretty hard in calling for a meat boycott. So, why does giving to the global poor fall into the more esoteric bucket? Because I can see plenty of reasons why you might actually get better outcomes by publicly and consistently calling for the more demanding standard. I can also think of historical examples where that has been successful. For example, maybe you could view the abolition of slavery as an example of where people kind of radically self-sacrificed over quite a short period of time it's interesting it's an interesting question because you know one difference between these is that with giving to the poor i mean there, there just is a continuum, right?
Starting point is 00:57:05 There's no reason why you should use 10% rather than 11% or 11% rather than 12%. It's a constant continuum. Whereas with slavery, to take that example, freeing the slaves is a demand that you can make and that has a clear, you know, ends the evil you're trying to combat, whereas reducing slavery, while it would do some good, still leaves this problem essentially as it was.
Starting point is 00:57:42 And the other thing you have to remember about the abolition of the slave trade or of slavery in general is that there was never universal support for slavery. And this is a difference from getting now to the animal issue. So, for example, when British ships were transporting slaves from Africa to the United States, slavery was not legal in Britain. And if somebody who was a slave had been a slave landed in Britain, they were free. And in the United States where the slaves were going, slavery was, of course, not universally accepted in the United States. It was accepted in the southern states, basically, and the northern states opposed it.
Starting point is 00:58:32 So I think the demand to end slavery was a demand that always had a good prospect of success. The demand to give enough money to end poverty is much more difficult and, as I said, there are endless degrees and in some sense there will no doubt always be people, some people who have less resources than others. With animals, it's a bit of, you know, it's a bit of the differences go both ways because, as I said, there is near, well, there's certainly
Starting point is 00:59:09 a very clear majority and even overwhelming majority accepting the consumption of animals, of meat, and that makes a particular problem. There is something like you could imagine, like the abolition of slavery, the ending of this problem. But because we're always going to be interacting with animals, there will always be questions of conflicts of our interests and their interests.
Starting point is 00:59:36 So it's hard to imagine that we're ever going to get to a completely, you know, situation that resembles the abolition of slavery, although we could certainly get a lot closer to it. Also, you raised the question of how demanding is it to ask people not to eat meat. At the time I wrote Animal Liberation, I had stopped eating meat, as had my wife. We made this a joint decision.
Starting point is 01:00:10 And we didn't find it particularly difficult, I have to say. Or in a way the main difficulty was that you had to keep explaining this to people and justifying what you're doing and some of your friends would look at you as if you'd become a crank. There were kind of those social difficulties. But in terms of having enjoyable meals, you know, cuisines that we love to cook and feeling good on a vegetarian diet, feeling perfectly healthy and zestful and all the rest of it,
Starting point is 01:00:40 there was no problem at all. So to me that's actually not so demanding an ask right maybe it's it's a more demanding ask than asking reasonably comfortably off people to give 10 of their income to the poor but it's not much more demanding than that and it's certainly less demanding than giving a larger sum but within a consequentialist framework, you might get better results just arguing for pescatarianism or something like that. Well, pescatarian I don't think is a good example because I think... Fish suffer greatly.
Starting point is 01:01:13 Fish definitely suffer. And because they tend to be small, there are more of them suffering. You're going to eat more of them. In a way, I think you could argue that just from the animal welfare point of view, let's put climate change reasons for being vegetarian aside for the moment, from an animal welfare point of view, it's better to eat cows than fish because one cow can feed quite a few people and especially if the cows have reasonably good lives.
Starting point is 01:01:41 Whereas with fish, you know, either they're coming out of aquaculture, which is just factory farming for fish, and I think they have pretty terrible lives, or they've scooped out of the oceans, in which case their lives were good, their deaths were horrible, and there's a lot of overfishing going on, and we're running down sustainable fish stock. Okay, interesting. I accept that. So let me ask you a few more questions about esoteric morality. So if esoteric morality is Because if everyone was a consequentialist and therefore knew that everyone else would practice esoteric morality, that would potentially lead to a degradation in trust, which would be a bad outcome.
Starting point is 01:02:38 So in light of esoteric morality, there is like an optimal number of consequentialists. So I think the point about esoteric morality works in a society where not everyone is a consequentialist and some people believe in certain moral rules and follow those rules because they think that they're kind of right and you don't want to weaken that trust because if you did they might just become egoists for example they might just think about their own interests if on the other hand you accept the possibility that everyone is a utilitarian, I don't think the situation is the same. Because
Starting point is 01:03:27 if you really believe that everybody, or even virtually everybody, so if you meet a stranger, it's overwhelmingly probable that they're utilitarian, then there's a sense in which you can trust them. You can trust them to do the most good. Now, if you ask them to promise that they will meet you at a certain place at noon tomorrow, it's true that you can't trust that they will turn up there because if there is a greater utility in them doing something else, then they will do something else. That's true. But you will want them to do something else because you are a utilitarian. Now, now that we all have mobile phones, of course, you would expect them to call you up and say, sorry, I can't, can't meet you as we arranged because I've got to drive a sick
Starting point is 01:04:13 person to hospital or whatever else it might be. But I don't think that there's a, I don't think that there is a limit if you assume that everybody could function well as utilitarians. Okay, interesting. So there's like this valley where the proportion of utilitarians increases, trust diminishes, but then at a certain point, trust starts to increase again. And the benefits, yes, and the benefits then overcome the disadvantages. Okay. So, I mean, this is kind of an empirical question, but at what point would you start to worry about trends like the spread of atheism and weirdness, to use Joe Henrich's acronym,
Starting point is 01:04:55 that kind of potentially drive consequentialism? I don't accept the acronym that consequentialism is weird. Well, but you know the Western educated industrialized rich democratic? Yes. The Western kind of psychology. Isn't that kind of correlated with utilitarianism in a way? I don't think so, actually. I think utilitarianism is a more universal tendency.
Starting point is 01:05:22 There's a little book that, again, Katarzyna and I wrote in the Oxford University Press Very Short Introductions series, a very short introduction to utilitarianism, in which we regard Mozi, the Chinese philosopher from the Warring States era, as likely to have been a utilitarian, although we don't have a lot of extant writings of his. But there seemed to be a utilitarian tendency in his thinking. Among the Greeks there were some people with utilitarian tendencies; certainly Epicurus was a hedonist, not necessarily a universal hedonist, but hedonism, maximizing pleasure and minimizing pain, has been around for a long time. There are some tendencies in Buddha's thinking, I think, towards reducing suffering
Starting point is 01:06:15 and improving happiness. So I think there are utilitarian tendencies that are non-Western. Your paper got me wondering whether the effective altruism movement is not being consequentialist enough, in light of esoteric morality. So, to explain what I mean: I think that a lot of scientific and intellectual breakthroughs come about through irrational optimism, people just irrationally persisting in solving a problem whose promise isn't obvious to their contemporaries. There are many such cases of this. And one concern I have about the
Starting point is 01:06:52 EA movement is that if it uses base rates to give people career advice, it might persuade some people not to work on things that could turn out to be really important. So here my claim is that EA gives advice that's rational for the individual, but collectively could result in worse outcomes. So perhaps, from the perspective of esoteric morality, there may be cases where EA should not push the outside view, to use Daniel Kahneman's term, when giving career advice? Do you have a reaction to that? You might be right. I don't really know how you would calculate how often you will get
Starting point is 01:07:36 those extraordinary benefits from people pursuing these strange obsessions. But, yeah, it is an empirical question, and it's possible that you're right. And if you're right, then, yes, effective altruists should not be persuading people to go for what has the best strike average. So, esoteric morality is related to this idea of Straussianism, if we think about Strauss's book, Persecution and the Art of Writing, where philosophers kind of write very cryptically for an audience: their work may be published more broadly, but only a very select few can actually understand and
Starting point is 01:08:16 interpret what they're trying to say, in order to avoid persecution for conveying and discussing uncomfortable truths, throughout history. When I think of examples of noble lies, I can really only call to mind examples where the noble lie kind of blows up in the face of the liar. You know, things like at the beginning of the pandemic, when the US Surgeon General told people that masks aren't effective in preventing the spread, because potentially they wanted to reserve the supplies of masks for medical professionals. And that just seemed to diminish trust in institutions even further in the US. Maybe the only reason I
Starting point is 01:08:56 can think of bad examples of noble lies blowing up is because the good ones, by definition, stay hidden. But I'm curious whether you are aware of any historical examples where Straussianism has worked successfully, you know, on the part of philosophers or anyone else: someone who tried to be esoteric in their circumstances and was successful, and now, with the benefit of hindsight, we can recognize what they were trying to do. Well, that's a very good question. I'm not sure that I can think of one off the top of my head. When you start talking about Straussians, what I think about, actually, is the group around George W. Bush, who acknowledged the influence of Strauss. And I think that led them into the catastrophic invasion of Iraq. I think some of them at least knew that Saddam did not have weapons
Starting point is 01:09:59 of mass destruction, but they thought that they could create a democracy in the Middle East, and that that would be a good thing and would increase American influence. So that certainly is not what you're looking for; that's an example that came very badly unstuck. There surely are examples of noble lies that have worked. Well, let me suggest one, maybe. How likely is it that Apuleius was a Straussian, and that his book The Golden Ass was a challenge to the prevailing Stoic thought and Roman mistreatment of animals, that he was kind of deliberately trying to make a point about
Starting point is 01:10:40 animal rights in a Straussian way? But why in a Straussian way? To me it seems fairly obvious that The Golden Ass shows a lot of empathy for an animal. This is why I edited a version of The Golden Ass: I was attracted to it because of that remarkably early sympathetic portrait of the life of a donkey. And that seems to me to be on the surface rather than hidden. Yeah, but I guess you could argue that he's making the point in kind of a discreet way
Starting point is 01:11:17 that maybe not all of the audience will understand. But he's not outright criticising anyone. Maybe he's using allegory to make his point. But, yeah, I suppose by the standards of what we'd normally consider Straussian, maybe it falls short. I think there are, yeah, I think there are passages where he clearly is criticising someone. For example, for those who don't know, at one point the donkey gets sold to a miller and is harnessed to turn the mill wheel. And the picture of that mill and the suffering of the donkey, of the horses who are there, and also of the human slaves who are doing this work and essentially being worked
Starting point is 01:12:02 to death, does condemn that quite strongly. It reminded me of descriptions we have today of factory farms and their effect both on the animals and on the workers there. Okay, so that brings me to Animal Liberation. You have a new revised edition of Animal Liberation coming out in June, I believe. Yes, June the 13th is the Australian publication date, and it's called Animal Liberation Now, to indicate that it is really not just an update but a very significantly revised and changed book. Since it was first published in 1975,
Starting point is 01:12:40 how have animals fared on the whole, and how do you assess the impact of Animal Liberation? Let me start with the impact of Animal Liberation. I think it contributed to the start of the modern animal rights movement. How big that contribution was is really hard to estimate. Some people refer to it as the Bible of the modern movement and as having triggered it, but there were a lot of people, clearly, who were necessary and working for change. But it did clearly inspire some of those leaders: Ingrid Newkirk, who founded People for the Ethical Treatment of Animals, the largest radical animal group in terms of its numbers of supporters. Certainly it's said that reading it changed her views and her life, and a number of other people have said that too.
Starting point is 01:13:32 So I think it certainly had an influence in sparking that movement, and different ways of thinking about animals, and a whole debate that went on. But if you ask me to assess what has happened to animals since 1975, that's a very different story. Because even though there was an animal movement that was active in the United States, the United Kingdom, Australia, Canada, and also in Europe, roughly in the nations of the European Union today, and it has had a positive influence on animals there.
Starting point is 01:14:09 And there have, to varying degrees, been improvements, more in the European Union than either Australia or the United States, I think. There have been improvements there. But firstly, they're fairly small improvements, and there's still a long way to go. Some of the worst forms of confinement of animals in factory farms got prohibited, but there are still vast numbers of animals living in totally unsatisfactory conditions. And secondly, the animal rights movement has had virtually no influence in places like China. And since 1975,
Starting point is 01:14:50 China has become a lot more prosperous, and so hundreds of millions of Chinese have used the extra disposable income that they have to buy more meat. And China has supplied that need by building larger and larger factory farms. So there are far more animals living miserable lives in factory farms today, in 2023, than there were in 1975. In that sense, the movement hasn't had the effect that it wanted to have and that I wanted to have. Some of these farms are literally like skyscrapers. Yes, China is building 26-storey pig farms, for example, filled with vast numbers of pigs who will never, of course, get to go outside in any way.
Starting point is 01:15:35 They're all living indoors in these very barren conditions and it's all sort of maximised for greater productivity and lower cost, not for animal welfare. Yeah, I guess the question of China leads me to my next question because adjacent to China is India. I was in India at the end of last year and I asked this question to many of my local companions and none of them seemed very impressed with the question but it still bugs me so i'll try it with you so the question is this not only is india the
Starting point is 01:16:11 most vegetarian country in the world it's also the home of jainism perhaps the most extreme religion in terms of its respect for animals at the same time, India is also the world's largest producer and consumer of spices. So, you know, the vegetarian food there is absolutely delicious. Is that just a coincidence? Or if not, which way does the causation flow? Have you thought about this? I have never thought about the connection between spices and vegetarianism. No, that's an interesting point. I mean, my understanding is that India has a lot of spices because it's a hot country and spices preserve food. Hot spices, anyway, chili preserves food from spoiling.
Starting point is 01:16:55 So then maybe you want to say then that the spice is like the independent variable. Yes. And then vegetarianism flows from that. Possibly, I see. yes and then vegetarianism flows from that oh possibly i see so that it was easier for people to exactly to take up vegetarianism because they could have very flavorsome vegetables which and it's certainly true that when we were living in england which is when we became vegetarian at that time in 19 in the early 1970s they did not cook vegetables well. And by not eating meat, you were losing, you know, something that was tastier than what you got if you just had the three boiled vegetables that were often served on the plate.
Starting point is 01:17:36 But, of course, what we did, and this in a way supports your point, what my wife and I did when we became vegetarian, was to start cooking from non-Western cuisines. And Indian was probably the first cuisine that I learned to cook tasty vegetarian food. But Chinese food also has a lot of dishes that are or can be vegetarian or vegan. And some of those also, like Sichuanuan Chinese cooking are quite highly spiced.
Starting point is 01:18:07 Yeah I guess it gets to a broader question which is about the role of contingency. If you look at human history how contingent is it that most of humanity is meat-eating? Is it just an accident of history that more cultures with prohibitions on meat-eating like Hinduism didn't develop or is meat-eating like Hinduism didn't develop? Or is meat-eating kind of like inevitable for most cultures? So I think there's an evolutionary story here again, and that is that if you are short of food, then being able to consume foods that have high nutrient density
Starting point is 01:18:44 gives you a survival advantage. And so meat is one of those foods. If you can obtain it and if you can digest it, then you will have an advantage over those who don't and therefore have to spend more time gathering food and eating food and preparing food than you do if you have something that meets your nutritional needs quickly. So I think we developed a taste for it for those reasons. Now, none of that is relevant today in the sense of, you know, at least certainly people who are affluent enough to walk into supermarkets
Starting point is 01:19:25 and buy anything from the wide range of foods that they provide doesn't need to eat meat and doesn't get any kind of survival advantage by doing so. In fact, by eating as much meat as people eat in the United States, say, or Australia, you probably have a disadvantage in health terms. But nevertheless, we have that taste for it. And I think that's why most cultures do eat meat. And it's the rarities that say eating meat is wrong. If we bite the utilitarian bullet,
Starting point is 01:20:02 why limit your concern to animals in human captivity? So Brian Tomasek argues that we should abolish wild animal suffering too. Do you agree with him? Abolish is a strong term. I agree to the extent that we should make efforts to reduce wild animal suffering. And that's, again, one of the differences between animal liberation now and the original animal liberation, because I didn't talk about wild animal suffering then. I thought that's really far-fetched and bizarre to even talk about that when we're doing all of these horrendous things to animals in factory farms, in laboratories, in fur farms and so many circuses, so many other places. But because of people like Brian and Oscar Orta
Starting point is 01:20:48 and now Katia Faria, there's a number of philosophers who are writing about wild animals and what we might do to reduce their suffering and whether we ought to do that. And it's become a significant subfield of animal ethics. So I felt i should say something about it and what i'm saying is that there are many things that we can do to reduce wild animal suffering which are relatively simple and not controversial in the way that for example saying well predators cause suffering so we should eliminate predators because then there'll be less
Starting point is 01:21:24 animal suffering. And obviously that's highly controversial, first because the consequences might not be better. It might be that you eliminate the wolves and the deer all starve to death after overgrazing their habitat, but also because you would run up against those environmentalists who want to preserve ecosystems and the ecosystems depend on predators and they don't want to see any species eliminated
Starting point is 01:21:48 and certainly not the iconic predators like wolves or tigers or lions. So I certainly don't think the animal movement should get into a sort of situation where it's in head-on conflict with those environmentalists because both environmentalists and the animal movement are minorities. They're not really powerful and I think we have a lot of common ground. For example, opposition to factory farming is clearly something that environmentalists and animal people support. So I think we shouldn't go into those areas,
Starting point is 01:22:24 but there's still quite a few things that we can do. What's like the single highest leverage policy we could implement to reduce wild animal suffering? Well, the single thing, uh, is actually to stop eating fish because the vast majority of the fish, uh, we eat are wild animals. And if we eat, uh, carnivorous farmmed fish like salmon then we're responsible for even more wild animal deaths because the trawling fleets go out to catch the low value fish to grind them up and feed them to the salmon so you're not just killing one fish when you buy a salmon raised in aquaculture and eat it you're killing I think something like maybe 90
Starting point is 01:23:04 fish I read. I may not be correctly remembering the figure, but it's a surprisingly large number of fish who have been killed to feed that one salmon. What about after catching wild fish? So there are a number of different things that we can do then, and again, some of them are exactly what environmentalists would want. Cats kill a large number of wild animals.
Starting point is 01:23:29 Again, people who say, oh, no, you know, my little mogs would never go out and kill animals. But when you put tiny cameras on them and let them out at night, you find that the sweetest cats will go out and kill something. So keeping your cats indoors at night is a pretty simple thing to do at least, if not permanently indoors. And doing something about feral cat problems is another thing as well.
Starting point is 01:24:02 Trying to prevent there being feral cats is going to reduce animal suffering and preserve more species. So I assume you're familiar with the shrimp welfare project. This has become somewhat of a meme in the EA community. But if the shrimp welfare project goes well, perhaps we can cheaply sustain trillions of blissful shrimp. Is there some margin where we should do that instead of spending on human welfare? Well, first we have to assume the shrimp can be blissful. That is that they are sentient beings. And the term shrimp actually, when you look at it, does not refer to any natural biological order. It crosses completely different species, some species of which may be conscious and sentient beings.
Starting point is 01:24:54 So the United Kingdom recently passed an animal sentience law which extended beyond vertebrates and included cephalopods, so octopus. And I think those who've seen my octopus teacher will all agree that an octopus is sentient, but also decapod crustaceans, which includes lobster and crabs and I think some species of shrimp but not all. So, yeah, maybe some shrimp are sentient and some aren't. So if we're going to carry out the shrimp project,
Starting point is 01:25:22 we better find the ones that are. But, of course, being sentient is one thing. That means you're capable of feeling pain. Does it also mean you're capable of experiencing bliss? I don't know. And I don't know how we would know that shrimp are capable of being blissful at all. But you can turn this into a hypothetical example, I suppose right you can say all right let's assume that there are shrimp who are capable of experiencing bliss should we raise vast numbers of them at the cost of not improving the lives of humans at some level and yeah i'm gonna i'm gonna bite the bullet on that and say yes if we have reason to believe that they are capable of blissful existence then um that would be a good thing to do if ai causes human extinction
Starting point is 01:26:10 but we're replaced with artificial beings perhaps brain emulations like in robin hansen's book the age of m and these beings live radically better lives than any human. Is that a bad thing? Or should we just kind of wish them luck and fade into history? I think we should wish them luck. There's another disagreement I have with Bernard Williams, right? Because you know that article called The Human Prejudice, where he defends the idea that we're right to favour human interests even over similar or greater interests of non-humans.
Starting point is 01:26:46 And his sort of closing argument in that is to say that if super intelligent aliens come to Earth and decide that everyone will be better off if humans get eliminated, Williams says that the question to ask then is, whose side are you on? So we're humans, so we've got to be on the side of humans, is his impression, which I find a very strange remark, because this idea of whose side are you on? I mean, you can exactly think of Russians who support the war in Ukraine, for example, saying that to other Russians who dare to express some dissent
Starting point is 01:27:29 or, you know, say, well, why should we be attacking Ukrainians? You know, we used to get on very well with Ukrainians. And, you know, that whose side are you on just says you're a Russian, you've got to be on the Russian side against the Ukrainians. And in a way, Williams is saying something remarkably similar to that, which I find a completely indefensible way of arguing for being on the side of humans against animals, but also in your example,
Starting point is 01:27:57 if in fact things will be much better without humans and these will be replaced by these minds which can experience much greater, richer, wonderful lives than we can. Okay, good luck to them. It would sort of be another form of speciesism not to wish them well. Yeah. Yeah. I liked his paper.
Starting point is 01:28:20 I find it kind of like intuitively very appealing, but when I actually read it, there's no like strong philosophical principle that I can kind of grasp onto apart from just like the sort of, well, surely you should be on the side of humans vibe. Yeah, I think if you really wanted to construct it as a philosophical argument, it would go back to what we talked about earlier, the idea that we're humans we have human projects and there is no impartial point of view you know we can't take the point of view of the universe because we're humans yeah to accuse him of saying that
Starting point is 01:28:57 non-human interests didn't matter isn't actually the claim he's making. No, he's not saying that, exactly, he's not saying that there's a universal point of view from which non-human interests don't matter. He's saying we inevitably take the human point of view. Yes. And so for us, we have reason to defend humans even against these superior aliens, in his case, and artificial intelligence in yours,
Starting point is 01:29:28 whereas I'm taking the universal point of view and saying no, you're just saying, you know, we have reasons to defend humans and the aliens have reasons to support aliens. I'm saying that there's got to be something more than that. To that extent, I go back to what Parfit was saying right at the very beginning of this conversation there are objective reasons for action and williams is ignoring them so obviously in recent years effective altruism as a movement has taken a turn from the kind of original global health and development focus to a greater concern about existential risks
Starting point is 01:30:08 and preventing those risks. You're not entirely persuaded by the kind of long-termism project, if I can use that term. What are the reasons for your scepticism? So I'm not sceptical about the value of taking a long-term view. I think there is considerable value in doing that. And ultimately, and again, this goes back to one of Sidgwick's axioms, I think all moments of existence of sentient beings are equally important.
Starting point is 01:30:39 So I'm not saying, I'm not a presentist in the sense of saying the present is what matters or the near term future is what matters and the long term future doesn't matter. What I'm sceptical about is the idea that we should be putting most of the resources of the effective altruism movement towards long term goals, particularly really long term goals. So Will McCaskill talks about thinking about the next billion years. I find, you know, except for saying, you know, yes, it would be good if there were still humans around in a billion years, I find it pretty inconceivable to think how you can make any difference
Starting point is 01:31:21 to that far in the future. So I think that there are just so much way of uncertainties as compared with the good that we can be reasonably confident about doing in the near-term future, that it's a mistake to focus the effective altruism movement primarily on long-termist goals. Talking specifically about the cause priority of mitigating existential risks and within that, again, specifically being concerned about artificial general intelligence. I was actually inspired to think of this while I was reading, I think it must have been Simon Boccaccio's letters
Starting point is 01:32:12 during the Italian Renaissance. I think one of them to Petrarch, but the reason I was reading it is he was describing the plague, the Black Death, and there was this line to the effect of all human wisdom and know-how were kind of futile in the face of the plague but it made me think well they they weren't the people that at that time just didn't have the right knowledge to prevent the plague and that that's sort of a tragedy and i guess there's a risk in not having the right knowledge sort of waiting for the the proverbial asteroid and so i guess the critique would be that some members of the ea movement
Starting point is 01:32:53 underestimate the risks of not developing technologies that could protect us from existential risks and by attempting to slow down risky technological advances, they might inadvertently increase those risks of omission since it's impossible to know ex ante how technologies will develop. Why do EAs tend to focus on the dangers of commission rather than omission here? It's impossible to predict which approach is more likely to cause harm, but loss aversion would make you focus on the risks of commission over the risks of omission. Yes, that's an interesting point to make. And I think it does emphasize the uncertainties that come into long-term thinking.
Starting point is 01:33:42 And even this is not very long-term right thinking about ai and super intelligence um is probably this century rather than uh several centuries in the future so um it but what your point shows shows the difficulties of working out what the circumstance will be and what we might regret not having done at some point in the future. Yeah. I mean, if we decide to slow down progress in AI and then in 50 years, Earth gets wiped out by an asteroid and in the counterfactual, if we'd had AI in time and it helped us design some kind of system to protect against that, I think there's a bias that leads us to focus more on the the risk of ai killing us all then on the risk of not developing technologies quick enough that can save us yes
Starting point is 01:34:31 although i mean i don't think it's true that uh eis think about um commission more than omission in general i agree with that yeah yeah um i mean i'm thinking about this article by nick bostrom about the there's the fable of the dragon where he yeah he's actually talking about death yeah death and the fact that we don't try to overcome death um and means that it you know this vast number of people die when they maybe wouldn't have to die if we did more research in terms of overcoming mortality. So it's not a general point, but maybe your point is on this specific issue of the harms of AI, they do. I'm not sure.
Starting point is 01:35:18 I'd be interested to know what they would say. They may say, look, we're not against developing the kind of AI that would predict when asteroids are going to crash into our planet or work out what we could do to prevent asteroids crashing into our planet. But, you know, it's some larger, more general kind of superintelligence that we're trying to warn against. Yeah.
Starting point is 01:35:40 I guess my response to that in turn would be I'm not sure that you can... Separate those things. Yeah, I guess my response to that in turn would be I'm not sure that you can... Separate those things. Yeah, exactly. So not every moral, philosophical article or book sparks a movement. In fact, very few do, but you've separately created or sparked social movements with animal liberation and with famine, affluence, affluence and morality obviously there's
Starting point is 01:36:06 some overlap between those two movements but it's not a complete overlap do you have a sense of what makes those two works so charismatic and and what separates them from works that uh have attempted to be but but have failed to be successful in sparking world-changing social movements? Look, I think one thing that they have in common is that they were both written early in the development of applied ethics in the modern sense of applied ethics, right? Obviously philosophers have done applied ethics for a very long time in different ways through ancient Greece and medieval times. But there was a period
Starting point is 01:36:52 in the 20th century when philosophers didn't do applied ethics and in fact hardly did any normative ethics because mostly they were concerned with meta-ethics. Some of them thought that ethics wasn't really a subject because people like A.J. Eyre thought that there's no scope for argument in ethics and reasoning really. So I started doing philosophy at a time when that had been the dominant movement, but it was just starting to break up. It was just starting. So in the 60s, the radical student movement against the Vietnam War
Starting point is 01:37:31 and against racism in the south of the United States, students were demanding relevance in their courses and some of them were doing philosophy and were saying, hey, philosophers, don't you have a view about what makes a war a just war and don't you have views about equality and why it's justified and so on? So there was this sort of crack in what had been fairly monolithic, at least in the English-speaking world,
Starting point is 01:37:58 for saying ethics doesn't tell us how to live. And so I was able to write both of these works that you mentioned and for them to be regarded as part of philosophy because of the time when I was writing in the early 70s. And if I had been a decade earlier, probably I would have had to choose between leaving philosophy as a profession and as a discipline and writing that kind of work or not writing it at all and sticking to what philosophy was doing, which is basically the analysis of moral language. So I think I was just lucky, really, to some extent,
Starting point is 01:38:46 in being there at that time. And then I picked two issues that are really relevant to pretty much everybody who is likely rereading philosophy texts. So, you know, I'm thinking of students now, but also, of course, the academic professional philosophers. But, you know, students were obviously eating. And so animal liberation challenged them to think about the ethics of eating animal foods. And that was something that every student in every class could think about.
Starting point is 01:39:22 And famine, affluence and morality challenged them to think about what they were spending their money on. And even though students don't have a lot of money, most students do have some that they spend on pretty frivolous kinds of things. And so that was another question that was raised to them. And so, as I say, to one, you know, first I was lucky to be able to be in this position to do this kind
Starting point is 01:39:46 of philosophy, but secondly, I then used that to write about very everyday questions that were going to affect pretty much everybody who was going to be a philosophy student. And for that reason, I think philosophers took them up. They were reprinted in anthologies or extracts of the book and the article were reprinted in lots of anthologies and lots of people read them. And I think that really helped. Particularly, I would say, you know, with famine, affluence and morality is a little bit different from animal liberation
Starting point is 01:40:22 because animal liberation, the very first piece I wrote was for the New York Review of Books, which is not a philosophy journal. It was more widely read. And then the book was published, and that was also not just a philosophy text. That reached a wider audience. Whereas Famine, Affluence and Morality was published as an article in a philosophy journal and then started to reach a larger audience
Starting point is 01:40:49 by being anthologised in these readers that publishers were putting out for the then relatively new courses in applied or practical ethics. And so lots of generations of students read Famine, Affluence and Morality and insofar as it had an influence on sparking the effective altruism movement, it was because philosophers, philosophy students, much younger than me, people like Toby Ord and Will McCaskill, read famine, affluence, and morality, as so many students did, as part of their undergraduate philosophy training. And then they remembered it, and they thought about it and they started to think,
Starting point is 01:41:28 hmm, maybe there is something here that's important that I should be doing something about. That all makes sense. I think there's another factor at play here as well, and that is that movements need charismatic figures, even Bitcoin. You think I'm a charismatic figure? I don't think so.
Starting point is 01:41:46 I think you are. I'll explain why in a moment. But even say Bitcoins, Toshi Nakamoto is like an anonymous figure, but he's still a figure. He's still charismatic, even if he's a group of people, whoever he is. I mean, you can have different types of charismatic figures, but one clear type are people who self-sacrifice for their beliefs. So famous examples, Jesus, Gandhi, and I guess it goes to this concept of skin in the game. And I think the sort of philosophy that you have done has the
Starting point is 01:42:22 virtue of having clear practical implications, which then gives you the opportunity to be a charismatic figure insofar as you actually adhere to those kind of recommendations yourself, you know, because you donate meat, because you give such a significant portion of your income, you know, you win a million dollar prize and you donate it to charity, that kind of gives you the opportunity to be a charismatic figure in a way that, you know, many other philosophers
Starting point is 01:42:53 don't have that opportunity, A, because their philosophy doesn't have clear practical implications. And then B, I mean, you also need to then take that step of actually having skin in the game. Well, thanks. I certainly don't claim to be, you know need to then take that step of actually having skin in the game. Well, thanks. I certainly don't claim to be, you know, the Messiah or Jesus comparison or somebody as self-sacrificing as Gandhi was either.
Starting point is 01:43:20 Yeah, I do think it's important to show that you live your values. I agree with that. And, you know, that has probably been important, as you're suggesting. Whether it's made me charismatic, I'm still not prepared to accept that. But, yeah, I think setting some kind of example, even if it's, you know, I never claimed to be a saint or to live, you know, 100% morally, but, you know, doing something substantial in that direction I think is important. So we discussed by email a few months ago that, you know,
Starting point is 01:43:55 the interesting fact that a higher proportion of Australian philosophers are utilitarians or consequentialists than most other countries. What is your explanation for this? One explanation is that Australia is more secular than certainly compared with the United States. And, you know, that's almost the first thing I noticed when I went to live in the United States, when I first went to Princeton in 1999,
Starting point is 01:44:20 was that it's really a much more religious country and not just in these conservative southern states, but in many ways, you know, that people would assume religious belief in me. I remember I gave a talk somewhere about animals and there was a little social gathering afterwards and a woman came up to me and without any sort of preamble said to me, Professor Singer, I've always wanted to know,
Starting point is 01:44:48 and I'd like to know your view, do you think the animals will be with us in heaven? I can't imagine an Australian just going up to a professor who'd given a lecture about animals without saying anything about God or an afterlife, which I don't believe in, and ask that question. We just seem to assume that I thought there is a heaven. So I think that's part of it, that obviously, you know, well, maybe it's not obvious because you can be a Christian
Starting point is 01:45:14 and a utilitarian, and there have been examples of that, but generally religions teach sets of rules which are contrary to utilitarianism. So I think being a relatively secular country is part of it. Judy Brett, who's a professor of history, wrote this book about Australian democracy and why we do elections better than some other countries. Democracy Sausage?
Starting point is 01:45:43 Yes, that was part of the title was that the whole title i thought there's probably something like from secret ballots to democracy sausage that's it yeah anyway so she's comparing us with the united states and many ways in which we do elections much better than the united states in which we have different underlying philosophies and she says her is, so the United States was founded by people, many people had left Britain and some other countries to escape tyranny. And then they founded the nation in rebellion against George III, and they have this declaration of rights. And so they're really very concerned about tyrannical government.
Starting point is 01:46:27 That's the dominant thing for them. And so they're very strong on erecting individual rights and safeguards of those rights against tyrannical governments. Whereas Australia was settled later, and at least some of the people who came to Australia, and perhaps the ones who were most politically active in Australia in the relatively early days of the settlement of Australia were political radicals, including people like the Tolpadl Matas who'd been struggling for democracy in Britain
Starting point is 01:46:58 and were imprisoned for it and sent out to Australia. And they were actually influenced by Jeremy Bentham. So whereas the Americans were influenced by doctrines like John Locke and the limits on government and the rights of humans in nature, influential people in Australia were influenced by more utilitarian thinking. And maybe some of that stuck and we still are influenced by the fact that the British government deported these political radicals to Australia.
Starting point is 01:47:28 Have you heard of this book, The Founding of New Societies? No. I'm not sure whether it's quoted in Brett's book, but it makes that argument that it's like a shard kind of splintered off mainland Europe at the time of the founding of the US colonies and then Australia and that kind of preserved whatever was the dominant political ideology at the time of the founding of the u.s colonies and then australia and that kind of preserved whatever was the dominant political ideology at the time of the splitting and yeah so for america they're thinking more about lock when they're drafting their constitution australians are thinking more about bentham when they're they're drafting this yeah yeah
Starting point is 01:48:00 keith hancock also had some words to say about this in his book, Australia, in the 1930s or whenever it was published. You know the line about Australians view their government as a vast public utility? Yeah. Yeah. I mean, for me, this just raises another question, which is, so I think most people are deontologists at an individual level, but then they kind of expect their government to be utilitarian. I'm not sure whether you agree with that claim. Bob Gooden wrote something along those lines. Right, okay.
Starting point is 01:48:31 Utilitarianism is a public philosophy, I think. Yeah, yeah, yeah, exactly. So maybe Australians expect their government to be especially utilitarian for these historical kind of contingent reasons that you've outlined. But then I guess that still leaves the question of like, well, what's the channel or the mechanism from that political ideology to then like so many individuals being utilitarians in like a totalising sense? Yeah, I really don't know the answer to that.
Starting point is 01:49:03 Yes, there's something in the water that leads us that way. It's hard to say, you know, but at least we're not pushing against this rights view that, you know, certainly, again, another thing I noticed when I came to the United States and to philosophy in the United States was that utilitarianism was thought of by quite a lot of people as, you know, something of historical influence. But surely we've moved beyond that because we understand the importance of human rights.
Starting point is 01:49:33 And so you always, in a sense, had an uphill battle to be taken seriously as a utilitarian. And I never felt that in Australia, even though the first class in ethics I took was taken by H.J. McCloskey, who was a deontologist and was an opponent of utilitarianism. But, you know, he was certainly open to people defending utilitarianism and never tried to ridicule it or anything like that. He just took it very seriously, as did other philosophers. Yeah. Funny, it just struck me. You almost see the difference in the two cultures reflected
Starting point is 01:50:06 in the architecture as well. If you walk through Washington, all the national monuments are in a beautiful classical design. If you walk through Canberra, it's all like brutalist architecture, much more utilitarian. Yes, that may be true. Yeah, I don't know. Those are period things, right?
Starting point is 01:50:24 It depends when things get built and replaced. Yeah, I don't know. Those are period things, right? It depends when things get built and replaced. Yeah, maybe just like ideas. Yeah. To the extent that secularism is a factor underpinning Australian utilitarianism, reflecting on your personal journey, do you feel like that is a better explanation of your origins as a utilitarian than Tyler Cowen's explanation of Peter Singer as a Jewish moralist? Do you remember the 2009 Blogging Heads interview you did with him and he put this idea to you?
Starting point is 01:50:54 Yeah. Yes, I think the secular explanation is much better. I have a Jewish family background, but I've never really, well, certainly I've never been a religious Jew and I've also never really been part of Jewish cultural institutions. You know, I didn't attend a Jewish school. My parents were very assimilationist. They sent me to Scotch College, a Presbyterian private school
Starting point is 01:51:21 because they thought that would be the best for me. And many people ask me whether the Holocaust background of my family, because three of my four grandparents were murdered by the Nazis, whether that has some influence. Maybe that does, but that's still not the same as being a Jewish moralist, I don't think. I think it's much more the secularism. Last two questions. If we look back at the history of life on earth
Starting point is 01:51:50 through a hedonistic lens, has it been good on the whole? No, I don't think it has been good on the whole. I think there's probably been more suffering or the amount of suffering and the severity of the suffering probably outweighs the good in the past. I think that the balance is changing. I think the balance has changed over the centuries and particularly I would say the balance changed, let's say,
Starting point is 01:52:20 from the second half of the 20th century. Things seem to get significantly better. And I think, you know, despite the fact that we've now got a major war going on in Europe and climate change is still an uncontrolled threat, I think that there's grounds to be more optimistic today than there have been in earlier parts of human history. And what are those grounds?
Starting point is 01:52:44 What are the best reasons to think that the future will be good overall? Much wider education. Literacy is 90% or something like that, which never was at that sort of level previously. Science and technology have made huge advances and enable us to feed ourselves without too much problem for most of the world.
Starting point is 01:53:10 The proportion of the world's population that is hungry is smaller than it ever was. So I think there have been a lot of, and of course, there's a lot of health innovations. You talked about the plague not that long ago. We deal much better with with covid and people were able to deal with bubonic plague so i think i think there's a there's a lot of things like that peter singer thank you for joining me thank you it's really been a very
Starting point is 01:53:40 engaging and stimulating conversation. Been my pleasure. Thank you. Thanks so much for listening. Two quick things before you go. First, for show notes and the episode transcript, go to my website, jnwpod.com. That's jnwpod.com. And finally, if you think the conversations I'm having are worth sharing, I'd be deeply grateful if you sent this episode or the show to a friend.
Starting point is 01:54:09 Message it to them, email them, drop a link in a WhatsApp group, or even better, blast it out on Twitter. The primary way these conversations reach more people is through my listeners like you sharing them. Thanks again. Until next time. Ciao.
