Within Reason - #6 — Peter Singer | Utilitarianism and Animals
Episode Date: July 21, 2019
Peter Singer is an Australian moral philosopher and author of the seminal Animal Liberation, a book credited with initiating the modern animal rights movement. He speaks to Alex about utilitarianism and how we might apply it to all sentient creatures.
Transcript
This episode of the Cosmic Skeptic podcast is brought to you by you.
To support the podcast, please visit patreon.com/cosmicskeptic.
So welcome back, everybody, to the Cosmic Skeptic podcast, an opportunity to break away from the normal snappier style of videos and have more long-form conversations with interesting guests.
And joining me today in the studio is Peter Singer, who has held professorships at both Princeton University and the University of Melbourne, and specializes in practical ethics. He is well known for his books, including Practical Ethics, The Expanding Circle, The Life You Can Save, and, probably most famously, Animal Liberation, written in 1975, which is thought by many to have kick-started the modern vegan and animal rights movements.
So thanks for joining us today.
Pleasure to be with you, Alex.
It's great to have you here.
And I think a lot of people are going to be excited because it's only fairly recently
that I began to talk about animal ethics on my channel, and people were kind of surprised by
it.
And I want to talk about why my audience should take animal ethics seriously,
because a lot of people see it as a kind of, I don't know if you found this, as a kind of
interesting philosophical debate. Like it's kind of brought up in a Q&A section and the
philosophers on stage kind of laugh about it, have a bit of a chat, think it's interesting
and then kind of push it to the side. But do you find that people don't take it as seriously as they should? Because that's something that I've tended to notice.
I think it's part of
the problem that in writing about animal ethics, I wanted to
push back against the idea that humans are the only thing that matters, which
admittedly was a view that I held for the first 23, 24 years of my life, which is quite a long time, to not really have thought about that issue. But the attitude that this
can't be as important as issues about humans was around then, and it's still around, which
just shows that I think the animal movement has not succeeded in getting people to see that
view as a bias, as in fact speciesism, a prejudice against taking seriously the interests of
beings who are not members of our species. But it's very easy to understand where that bias comes
from, I think. Would you say that it's irrational to have that bias? Because to me it seems
as though, although it might be unfair or ethically unsound, it's perfectly rational for somebody to
care about the interests of themselves and those who they see as close to them more than they do about those who aren't.
So I think to answer that, we have to get into a discussion about the nature of rationality
and reason in ethics.
Let me, before we quite go into that, though, let me put it this way.
There are certainly some people who think that reason in ethics is limited, that essentially
reason always starts from some desire or concern.
This was David Hume's view,
and therefore that if, as most people do, you have most concern about your own interests
or about the interests of those you love and care about, those who are close to you,
then there's nothing more to be said against that being a rational thing to do.
But in writing Animal Liberation, I wasn't really trying to challenge that view.
I was assuming that my readers would generally agree that it's not right to give more weight
to the interest of someone because they're a member of your race or your sex.
So let's assume that you're a white male.
It's not right to say, well, I don't care about the interest of people in Africa because
they're black or I don't care about the interest of women because they're not male.
So I was building on that and saying: if you think that it's wrong to discriminate against, to discount the interests of, people on grounds of race or sex, why don't you think something similar about doing that on the grounds of species?
So it's kind of a matter of consistency rather than a matter of making a moral principle
because I suppose at the time you wrote Animal Liberation, you didn't believe in an objective
groundwork for ethics.
And so you weren't able to say this is wrong.
You were only able to say that this ethical position is inconsistent with other ethical
positions that most people hold.
Yes, well, you know, I could say that it was wrong in the way that non-cognitivists can say things are wrong.
But could I say that it was objectively wrong? Could I say that it was irrational?
At the time I wrote Animal Liberation, no, I would not have been able to say that.
But you feel like you can now?
My metaethical position has shifted within the last decade, I would say,
so relatively recently in terms of the period of time when I first wrote Animal Liberation.
And it has shifted towards an objectivist view, influenced particularly by Derek Parfit, I would say, and to some extent by Tom Nagel,
to taking the view that, well, reason isn't just limited to what we desire, that there are things
that you can argue are rationally self-evident. And one of them, I think, is the idea that
I find most clearly expressed in Henry Sidgwick, the 19th century utilitarian philosopher,
when he says, if I take the perspective of the universe, then my interests don't really count for any more than the interests of anyone else who is capable of similar amounts of pleasure or pain, or good or evil, whatever it might be.
So it's essentially saying you have to put yourself in this larger perspective. Sidgwick didn't think that the universe really has a perspective, but we can in imagination take it. And then I can look at other beings and say, oh, they can feel pain as I do. Maybe some of them are more similar to me, you're more similar to me than a cow or a pig, but insofar as beings can feel pain, have interests, their pains matter, and their pains matter equally if the pain is just as great as mine or yours.
Yeah, that's a point to press, because you're right that, so I'm more similar to you
than a cow, but I'm also more similar to you than a woman is, but the point that you press
is that it's not about denying differences, it's about saying that those differences don't
hold any moral weight, and you've said before, quite rightly, that in suffering and in
the ability to feel pain and have preferences, animals are our equals. But I imagine you face
quite a bit of backlash from that from people saying in response, how can you possibly say that
we are of equal consideration to animals? And I think they kind of get mixed up. People think that
by suggesting that there should be an equal consideration, it means kind of equal treatment. But
that's not really the case. No, that's not the case. And the way I put the principle of equal consideration
is equal consideration for similar interests.
So I don't claim that your interests or my interests are necessarily similar to those of a cow or a pig.
In fact, obviously they're not.
And the same goes for your listeners or viewers.
They have an interest in abstract philosophical discussion.
No cow or pig is capable of understanding that
and therefore does not have an interest in that.
So clearly interests are different.
But where we could roughly say, as far as I can tell, this cow or pig is suffering a similar amount of pain to that which perhaps a human infant might suffer if you did some particular thing to that infant, then I would want to say that the pain they're feeling is just as bad and should be given just as much weight as the pain that the human is experiencing.
So let's talk about why we should give that pain weight at all.
I want to kind of dive into some of the metaethical foundations for the practical ethics that we're talking about here, because it's all well and good talking about matters of consistency, saying, well, if we care about the pain of conscious creatures, then we should extend that to animals. But why should we care about pain? So, the classical utilitarian position requires taking this view of the universe and saying that everybody counts for one and nobody for more than one. But the rationale for it, the sanction, seems to be, as Mill said, that the only thing we can really use as evidence to suggest that something is desirable is that we desire it. And the only thing we really desire is our own pleasure. We don't have that same kind of desire for someone else's pleasure, except as a contingency for the pleasure that we'll derive from knowing that they're doing well. So do you think that the care we should have for other people's pleasures and pains is just intrinsic, that we should just care about them, or that we should care about them in the sense that they will affect our own pleasures and pains?
My view is the former.
I don't think Mill's argument from desiring to desirable is a good argument. And if you want to look at a better philosophical version of utilitarianism, go to Henry Sidgwick rather than to Mill's Utilitarianism. I mean, I'm not putting Mill down. He's a great philosopher, and I think On Liberty is an excellent book that still should speak to us today. But his Utilitarianism was a pretty hastily written essay for a magazine, whereas Sidgwick's The Methods of Ethics was something he revised; the edition we read is the seventh. He was a careful academic, and he wrote much more carefully.
So Sidgwick's view was that when we reflect on things that are intrinsically good, we can see that pleasure is something that is intrinsically good, and that pain is something that is intrinsically bad.
For Sidgwick, this is self-evident in the sense that when you reflect on it, you don't
need to have further steps.
You think about the nature of pleasure.
You think about the nature of pain.
You may well think about, of course, your own experiences of it.
And you can see that pleasure is good, that pain is bad.
And in fact, on Sidgwick's view,
desirable consciousness is the only thing that is good.
So nothing outside consciousness is good.
Nothing outside consciousness is bad.
Undesirable consciousness, which we would try to avoid, minimize, get out of, is something that's bad. And just thinking about the nature of that is enough to see that that is so.
He does go through various moves
to consider other candidates
like virtue, for example,
but he argues
that they are instrumentally good
rather than intrinsically good.
Yeah, but I mean, surely what we're talking about there
when we recognize just that something is good
and that's just a matter of our kind of faculties
working at their base level,
that something just appears to be good
and so we can trust that faculty,
surely what we're really talking about
is something being good for us.
Like the reason we think pleasure is good is because our experience of pleasure is a good thing.
I think that's not quite the same thing as saying that somebody else's pleasure is a good thing
or saying that pleasure in general is a good thing.
I mean, I haven't had experience of pleasure in general, and I haven't had experience of someone else's pleasure. And so I don't think I can say in the same way, oh yeah, that's pleasure, that's good, just as a matter of my faculties working at their base level. I think the only thing I can really apply that to is my own experience, which would imply that the only thing I can say is good is my own pleasure.
Now, I think, I know a lot of people say something like that,
but I think that really fails to distinguish between what I care about,
what I desire, what I want, which may well be.
I mean, I hope it's not for you and I hope it's not for everybody,
but certainly many people would say the only thing I care about is my own pleasure
or that's what really matters to me or it matters a lot more than the pleasure of others.
But that's different from saying, this is what I can recognize as something that is intrinsically good.
And in that, yes, you are limited to directly experiencing your own pleasure.
But we have good evidence that other beings experience pleasure,
evidence from their similar behavior to ours, also now anatomical and physiological evidence
based on their nervous systems.
So really what you're experiencing is pleasure
as experienced by a conscious being
relevantly like you.
And I think when you think about that,
you do judge the pleasure to be good.
You don't judge my pleasure to be good.
Right.
You judge the pleasure to be good
and then because you judge it to be good,
you want it for yourself.
But as a matter of a rational exercise,
you can say, oh, well, I recognize this pleasure that I'm experiencing to be good.
And as far as I know, the pleasure that you, or someone else, is experiencing is similar.
So I recognize that that's good too.
It's then a further question as to whether I will care about it, whether I will do something about it,
what kind of priority your pleasure will be.
But I think recognizing that it's something that's intrinsically good is a distinct
act.
But can we be sure that it's that way around, that we recognize something's good and therefore we want it for ourselves, rather than us wanting it for ourselves and therefore thinking it's good? I mean, there seem to be good evolutionary reasons that we would develop a system of desires and wants based upon what brings us our own pleasure, and, in a system like you discuss in The Expanding Circle, it makes sense to care about other people for essentially self-interested purposes. And that wouldn't be out of sync with the idea that our care for other people's pleasures can be rooted in our own.
You seem to be in quite fervent opposition to the idea that ethics can be grounded
in egoism, in pure egoism, let's call it, in the idea that it's all based on my own pleasure
and there's really no consideration for other people's pleasures outside of my own pleasure.
But I don't know if it's as heinous as perhaps you're kind of implying that it is,
because if we have evolved as a social creature, then it would make sense
for us to care very, very much, about as much as we do about the well-being of other creatures.
And it would explain rationally why we don't feel the same way about non-human animals.
But it would pose a problem in that sense when we start talking about animals.
But if we're talking about human morality, I don't think it would pose that problem.
I don't think that by saying that ethics is grounded in my own well-being and nothing else,
that I wouldn't be able to have just as much care for my fellow creature as you would.
I don't see why, if you were to take that view, your concern would be justified in extending beyond a relatively small group of people whom you know and interact with and who can return favors that you do for them.
And possibly, you know, on evolutionary grounds, obviously people who are genetically related to you, you could talk about as well.
But it wouldn't give you any reason for, let's say, paying for some bed nets that will protect children in Malawi from getting malaria.
Well, bearing in mind that we're now talking not about what we should desire but about what we do desire. Like, we're talking about the psychology of human beings, what reason we would have to desire something.
I think it makes sense that, because when our moral faculties developed in our evolutionary history we were living in small enough groups, it would just make sense to develop a sense of empathy that extends as far as human interaction goes.
And so now, although purely rationally it might make no sense on the hedonistic worldview to give to charity, the fact that I have this capacity for empathy, and the fact that when I see a human being, regardless of where they are on the planet, I can't help but feel that empathy, means that by donating that money or helping that person,
I'm appeasing my own faculty for pleasure,
even if that's totally irrational.
I do think there is a good evolutionary explanation,
if not a moral reason, but a good motive,
a reason why I would have that desire,
even if I only care about my pleasure,
to care about that person,
because as a social creature,
I can't help but feel that empathy,
and the only way to appease my pain there,
the only way to remove that pain of empathy
for someone else being harmed is to help them.
Well, I thought we were talking about what we have reasons for doing
rather than about the psychology of human desire
because I think they're different things.
I think we may well have reasons for doing things
that we have no desire for doing.
That's unfortunate, but I think that's the way the world is.
We certainly have evolved, and our evolutionary history has given us a set of desires; as you said, it may have given us empathy and concern for others, primarily focused on those who we're close to, and possibly it can be extended outwards.
You know, we can talk about how much and so on. That's a question.
But we're also beings, and again, this is a capacity that's evolved, who are capable of reasoning.
And on my view, and I defend this in The Expanding Circle, our capacity for reason can take us to places that are not necessarily serving the evolutionary function of enhancing our survival and reproductive capacities.
It's, you know, think about it in terms of mathematics, right?
So we have a capacity for mathematics.
Why do we have a capacity for mathematics?
Well, it was useful, no doubt, in various situations.
The paradigm case: you see three tigers go into the thicket, you see two tigers come out, and you understand that it's not a good idea to go into the thicket. So from that maybe rather
simple beginning, we develop more mathematical skills. And eventually we have people as we do
here in Oxford in departments of mathematics, doing pure mathematics at a very high level,
which is very remote from any kind of evolutionary imperative
that would have given rise to those capacities.
But they're following a reasoning process,
and the reasoning process itself did begin
because of those evolutionary advantages.
Now, I think that it's possible
that something similar has happened with ethics.
That is, we've developed a capacity to reason,
and that capacity isn't just limited to the things that have an evolutionary advantage for us, but it enables us to see that we are on this planet with
other creatures, that these other creatures, although they're complete strangers to us,
although they have no possibility that they'll ever be able to reciprocate any favours we do to
them. Nevertheless, they're like us, they suffer like us, and reason enables us to see that if my pain is a bad thing for me, then their pain is a bad thing for them, and that leads me to see that it's a bad thing, full stop.
But it's that "if" that's the important point,
because I think most people would be able to agree with you
and get on board with the practical point
that if my pain is bad, if your pain is bad,
then a non-human animal's pain is bad, let's say.
But we're talking about whether or not it actually is bad.
And I think that the analogy you give with mathematics
can apply here as a pure egoist,
which I'll continue to defend,
I think that I can say that, evolutionarily, I can explain the development of my moral faculties through the hedonistic principle, either by spreading my genes or through a kind of reciprocal altruism, and that's how it came about. But now I have a moral faculty that I can apply reason to and extend to things that are far detached from those evolutionary origins, such as caring about non-human animals. But the actual motive, the basis for that, would still just
be a subjective preference for my own pleasure that's evolved naturally. And I think that can
offer a way to compel people to act morally and act in accordance with the moral principles
that we're talking about without having to say that you have to accept a metaphysical claim
that morality exists and morality can be talked about in terms of truth claims.
Well, the motive might be that. I wasn't really talking about motives. I was really talking
about our capacity to understand what's the right thing to do or the wrong thing to do.
I'm saying I think that comes from the motive.
Okay, so we disagree about that.
I think that it's possible that some people get to it
through looking at that motive in the way that you've described.
I'm not going to say that that's impossible.
But I would want to claim that even if you were somewhat short-changed
in that empathy department,
if you are capable of reasoning,
you would be able to get to this through the rational pathway
that I've described as well.
And what would that rational pathway look like for a person who doesn't have that
empathetic quality?
They just don't care.
We talk about moral principles being self-evident.
What does it mean for something to be self-evident in the manner you're describing?
I think what it means for it to be self-evident is that when presented to rational beings
who are thinking calmly and clearly, they will agree with it.
So another example of a self-evident fact would be?
Well, there are self-evident
facts that we may think of as truths that we would agree with, that something can't be red
and green all over at the same time. On some interpretations of mathematics, there are some mathematical truths.
So you're not talking about something like the idea that we can trust our faculties of sight being just self-evident.
You're talking about things like the laws of logic,
which just seems to be just self-evidently true
as a matter of logical principle.
And you think morality falls within that category.
I think it's possible that the most basic axioms
of morality fall within that category,
and then, of course, we work out more specific implications of them.
So we could kind of list "P and not-P cannot both be true," and underneath that you can put "pleasure is good," and they're kind of in the same category of certainty, of self-evidence?
They're not quite in the same category of certainty, but they're reached by the same process.
If something is self-evident, that seems to imply that we can be certain about it.
And how can there be different levels of certainty if they're both self-evident?
Are they both just true?
No, I don't think that, so I don't think that all self-evident truths are necessarily
equally certain. I think there are some which maybe everybody is going to agree with immediately
and there may be some which require more reflection. And the sense of saying they're self-evident
there is simply saying there aren't intermediate steps. It's through reflection on the nature
of pleasure that we conclude that it's good. And there's no further chain of argument that I can
put in between the experience of what pleasure is, the reflection on that, and the conclusion that it's good.
Yeah.
Now, I'm about to ask you a question that given your philosophical history,
you might say it's a nonsense question, but some people don't see it that way.
When you say we can reflect on the nature of pleasure and see that it's good,
what does that adjective actually mean?
Well, if we're prepared to talk about values, it means that the universe is a better place if it has that in it,
and I would say if it has more of it in it.
Better in what sense? Like, better for you, surely.
No, better. Better, full stop.
I don't want to say that all values are only values for someone.
I want to say that we can, we can imagine different universes,
some with lots of pleasurable experiences in them.
And let's say, just to make it simple, no painful experiences,
and others with lots of painful experiences and no pleasurable ones.
Now, it's true, of course, that it is better for the sentient beings.
If we imagine they're the same sentient beings in those two universes,
it's better to be in the one with pleasure in it.
But I also think we could say it's a good thing that this is the universe
that exists rather than that possible one.
So if I'm somebody who can live in this possible world where I have a moderate amount of pleasure, and I'm having a good time, but I imagine another possible world where the overall pleasure is higher but my position in it would be lower, and I wouldn't be experiencing as much pleasure, rationally it would seem that I would have to say it would be a worse place for me to live.
It would be a worse place for you to live, but it would be a better universe all the same.
But you couldn't say it would be a better place to live.
Well, when we're talking about ethics, surely we need to be talking about what we can do to make the world a better place to live. You could say it's a better place, clearly you could say it's a better place for the average being living in that world, but why should we? That's the thing. Like, why should I care about the average being if, when I'm put in that world, it's going to be worse off for me? I'm not going to enjoy it, I'm not going to have a good time.
Well, I'm not sure why you keep pushing the idea that you're not prepared to trade off your own interests for the sake of any other value, that everything has to come back to a kind of egoism. And I find that an implausible position.
I don't know whether you're talking about it in terms of really what's of value or what's what we ought to do
or whether you're talking about it in terms of psychological motivation.
I think they're different.
I don't really think it's the right position in either terms,
but I can see that it's somewhat easier to defend on the psychological plane than it is on the plane of reasoning.
I think the place it comes from is, do you think that there can be an action committed
that has no personal benefit?
Yes.
And I'm talking about doing something which not just isn't worth the personal benefit it brings you, but brings you none whatsoever.
I certainly think it's possible to do that, yes.
Could you give an example, perhaps?
Well, I know several people who've donated a kidney to a complete stranger
because they accepted that they can live quite adequately with one kidney, whereas there are people on waiting lists for getting a kidney
who have very poor quality of life on dialysis, who may die before they ever get to the point
of having a kidney. And they didn't derive any pleasure from the knowledge that it helped someone
in that manner? I certainly don't think pleasure that they were helping anyone was the
motivation for doing what they did. I think the motivation was that they could make a bigger
difference to someone else's life than it would cost them.
But that's what I mean. Like, if the motivation there is the fact that they could do something for someone else, well, that's a pleasurable experience.
Well, is it? I mean, why are you writing this into it, right? You seem to be denying that somebody could act just for the fact that he or she was doing a greater benefit for another person.
Well, I don't think you can. I think that you kind of have to act in accordance with your pleasures and preferences.
Preferences, I'm not necessarily going to deny. I mean, people might have preferred to benefit others rather than to benefit themselves.
I see. I see where there might be a divergence here, because I know that you used to call yourself a preference utilitarian and now you call yourself a hedonistic utilitarian. Do you still see those as different things?
Yes.
Because you see a difference between somebody's preferences and somebody's pleasures?
You can prefer things that do not increase your own pleasure, definitely.
Now, in Practical Ethics you give an example in the introductory chapter of a poet who decides to live a life of diminished pleasure in order that she will write better poetry.
Right. Because the preference is to write good poetry, even if that's at the expense of the pleasure.
But surely the immediate response to that is to say, well, the reason someone wants to be a poet is because of the pleasure they derive from being a poet, and so you're diminishing one type of pleasure, or one means of pleasure, to increase your ability to write good poetry. Well, the pleasure you receive from writing that poetry, the pleasure you receive from the knowledge that you're living the life that you want to live, must outweigh the pleasure that you're sacrificing.
Let me ask you this.
Is pleasure a state of consciousness for you?
Yes.
Yeah, yeah.
Well, people do things that are not going to affect
their states of consciousness.
For example, they make dispositions of their assets after they die.
They're not going to be around to witness that disposition.
And, you know, there may be many other ways.
Derek Parfit has this example of meeting a stranger on a train,
talking to that person, getting to like them,
and then discovering that they are having a serious operation for a disease.
And then the person gets off the train. You didn't exchange any contact details; you'll never see this person again.
But Parfit says you can have a preference
that the person's operation will go well,
but it'll never affect your consciousness.
You'll never know whether that person's operation
did go well or not.
But it's already affecting your consciousness.
So when you leave inheritance for people after you die, or leave requests for the way you want your body to be treated or something like that, the reason why you would do that, in terms of the defense that you could give for the psychological hedonistic motivation, is that you derive pleasure now, while alive, from the knowledge that when you are dead other people will benefit from that.
I accept that you can offer that explanation. I don't think it's very plausible in terms of the amount of effort that people often put into trying to arrange things for after they die. Or, you know, another case might be trying to complete some book that they're writing before they die, where perhaps it makes life much more difficult for them, and they're not going to be around very long before they die anyway. It doesn't seem like a good trade-off.
Yeah, but I mean, they don't have to be right about it. That's the difference; that's like an epistemological point. They could be wrong. I mean, they could decide to do something, they could decide to make a sacrifice, and get it totally wrong and actually have completely diminished their pleasure. But they thought that it was going to increase their pleasure.
Again, you could offer that explanation, but I don't really see why it's necessary to do so, and I don't find
it plausible to do so. I mean, there's an anecdote about Hobbes that goes along the lines of what you're talking about. Hobbes was walking through London with a companion, and a beggar came and asked for money, and Hobbes reached into his pocket and gave some coins. And the companion thought, aha, I've refuted you now, because you're an egoist, but you've just given money to this beggar. And Hobbes said, no, I gave the money to the beggar because it made me happy to see the look of pleasure on the beggar's face. It's always possible to say that kind of thing. And, you know, maybe Hobbes was speaking the truth about himself. But to assume that everybody who
does something like this is doing it in some way to increase their pleasure, it just seems to
dilute the notion of what pleasure is to a point that we may not really be talking about the
same thing. We're just sort of every time someone has a preference for something, we're putting a
little subscript saying, and therefore gets pleasure out of it.
So when you talk about Hobbes, I think it goes deeper than just, well, I liked the smile that I got back from the beggar. I think it speaks to an important part of our human nature: that, like I say, we've evolved to care deeply about other people. And so it's not just some triviality. The pleasure that I receive from helping somebody else is not a trivial pleasure. It's one of the deepest pleasures that I can have, because it's so ingrained into the fiber of my being. So I can see why, at first glance, it appears totally trivial. It appears like, yeah, okay, so technically you derive some pleasure from doing this good act, but that can't be the main motivation all the time. But I think it can be, if we see it for what it really is, which is so much more than just that baseline pleasure.
Yeah, I think if you're going to talk about the way in which we've evolved,
you're still going to have some problems because, unfortunately, from my perspective,
we don't have a very strong inclination to help strangers far away from us.
And in particular, we don't have a strong inclination to help people who we can't even see as identifiable recipients.
So there's this well-known phenomenon of the identifiable victim, which we saw in the case of the boys in Thailand who were trapped in the cave, right? There were these 12 boys and their coach; we knew who they were, could see their parents on television and so on. And there was huge concern over those 12 or 13 people, including the coach perhaps it was, and enormous amounts of money were offered and spent in order to rescue them. And I'm happy that they were rescued.
But when people are asked to do something for people that they can't identify, such as, will you donate to provide bed nets for children in regions that get malaria? Of course, you can never identify whose life your donation has saved, because you can't tell which of the children now sleeping under bed nets would have died had they not had a bed net. And that response is, unfortunately, much weaker.
So, you know, there are lives we could save, for much less than it cost to save the boys in Thailand, by donating to this and similar charities, that we're not saving.
So on your view, there'd be nothing further you could say about that
because you're following the evolved preferences that we have
and they clearly point towards helping identifiable victims rather than unidentifiable victims.
That's where reason comes in.
So I can explain the motivation that we would have for any kind of moral principles in general.
And what I'm saying is that the reason why I think we have ethical concern for people outside of ourselves
is because of these evolutionary reasons that I've just explained.
From that, like with the mathematician, you can then say,
and once we have these principles, let's now use reason to apply them consistently,
in which case you can do what you do so well,
which is to point out inconsistencies in people's thinking.
If you're going to save the child drowning in the puddle, why won't you donate your money to a charity that will save even more people for less of a price?
Well, that's a point of reason.
And somebody might be able to come to you and say,
well, why should I care at all about this,
this whole ethical framework?
And the answer could be, well,
here's an explanation of why you should care about the child in the puddle
and you give a psychological, evolutionary,
motivation for it. Here's why you do, let's say. Here's why I know that if you really thought
about it enough, you would care. That's not the same as saying that they should care, but I can say
that I know what kind of creature you are, I know how you evolve, I have enough knowledge of your
psychological state to understand that what you do care about, as a matter of fact, is that child
drowning in that puddle for these reasons. And since you do care about that child, let's take
the rationale for that and see if it should also apply elsewhere. And then you can, you can draw out
the practical implications of applying it consistently, if you see what I'm saying.
But that doesn't diminish the metaethical point that it's all based upon your own pleasure.
Okay, good.
I think we're making progress.
But I think it does actually cut against what you were saying earlier, in terms of the idea that you're taking pleasure from this. Because you've acknowledged that we have certain desires that we simply have as desires, let's say, to help the drowning child in the puddle or the shallow pond, and you've acknowledged that we don't have similar desires to help the non-identifiable victim, the potential victim of malaria.
And you've said that that's where we can use reason, to say: you care about this, and therefore, to be consistent, you should care about that. Which is fine; I totally agree with that.
But now, looking at that person who has been persuaded by your argument about using reason, I don't see why you're saying that this person is still doing what gives them pleasure. Because it would seem to me that if that was what they were doing, they would be much more likely still to look around for more children in ponds to rescue, because that's really, as you've acknowledged, what gives them pleasure. The other thing they could say is: yes, I'm being consistent, but that's not the same thing.
Do you think people derive a pretty significant pleasure from, or, if you prefer to frame it differently, just say have a strong preference for, having a kind of philosophical consistency? Don't you think that there is a preference within people to have consistent moral principles that they're able to live by?
Yeah, I do think that, I do think that. And certainly I work with that. It's, you know, the kind of thing that can be called cognitive dissonance, if you like: I know that I think this, and that to be consistent I should do that, but I'm not doing that. And that can produce some sort of unease or discomfort. So I think perhaps, if you're trying to defend your position, you would say at that point that what you're trying to do is to avoid some negative experience, perhaps: the negative experience of the knowledge that I'm not acting consistently.
But I'm still skeptical that you could really explain this reasoning process now in terms of acting for my own pleasure, or anything of that sort. I think we've got pretty far away from that.
My only problem is that I don't see how else you can do it. Because I could just as easily say that I can see why someone would think that moral principles are self-evident, but I don't think it's plausible, I don't find it convincing, I don't see how you could possibly account for that kind of thing. It's the same thing both ways. I mean, what can you give me more than just saying, well, it's self-evident? Can't you see that the pleasure is good? I mean, like Sam Harris says, put your hand on a hot stove and you'll just know that the pain is bad. Try and keep your hand on a hot stove; that's his point.
It's like, well, okay, but there's a big difference between saying that, yes, subjectively, when I put my hand on a stove, I experience a subjective feeling of pain. I don't like that pain, I don't enjoy it, which is almost by definition a subjective preference. There's a world of difference between that kind of recognition and the jump to an ontological point: that pleasure as a concept is good. And I can't see why you're able to make that jump.
Yeah, I don't see it as that great a jump, but I guess it does require you to say it's possible for value to exist in an objective sense, rather than just be values for those beings. I think that's where the kind of objectivist metaethic that I'm trying to defend does get difficult, I agree. Because if somebody wants to consistently argue that all values are values for beings, it's not easy to push beyond that.
One way to push beyond it, I guess, is where you're talking about, again, Parfit-style problems about bringing beings into existence.
And that's the question about whether, suppose we have a world with a billion happy beings in it,
whether it would be better to have a world with two billion equally happy beings in it.
And on the view that there's objective value in happiness, it's easy to answer that affirmatively. On a view that says all value has to be value for someone, it's not so clear. Or at least somebody might say, well, if you didn't have the extra billion beings, then they wouldn't exist. They'd never be unhappy. They'd never have missed out on anything.
Yeah.
Well, I mean, to give you a kind of classical, almost cliche utilitarian dilemma to elucidate your own view: do you prefer the society of 100 people who all have a hundred points of pleasure, or a society of 100,000 people who all have 99 points of pleasure, where the average is slightly lower but there are far more of them?
At least at that level, I'm a totalist. But obviously you can keep going, making the world larger and dropping the level of pleasure, and then you end up at the repugnant conclusion, which is a little harder to swallow.
But do you just kind of bite the bullet with the repugnant conclusion? Do you just kind of say: if this ethical theory is sound, and it leads to a thought experiment where we'd have to do something that intuitively is just totally immoral, if that's what the ethical theory requires, then we just have to accept that, because this particular thought experiment is so contrived that we never have to really worry about it, and just say that if it did arise, we'd just have to act in accordance with it?
Yeah, it's not only that it's never going to happen and we don't have to worry about it in that sense. It's also: can we really rely on our intuitions when applied to a situation that is so fantastic that our intuitions did not evolve to cope with it? So, you know, in terms of the repugnant conclusion...
Which for our listeners, perhaps you could just talk about the specific conclusion
we're talking about.
Oh, right. Okay. So you started off by comparing 100 people at a level of 100 with 100,000 people at a level of 99.
Yeah.
And probably most people would say, or be prepared to say, yeah, that small drop in pleasure is worth the fact that there are now so many more people. But as I said, you can just continue with that ad infinitum, and eventually you'll get to a world where you have people at, let's say, 0.001. You know, it has to be positive still for this to work, but it can be life just barely worth living. And yet there are so many people that all of those people at 0.001 add up to more than the 100,000 at 99.
And a lot of people will say: no, wait a minute, now you've gone too far, right? I'm prepared to have more people if the quality of life is still really good, but now you're getting to some very dull, barely-worth-living kind of life, and we've lost what we had, you know. I'm not prepared to accept that. That's the repugnant conclusion to the argument.
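The totalist bookkeeping being described here can be made concrete. A minimal sketch: the `total_welfare` function and the very large population figure below are illustrative assumptions of mine, not numbers from the conversation (other than the two societies the speakers name).

```python
# Total-view ("totalist") welfare: sum everyone's welfare level.
def total_welfare(population: int, level: float) -> float:
    return population * level

# The two societies named in the discussion:
a = total_welfare(100, 100)        # 100 people at level 100  -> 10,000
b = total_welfare(100_000, 99)     # 100,000 people at level 99 -> 9,900,000
assert b > a  # so the totalist prefers the larger society

# Continue "ad infinitum": a vast population whose lives are barely
# worth living (level 0.001; this population size is an illustrative
# assumption, chosen only to be big enough).
c = total_welfare(20_000_000_000_000, 0.001)  # roughly 20 billion
assert c > b  # ...which is the repugnant conclusion
```

On the total view the comparison always goes through once the population is made large enough, which is exactly the pressure toward the repugnant conclusion that the conversation turns to next.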
Now, a lot of philosophers, including Parfit himself, tried hard to work out ways in which you could accept a view that did not have that implication. Parfit never... well, I shouldn't say Parfit never found one, because there was a posthumously published paper in which he put forward some suggestions, which I didn't find totally convincing, I have to say.
People try to set a floor, sort of a baseline, in other words, and say, well, once life sinks below a certain level, then there's no point in expanding, in having more people living at that level. And that level is not the neutral level. It's not the 0.001, but, you know, it's 20 or 30 or something on a scale between 0 and 100. Sorry, I should say, really, on a scale between minus 100 and plus 100.
Yeah, sure.
What we're talking about.
So what do I do about that?
Well, as I say, I think it's really hard to grasp these numbers, and to think of these differences, and to think what this life would be like. So I do find it an uncomfortable conclusion. And on this one, when you say, do I just bite the bullet... I would still like somebody to turn up with a coherent, consistent theory that answers questions about when it's good to bring extra people into existence and when it's bad. And if that theory avoided the repugnant conclusion, that would be a point in its favor.
Yeah.
But there have been a lot of really good philosophers working on this now for 40 or 50 years.
But is the reason why it would be better... You say that if there's a theory that encompasses a solution to the repugnant conclusion, that would count in its favor. Is that because of the intuition that we have that the repugnant conclusion is so wrong? Because if we're talking about ethics being based on self-evidently true moral statements, then we're not talking about intuition. And if we have a moral theory based on what you describe as self-evidently true principles, which then leads to a conclusion that we don't like, surely it shouldn't count in favor of or against the theory whether we intuitively like or dislike it, if that's not what we're basing it on. Whereas that is what I'm basing the moral theory on: I'm able to say that, because we're talking about our own kind of psychological preferences, when we come to a conclusion that seems repugnant, we can use that as reason to distrust it. But I'm not sure you can do the same thing if you're not basing morality on that.
So you said as part of those remarks that I'm relying on intuition, and that if I'm basing things on self-evidence, then I shouldn't be relying on intuition. But I think there are different intuitions. And Sidgwick, who argued that
there are self-evident axioms, described this as philosophical intuitionism; that was the term he used for it. It's philosophical intuitionism because it's different from common-sense morality, which relies on particular moral judgments. So people rely on intuitions to say, you know, it's wrong to lie, or incest is wrong, whatever else it might be. That's what Sidgwick would have called the morality of common sense, which is a kind of intuitionism. That is, specific intuitions, particular judgments.
It's the kind of thing that Rawls also talks about when he talks about making decisions from a position of reflective equilibrium.
We find a kind of equilibrium between a whole variety of intuitions.
I think that there are some things which are not far away from intuitions, and, as I say, Sidgwick used that term, which are things that we see as self-evident. And the ones that I think are more reliable are the more general and abstract ones. The ones that are more specific, I think, often are reactions that we have because they were advantageous to us, to our ancestors, in terms of survival and reproduction.
So that's why I mentioned incest as an example. I think we have an intuition that incest is wrong. And clearly that has a plausible evolutionary explanation: that you will produce more abnormalities if people who are closely genetically related have sex, because in an age without contraception they were then going to reproduce. But I think you can then ask whether that evolutionary explanation supports the intuition that incest is wrong, or actually debunks it. And I think in that particular case, at least, it debunks it. It debunks it for modern circumstances, where we do have reliable contraception and where we don't see other plausible harms.
And the case for that would be adult sibling incest. So, you know, if you ask people, and Jonathan Haidt did this in his research, you describe a circumstance in which an adult brother and sister are spending a night together. They decide, for the fun of it, to experiment with having sex, and they do that. You know, they're both using contraceptives: the woman's on the pill, and the man decides to use a condom just to be safe, so there's no chance of them having a child. And it doesn't, you know, harm their relationship; they continue to be close. They decide not to do it again.
So you ask people: is that wrong? A lot of people will say, yes, that's wrong. And when you ask them why, they either say things that are contrary to the story, like, you know, well, they might have a baby who'd be abnormal, or they just sort of fudge in some way, or they say vague things. So I think that's our evolved instinct speaking there, and I don't think we should rely on that. But you can't give a similar evolutionary explanation for the idea that the good of any one person in the universe is as important as the good of any other.
On a tangential point there: why does the contraceptive requirement of this moral case make a difference? Are we suggesting that it would be immoral for incest to take place where there's the possibility of a child, because that child would be born an abnormal child? It seems to imply the idea that it's wrong to have children who are abnormal.
I think Jonathan Haidt put that into the example to avoid evoking that reason for rejecting it. I think a lot of people, if you didn't, would have said exactly what you said: they might have a child who would be abnormal, and that would be bad for the child.
Wouldn't that be a bad reason anyway? Wouldn't the same response be available as with the other attempted responses they gave? It's very easy to say, well, that's just a bad response. Shouldn't you be able to say the same thing about, well, you'd have a disabled child? Well, so what?
Well, firstly, Haidt was trying to test responses to incest, and these were not philosophers. He wasn't trying to get into a philosophical discussion about those issues. He was trying to test this idea, which he calls moral dumbfounding: that we have these evolved intuitions, and we can't really defend them when they're applied in situations where other reasons for thinking that those acts might be wrong don't apply.
Right, sure. Okay. But a problem that comes up a lot is the sort of implied offenses that come with some of the moral theories we're talking about. Something that I've heard people criticize you for, for instance, is the analogies that you draw between the treatment of animals today and the treatment of black people, or of women, in the past. And people say, isn't that drawing a kind of implicit comparison between the two? I think essentially that that is what you're doing, in terms of drawing a comparison between the moral consideration of both. But how do you respond to the critics who say that it's totally wrong to be suggesting that we can treat the suffering of animals in the same way that we treat the slave trade?
Well, I'm certainly not suggesting that.
I've never suggested that.
I think I'm pretty clear in when I write about that analogy
that I'm referring to certain particular parallels.
That is, that in all of these cases we have a dominant group, an elite (namely whites in the case of racism, typically, and males in the case of sexism) that takes advantage of those who are outside that elite, makes use of them, turns them into slaves in one case, turns them perhaps into practically slaves in the case of men and women in many societies, and, in the case of animals, also turns them into slaves, plowing the fields, or something to ride. But of course today the much more common and convenient use is to use them for food.
And in each of those cases, not only do they do that because of the power that they have over the others,
but they develop an ideology that justifies it.
So, you know, racism came with a whole ideology about the superiority of whites,
and in some cases supported by appeals to religion, to verses in the Bible,
similarly with the case of men and women,
and identically in the case of humans and animals. People justify this by saying, yes, it says in Genesis that God has given man dominion over the animals, so that's why we're entitled to do this. So that's the parallel that I've been trying to draw. I've never said that human sufferings are no different from animal sufferings. I've never said that the differences between humans and animals are no greater than the differences between whites and blacks, or anything like that.
Of course, that would be an absurd claim.
Yes, that would absolutely be absurd.
I hope nobody listening thinks that I was implying
that you'd made that comparison.
I meant the comparison that you mentioned: the ability to feel pain. And I'm asking in terms of sensory pain alone. So we're not talking about psychological pain, which I know kind of comes along with it, but just the faculty for feeling pain. I know you wrote in Animal Liberation that not only might non-human animals feel just as much pain as humans do, but they might in fact feel more pain. And also, tagging on the idea that we were talking about a moment ago, of a society in which more people are slightly less happy, because we place a value on more people being in existence having pleasurable experiences.
Putting all this together, I want to ask a difficult question that I've reflected on a lot, and which, when I've been talking about this, I've struggled to address. Considering the different extent, in terms of the number of sentient beings actually involved and how frequently and how badly they're being treated, in terms of moral wrongness, on this kind of more enlightened view based on the intrinsic value of the pleasure of sentient beings, what was more bad, or what is more bad: the modern animal agricultural industry and factory farming, or the historical slave trade of human beings?
Those are very difficult comparisons to make, I think. Because I'm certainly prepared to recognize the suffering of Africans taken from their homes and their families and treated as slaves. And then, even when they got to the New World, obviously families were broken up; if they had children, they might be taken away and enslaved. And they had a different awareness of their situation and different possibilities. So it's very hard to compare what they were suffering with what non-human animals suffer. And, as you say, the numbers are vastly larger for non-human animals.
Yeah, that's the thing that I think makes the difference. Because I think you can easily say that, because of the psychological trauma involved in the slave trade, it was far worse for the individual. But because of the sheer number, and the fact that it's likely to continue, if we're going to use a principle and kind of look at it mathematically, there must be some number of animals suffering that would outweigh a number of human beings suffering. And with the sheer scale of the current agricultural industry, if there is such a number, surely we must have passed it by now.
I agree that in principle there must be a number. Granted, obviously there are some forms of slavery that still exist, but let's say we're talking about Europeans taking Africans to the New World, the slave trade, and all of the terrible things that happened to slaves there. So that's now finite; it's over. And I don't know what the number is, but it's however many tens of millions, perhaps, but certainly small compared to the 74 billion animals that are currently raised and slaughtered for food each year.
It's not even close.
Not even close, true. But I'm not prepared to say whether that number has already been passed, whether it's worse. It's possible that it has. I'm also not going to say it hasn't.
But I certainly think that, yes, in principle, the amount of suffering that we inflict on animals could mean that speciesism as such, as an attitude, and all the practices that flow from it, have actually done more harm, caused more suffering, and in that sense been worse than all of the terrible things that slavery did as well.
I think that's enough to answer the question. Because I think the difficulty in that question lies in the intuition that many people have to just say that there is no number of animals that could suffer and die that would possibly outweigh something like the slave trade. Because of the sensitivity surrounding it, it seems incredibly offensive to suggest that that could be the case. But I think that, morally, if we're going to be mature about it, if we're going to be principled and consistent about it, we have to admit that such a number would be reached. But I'm interested again, speaking of the difficulty that people would have actually accepting these moral principles, intuitively speaking.
Another thought experiment would be something like this. Suppose we were able to abolish the factory farming industry tomorrow, but in order to do so (and I know this isn't the case; I'm not suggesting that this is what comes about through a vegan diet, but just in a hypothetical situation, in a possible universe where it does), all human beings have to live mildly fatigued. Not severely, not such that they can't get out of bed, but enough that they're noticeably tired every day, and they're pretty uncomfortable about it. The pain of doing that would be nothing compared to the pain saved from the agricultural industry. But could we really expect human beings to accept that kind of arrangement, to diminish their well-being significantly, but not so significantly that it outweighs the thing that they're saving? I feel like if you were to propose such a situation in that possible world, if you were to stand up in parliament and say, this is the law that we should bring in, they'd probably laugh you out of the room. And would they be wrong in doing so?
Well, you may be right about what they would do, but I do think
they would be wrong to do so. And the phrase that you use, you know, bring it up in parliament and they laugh you out of the room, is in fact exactly what happened when the first animal cruelty law was proposed in Britain in the early 19th century. I don't know, 1810 or '12, or something like that, when somebody proposed a law about, I don't know, beating cattle that you were driving to market, or something. And I think it was Humanity Dick Martin, I think his name was; he was known as Humanity Dick after that, obviously. And he was laughed out of the room, and it took a decade or so, I think, before he brought it in. So the fact that you're laughed out of parliament doesn't mean that you're not right, clearly. And I think you're correct to say that people would not accept that, would not accept it now, and possibly will never accept it. But again, that doesn't show that it wouldn't be the right thing to do.
But should we expect... I mean, I don't mean should we expect it as a matter of prediction, do we think they will, but morally speaking, should we expect human beings to accept that kind of arrangement?
Yes, morally speaking, I think we should.
But as you rightly pointed out, that's different from predicting that they ever will.
What is the best approach, I mean, to get someone to understand that, if in order to save the suffering of animals they have to give up meat and all dairy products, and they also have to break their arm, in order to get to this moral paradigm that we're talking about, how can we possibly go about convincing somebody that that would be worth it?
Worth it for them, maybe not. But again, I would want to convince them that that was the right thing to do. And then at least some of them, perhaps because of what we were talking about earlier in terms of that desire to be consistent and to do what they see as the right thing, might then do it.
But, yeah, you know, in general I hold quite a demanding ethic, not only with regard to animals, but with regard to what we ought to do for people in extreme poverty. And I recognize that, the way people are at present, they're very unlikely to fully comply with what I see as the right thing to do. But if we can incrementally push them along to get closer to it, perhaps one day, not that I'll live to see it, but perhaps one day people will start to think more along the lines that I think they ought to.
Do you think that the demandingness of an ethical theory can ever be a criticism of its ontology?
No, I don't think so. In no circumstances. Not simply the fact that it's very demanding. I think theories can be very demanding just because that's the way the world is. And they're demanding to us because, as we've been saying all along, we're creatures who have evolved from ancestors who acted in their own interests and in the interests of their offspring, and we would not be here if they hadn't, and we still have a lot of those same characteristics. And that's why they're demanding to us, but that's not a reason to show that it's not the right
moral theory.
So, in the cliche hypothetical of some advanced civilization coming and discovering us, and you can imagine thousands of examples like this, but to take a simple one: suppose their mass production of us for meat to eat genuinely does, as a matter of the psychological states of their brains, bring them more pleasure than we could ever experience in a lifetime, including balancing out the pain experienced by living in such a world. Could we be morally expected to just throw ourselves on the dinner plate, because that's the right thing to do?
So this is, again, the "expect" of morality, not of prediction that we're ever likely to do that.
No, but it's: should we? And I think that some people would say... I mean, people would be able to accept: sure, okay, according to the ethical theory that we're talking about, actually the right thing to do there would be to say, okay, take me, cut me up and eat me, because I know that will maximize the pleasure. But the sheer demandingness of that seems to at least count against it, in some small sense.
Look, I mean, you know, because it is so demanding, it's only something that philosophers are going to talk about. And that's more or less the example that Bernard Williams puts up in the article "The Human Prejudice", which I suppose is a kind of critique of views that I've defended. He ends up saying that the only question to ask then is: whose side are you on? But I thought that was really a letdown.
I mean, that question, whose side are you on, was asked, say, of people who didn't want to go and fight in the First World War: what, you're not a good Briton, you're not fighting for king and country? But in that war, at least, it would have been better if a lot more people had said, no, I'm not just going to take sides because this is my country, and, you know, the Germans are taking sides because that's their country. We could have saved an awful lot of unnecessary bloodshed if more of us had said: it's not a case of whose side am I on; it's a case of what will do the most good. And I think, therefore, it's not right to simply say that we can resolve this moral dilemma by asking whose side are you on.
Yeah, because look, I'm not saying that in a situation where the demandingness is,
like, you have to give up your life and your family and your home and everything,
you have to completely desolate yourself in order to live by this moral standard,
I'm not saying that the demandingness of that would be enough to discredit
doing so. I'm just saying that in some small sense it at least counts against it.
Nah, maybe. I'm not sure that I want to accept that, but I suppose...
But the problem is that if it does, then we've got a problem, because now we're not talking about
whether demandingness affects a moral theory at all; we're talking about how much it affects a moral theory.
Yeah, right. Yeah, and that's why I'm reluctant to say that it does at all, I guess. But, you know,
obviously it would be nice if the moral views that came out to be the right ones were also ones
that we could really expect people to follow, in the sense that most people would, and
we could then start to pick up the laggards and encourage them, until eventually
we got to the point where everybody was doing that. That would be nice. I would be
happy if moral theories that I believe to be true were like that, but in at least
many areas of life, I think they're not. Yeah. I just have to accept that.
But we can see these hypotheticals, and it's just that a moment ago,
when I asked you if you think demandingness can count against moral theories, you said no quite
confidently. But with this example, I don't see any good reason to think that it doesn't.
Well, I'm not sure that I see a good reason to think that it does, though, either.
I think probably, on general principles, I want to say demandingness is not something that
counts against the truth of a theory. I suppose one possible response would be to say that the level
of psychological trauma involved in somebody actually doing this thing, or the level of commitment
it would require, would be psychologically impossible. And since ought implies can, because
you can't put yourself on a dinner plate like that, you couldn't bring yourself to do it,
it would not be possible psychologically to put yourself in that position or accept that
system, and because of that impossibility, you therefore can't morally oblige people to do
something that they physically can't do. That's one way out, I suppose.
Yes, and then we have to discuss whether the ought-implies-can principle is met, in the sense of "can't" by what you described as psychological impossibility, and exactly what that means.
There's a sense in which it's not really impossible to throw yourself on the dinner plate, and maybe one person in a million would do that,
thereby showing perhaps that, in some sense, it's possible for anyone to do it.
Well, it's problematic, because at the moment the only good moral response I can see
to this problem of demandingness
is to make this point that actually it's psychologically impossible.
But if we're going to speak technically, then in very basic moral decision-making procedures,
if people just aren't of the psychology to act morally,
and they just happen to be inclined to act immorally,
then, technically speaking, it's psychologically impossible for them to have acted differently as well.
So you run into a big kind of roadblock, and free will comes into it,
and you run into this thing where actually you can't make any ethical prescriptions,
because they're all psychologically impossible or they're all psychologically necessary.
So the only good response I can see against the demandingness criticism,
that is, in defence of the idea that demandingness doesn't count,
leads to far more problems than the demandingness consideration would if we just accepted it.
Yeah, I think that's a good argument for it.
So, I mean, I would therefore conclude that perhaps demandingness does play a role
in determining how good a moral theory is, because the only other
alternative to me would be to say that we can't make ethical prescriptions.
Oh, I didn't think that was where your argument was leading.
Maybe I missed something about...
I think that must be where it takes us.
So I thought rather it was going the opposite way,
that once we start saying,
aught implies can and this is psychologically impossible for you to do,
that then we're going to end up with a whole lot of actions
that people don't in fact do,
being ones that they couldn't do.
and therefore us not being justified in saying that they ought to do them in the first place.
And if that's the case, then I think we should just reject the connection between
demandingness and the plausibility of the theory, because then we don't get into that
particular trouble.
Sure, so you're saying we should reject that connection between demandingness and plausibility.
But my point was that the only way we can reject the connection between demandingness and the
plausibility of the theory, as far as I can see, is through this argument of psychological impossibility.
No, I don't think that's right. I think we can reject it just by saying that how likely it is
that people will ever comply with a moral theory is independent of the truth of the moral theory,
and the moral theory is true regardless of whether anybody will ever act on it.
Yeah, I think that's fair. And do you think it will be long before we reach a situation where
animal ethics is taken as seriously? Because right now I think we can both be in agreement that,
ontologically speaking, there are essentially true things to be known about
the immorality of the meat industry, for instance, and, like we say, that holds
regardless of how many people actually agree with it.
Do you think we're getting there, since 1975?
I mean, I know that looking at it in isolation,
looking at where we are in terms of the way we're treating animals right now
seems absolutely hopeless.
It seems like we're so far off that it's almost not worth even trying
because it's just so unthinkably wrong
and so unthinkably difficult to change.
But then I also look at the progress that's been made since the 70s,
and I'm kind of, I'm conflicted here.
I don't know where your kind of level of optimism lies.
So I'm somewhat optimistic about us making progress on the level of ideas and attitudes,
particularly in, well, I don't quite know how to describe them now,
what you might have called Western nations, so the nations of Europe and North America
and Australia and New Zealand and a number of other countries.
And I'll explain in a moment why I think there's progress.
In terms of the treatment of animals,
unfortunately, because of the increasing prosperity of Asia,
in particular China,
the number of animals in factory farms now is much greater than it was
when I wrote Animal Liberation in 1975.
So in that sense, you could say we've gone backwards,
in that there's more human-inflicted animal suffering now
than there was in 1975.
Right.
But in terms of the progress in a number of countries,
I think it's quite impressive.
And just to give you an example,
I gave a talk at Durham last night
and I stayed overnight.
I was staying in the castle,
which is also a residential college
of the university.
And so I walked into the kitchen
where students eat breakfast,
and I was offered vegetarian sausages
as part of the breakfast.
When I wanted to put something on my muesli,
there was soy milk standing there.
You know, in 1975, nobody would have thought of either of those things.
If there was any kind of choice... As I described in the preface of Animal Liberation,
the episode that got me thinking about animals was when I walked into Balliol College in Oxford,
and I was with somebody I'd only just met.
And there was spaghetti with a sort of brown meat sauce on top.
It was the only hot dish available.
But there was a salad.
So my friend, Richard Keshen, said, is there meat in that spaghetti sauce?
And when he was told that there was, he took the salad.
And that led me to asking him why he was doing that.
And that really led me to thinking about animals and to writing Animal Liberation.
But that was the only choice you got.
You know, there was no vegetarian hot dish offered, even for lunch or dinner, let alone for breakfast.
So there's a sense in which these things are much more accepted and they're accepted
because there are, at least particularly around universities, but not only,
a lot of people who are aware of issues with eating meat,
many of them animal-related issues, many of them are, of course, also climate-related issues.
Yes.
And as part of that progress, I think there have been a number of specific legislative improvements.
So, again, not everywhere, but if you look at the European Union,
which is a reasonably large and diverse entity:
throughout the European Union,
it's illegal to keep hens,
laying hens I'm talking about,
in the kinds of cages that I described in the first edition of Animal Liberation.
The cages have to be significantly larger.
They have to have nesting boxes for the hens to lay their eggs in
rather than just on bare wire.
It's similarly prohibited to keep veal calves in crates that they can't even turn around in,
that are so narrow they can only take maybe half a step forward or backward
and otherwise can't walk at all. Similarly for the sows, the mothers of the pigs
who are sent to market: they were also standardly kept in stalls like that. That's also illegal
across the entire European market and in some jurisdictions outside Europe as well. So I think
those things are significant progress and they're particularly progress in terms of signs of people's
attitudes to animals having moved in a positive direction, not nearly far enough, of course,
as we've been saying, and unfortunately not worldwide. But it's a reason for not just despairing
about the whole thing. But I mean, is that something to celebrate? Or is it
something to say, it's about damn time, what's next? It is about damn time. And, you know,
But I think you do need to have some celebrations, actually.
You know, you were talking a lot about psychology and what we can expect from people.
Sure.
I think that if people in the animal movement focus only on the continuing atrocities that we inflict on animals,
they will feel that it's all hopeless and go away and not do anything.
I think it's important to think of the positives as well.
I see, but to give listeners a point of reference:
I remember when it was legalized for women to drive in Saudi Arabia.
And it was celebrated all over Twitter.
People were so happy about it.
I remember thinking, what are you all talking about?
Why are we celebrating this?
That's absurd.
Like, it's not morally virtuous to do this;
it's a moral obligation.
And so it's not, well done for having done this;
it's, you're awful for not having done it so far, if you see what I'm saying.
It's kind of looking at it in the wrong framework.
And I look at a lot of the things that are happening now.
Like, I wonder whether perhaps I'm more sympathetic, then, to the kind of abolitionist approach rather than this kind of incrementalist approach.
But I look at things like when people do meatless Monday or something.
I'm interested to see what you think about this because to me, it's like the equivalent of saying, well, you know, I let my slaves run free on a weekend.
It's like, well, that's not good enough.
If you recognize that it's bad enough to stop doing it on Monday, then why are you still doing it on Tuesday?
So that's the way I view it.
How do you feel when people have these kinds of approaches, where it's like, we'll cut down a little bit?
That seems to imply a recognition of the immorality of it.
And yet why is that not enough to make them stop altogether?
Yeah, you're looking at it from the point of view of what attitude should we have to the people who understand fully the nature of the problem
and are still eating meat on Tuesday and congratulating themselves for having meatless Mondays.
That's one perspective, and I don't really disagree with you about that.
But another perspective is to say: if we could get everybody in the UK, let's say, to have
meatless Mondays, that would be the same as getting one-seventh of the population of the UK to
become vegetarian.
And we're more likely to succeed in getting everybody in the UK or most people in the UK to
give up meat one day a week than we are to get the equivalent number to give up meat all the
time. So from the point of view of reducing animal suffering and reducing our contribution to climate
change, let's do the tactic that is more likely to have those beneficial effects. So from the
campaigner's point of view, I think it makes sense to campaign for Meatless Mondays.
I just don't know if I could do it, because it would seem to betray my moral principle. It would seem
to imply that I'm not taking it seriously if I'm willing to falter on it,
if I'm willing to make what are essentially, what's the word?
Compromises? Yeah, compromises. You're essentially compromising on your ethical principle. And
this is something I see as the most important moral emergency of our era. How can I be
expected to compromise on something as important as that? I mean, we're talking about
unthinkable levels of suffering happening every single day, every single minute;
in the course of this conversation, an unthinkable level of suffering has gone on for no other reason
than the fact that people just like the taste of meat. I can't see myself saying,
well, that may be true, but maybe we should just kind of loosen our approach and say that
it's better to do something than nothing. It's like, no, this needs to end
now with no moral exception as a matter of moral principle.
And I feel like if I can't express that in the form of activism, then I'm betraying myself.
Okay, then I would say to you, don't go down that path, continue to act for animals in a way that
you feel is not betraying yourself and is consistent with what you believe.
But if there are other people who are more pragmatic by temperament in terms of what they do
and feel that they're not betraying themselves
because they are reducing animal suffering,
don't oppose them,
let them get on with what they're comfortable doing
because we should recognize that it is having good consequences.
Do you think I'd be doing more harm than good
to be an activist with that approach,
with the abolitionist approach?
Not if you don't attack the other groups.
I think the abolitionists who perhaps have done more harm than good
are those who have actually spent a lot of time
and energy in trying to thwart the incrementalists.
And that's really, really such a waste, I think, of energy that could be used in a good
direction.
I remember reading, I don't know where you said that, or even if you said this, because
I read somebody had said that you had said it, maybe you didn't.
This is a good opportunity to check.
But they said something like you were once asked if you order a meal at a restaurant and it comes
with cheese on top or something.
And you've got the choice between sending it back and saying, you know, give me what I asked
for or just kind of shrugging your shoulders and eating it, it might be best to just shrug your
shoulders and eat it, because in front of your friends, you don't want to make it seem like a
difficult thing to do. You don't want it to seem like you have to be that guy, that you want it
to be more appealing and easy for them to jump on this bandwagon. But the reason to do
the opposite and send it back is to say that if you do just say whatever and eat it with the cheese,
then people will look at you and say, oh, well, he's not taking his moral principle seriously,
so why should I? I mean, which approach do you take? And is that something that you said?
Yeah, it probably is something that I've said, yes, and that that is consistent with the kinds of things that I think about.
So you would just eat the cheese?
In that case, yes. It's come anyway,
and we assume that if I send it back, it's just going to be thrown out; it's not going to do any good.
And let's also assume that I know that I'm with friends who understand me reasonably well,
and they know that I didn't order the cheese and that I wouldn't have ordered it, but in these circumstances I would eat it.
Yeah.
Okay, so that's circumstantial.
Let me give you another. In fact, I can give you an actual example: I suffer pretty badly from hay fever,
and this morning I went to get some hay fever tablets, and none of them were vegan, essentially.
Now, do I say to my friends who suffer awfully from hay fever, who are just horribly snotty, eyes itching everywhere,
no, no, you can't get your cetirizine hydrochloride to make that go away, because it contains animal products?
Or should I just say, no, no, it's fine,
just get it because as long as you're kind of trying your best,
if we kind of have this cultural philosophical revolution in terms of food,
then the rest of the industries will follow along,
so it doesn't matter too much.
And you don't want them to think like it's too hard to do.
Because if I say to them, no, you can't have your hay fever tablets,
then they're probably not going to want to go vegan.
So what about a situation like that, where it's not like an accidental thing?
They have to actively go and buy that product.
But if I tell them they can't, then it's going to be much harder to make them go vegan.
What should I, what counsel should I give my friend in that situation?
Well, for me, I think you can trade off the benefits to them,
which in this case seem to be very great,
against the relatively, well, very small contribution
that you're making to additional animal suffering
to the profits of the animal industry in this case.
You know, this is my utilitarianism operating here, clearly,
that I think, and I, you know,
when I talk about this in Animal Liberation and elsewhere,
I think really what I'm writing is addressed, in terms of what they eat,
to people who can walk into a supermarket, find a wide array of food which is both vegan and non-vegan,
and nourish themselves adequately from the vegan selection.
And then they ought to take the vegan selection.
So then people will say, you know, okay, but what about if you're living in Alaska,
you're an Inuit, you've always traditionally gone fishing, and for a lot of the season
you wouldn't really be able to nourish yourself well?
Well, you know, that just seems to me to be a completely different situation.
And nothing about my views would imply that we ought to go up there and tell
the Inuit people that they shouldn't be eating fish.
So they're okay to be doing so?
Yeah. If that's the way that they live, and they don't want to move to the city,
which, you know, would disrupt their lifestyle and all the rest of it,
I'm not going to go and tell them that they should stop.
I mean, I could just as easily say, my lifestyle is one where I like to eat meat,
I like to shop at certain stores, I like to go to KFC. Who are you to tell me
that I need to completely uproot my entire diet, go to a different
shop, maybe spend more money, have to learn about nutrition, have to kind of take a course,
make sure I'm getting everything right, check all the labels, all this kind of stuff? It's so,
so inconvenient.
It's not that inconvenient compared to uprooting your lifestyle.
Sure, but then uprooting your lifestyle in the way that an Inuit would have to
is nothing compared to the suffering that the animals are going through.
Still, I think, you know, maybe in some future world
when people in Europe or North America or wherever
are not causing more suffering to animals
than traditional hunter-gatherers are doing,
then we might think about having that discussion with them.
But I just find, on the utilitarian principle, it's difficult to suggest that
the pain that somebody would have to go through in uprooting their life and moving to the city
is in any way worse
than the pain that the animals who they're currently eating are experiencing.
I'm not saying it's worse, but I'm saying that there are cases,
there are so many people who are inflicting more suffering on animals
with much less cause for doing so that that's where we ought to be focusing our concerns.
But then, so my answer to that would be that it's still wrong for those people to eat meat,
but maybe we shouldn't be focusing our concerns there.
I'll agree with you on that.
Like, we should really be focusing on the more important issues, but they're still wrong.
Well, they're still doing what is morally incorrect.
Yeah, okay.
I'd probably accept that.
Sure.
Okay, well, that's fantastic then.
I think given the breadth of the disagreement that we've had throughout the conversation,
the point where we can agree might be a good place to end.
So it's been a pleasure and a privilege to have you here.
And it's going to be great.
I might see you.
I know you're speaking at the Oxford Union tomorrow,
debating the motion, This House Believes It Is Immoral to Be a Billionaire. Is that right?
That's correct, and I'm opposing the motion.
You're opposing, which some people who've read Famine, Affluence, and Morality might be surprised about.
Yeah. See, when I saw you on the list I thought, that makes sense, I could see why you'd do that.
And I remember when I told friends, I said, we should go to the Union,
Peter Singer's coming to debate this motion, and they all kind of just go, he's in proposition, right?
And they're amazed to find not. But I imagine, because they film all the events, and I'm not sure when
this episode's going out, there's a chance that that footage might already be online by
the time people are listening to this. So it's something they can go and listen to, but I think
that would be an interesting one. Good. I hope they will listen. And I'll try to be there. But yeah,
thank you, thank you for being here. It's been a great conversation. I'll remind my listeners that
if you enjoy this podcast, it really helps to give us a rating on iTunes. It helps us with the
algorithm, puts us on the front page, and better statistics mean we can reach out to wonderful guests, as we've
been able to do so far. So thank you all for listening. Thank you for staying with us.
With that said, I've been Alex O'Connor, as always, and today I've been in conversation with Professor Peter Singer.
Thank you.