Making Sense with Sam Harris - Making Sense of Foundations of Morality | Episode 3 of The Essential Sam Harris
Episode Date: January 5, 2023

In this episode, we try to trace morality to its elusive foundations. Throughout the compilation we take a look at Sam's "Moral Landscape" and his effort to defend an objective path towards moral evaluation. We begin with the moral philosopher Peter Singer who outlines his famous "shallow pond" analogy and the framework of utilitarianism. We then hear from the moral psychologist Paul Bloom who makes the case against empathy and points out how it is more often a "bug" in our moral software than a "feature." Later, William MacAskill describes the way a utilitarian philosophy informs his engagement with the Effective Altruism movement. The moral psychologist Jonathan Haidt then puts pressure on Sam's emphasis on rationality and objective pathways towards morality by injecting a healthy dose of psychological skepticism into the conversation. After, we hear a fascinating exchange with the historian Dan Carlin where he and Sam tangle on the fraught issues of cultural relativism. We end by exploring the intersection of technological innovation and moral progress with the entrepreneur Uma Valeti, whom Sam seeks out when he encounters his own collision with a personal moral failure.

About the Series

Filmmaker Jay Shapiro has produced The Essential Sam Harris, a new series of audio documentaries exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you'll find this series fascinating.
Transcript
To access full episodes of the Making Sense podcast, you'll need to subscribe at SamHarris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only
content. We don't run ads on the podcast, and therefore it's made possible entirely
through the support of our subscribers. So if you enjoy what we're doing here,
please consider becoming one.
Welcome to The Essential Sam Harris. This is Making Sense of the Foundations of Morality.
The goal of this series is to organize, compile, and juxtapose conversations hosted by Sam Harris into specific areas of interest.
This is an ongoing effort to construct a coherent overview of Sam's perspectives and arguments,
the various explorations and approaches to the topic, the relevant agreements and disagreements,
and the pushbacks and evolving thoughts which his guests have advanced.
The purpose of these compilations is not to provide
a complete picture of any issue, but to entice you to go deeper into these subjects. Along the way,
we'll point you to the full episodes with each featured guest, and at the conclusion,
we'll offer some reading, listening, and watching suggestions, which range from fun and light to densely academic.
One note to keep in mind for this series. Sam has long argued for a unity of knowledge where
the barriers between fields of study are viewed as largely unhelpful artifacts of unnecessarily
partitioned thought. The pursuit of wisdom and reason in one area of study naturally bleeds into, and greatly affects, others.
You'll hear plenty of crossover into other topics as these dives into the archives unfold.
And your thinking about a particular topic may shift as you realize its contingent relationships with others.
In this topic, you'll hear the natural overlap with theories of free will, political philosophy, violence, belief and unbelief, and more.
So, get ready. Let's make sense of the foundations of morality.
Sam's most important thesis might be the one we'll be exploring in this compilation.
It's possibly his most essential argument to grasp in order to understand his positions in the areas of politics, violence, charity, income inequality, and even atheism and religion.
He first set the argument down in book form when he wrote The Moral Landscape in 2010.
He also delivered a TED Talk, which compressed the argument's central themes into a 15-minute presentation.
That talk was entitled, Can Science Answer Questions of Morality?
Naturally, both the book and the video are recommended to pair with this compilation.
As we explore Sam's conversations
on this subject from the Making Sense Archive, we'll be treading into the exhaustively discussed
philosophy of morality. There's an endless taxonomy of positions in this field.
The ensuing picture can look like a wildly overgrown and gangly family tree,
pointing to countless frameworks with names like
consequentialism, utilitarianism, virtue ethics, care ethics, constructivism, nihilism, divine
command theory, and deontology. But at the base of that tree is a fork that bifurcates the
topic fairly sharply. It makes sense for us to start at that primary split
and note which limb Sam climbs.
Let's label the split with one branch marked as
moral realism and the other as its negation,
moral anti-realism.
The path of moral realism contends that there are such things
as objective moral truths.
This would mean that, all things being equal, a declaration like the following is objectively true.
It is morally better to give food to a starving creature than to withhold the food.
It would mean that it's possible for moral statements like this to be right or wrong.
And to take it even further,
it would mean that the truth of this moral statement would remain true even if everyone were wrong and confused about it. For a moral realist, a statement like
slavery was morally wrong is not simply a statement of opinion or the suggestion of a
distaste for the practice.
Instead, it's a contention that the argument has its foundations outside of culture, personal preference, or historical context, and that slavery was, is, and always will be
a moral wrong. In philosophical jargon, you could say that objectively true means that it is true from the view from nowhere.
You've likely already gathered that the other branch of the tree, the one labeled moral anti-realism, rejects the entire notion of objective statements in morality.
It contends that when it comes to moral statements, we don't have any path to access this so-called
view from nowhere, and that moral sentiment is always really a matter of evolved preference,
species bias, historical bias, or cultural bias. This branch of ethics declares that the quest for
a genuine foundation for our moral sentiments and emotions that rests outside of our biases will always result in failure,
and that ultimately, all moral sentiments are inescapably subjective,
no matter how convincing or widely accepted.
Before we go too much further, it's important to note that the outwardly expressed moral attitudes
and political positions of realists and anti-realists
can strongly cohere. It's entirely possible, even abundantly probable, to find both a realist and
an anti-realist arguing that slavery is morally wrong, and to find them both voting for the same
political proposition to outlaw the practice. The difference between the two philosophies
presents itself when they try to provide their deepest foundational basis for this moral judgment.
The realist claims that slavery being wrong is a kind of objective fact, not necessarily exactly
like the facts in mathematics or chemistry, but something a bit like them, or at least strongly informed
and dictated by those facts, strong enough to be elevated to a factual moral truth.
The moral anti-realist might agree that slavery is a moral wrong, but declare that ultimately,
the foundations for that judgment are anthropocentric biases, evolved emotions,
historical contexts, and strong moral instincts,
not anything like a scientific fact. One name you'll hear often in this compilation,
and in any discussion on this topic, is David Hume. Hume was a brilliant philosopher from
Scotland who did his writing in the 1700s. He formulated what has come to be known as
the is-ought distinction, which argued that you can't get an ought from an is. Or, to reword it
in philosophical hypothesis form, Hume argued that there is no description of the way the universe
is which tells us how the universe ought to be. This insight is what really fertilizes the entire
branch of anti-realism in the field of ethics. You may have already guessed that Sam very
confidently moves down the moral realism branch. And while he conceptually agrees with Hume's logic,
he considers the confusion that it's caused, and its resulting moral subjectivism and cultural relativism,
to be a kind of ethical and political emergency.
Sam asserts that Hume's is-ought insight has led many people to conclude that science really has nothing to say about morality.
The relativist argument suggests that because science pursues the is-side of Hume's distinction and morality pursues the
ought-side, questions of morality are completely divorced from science and are purely subjective
matters for which there is no objective arbiter. Sam points out that this attitude has rendered
many otherwise moral and intelligent people mute and blind when it comes to casting judgment on the moral behaviors
of others, and especially other cultures. Sam's approach to objective morality allows him to
escape this moral paralysis, and, as you can imagine, his resulting utterances have landed
him in hot water from time to time. Before we get to our first clip, it's also important to clear something up about Sam's
brand of moral realism early so we can avoid a common misperception. Sam's argument in favor
of moral realism does not imply that there is only one correct answer to a moral question.
It also does not imply that he knows the right answer. It's only a contention that there are such right answers,
or, more accurately, that there are right directions to move towards,
that it's possible to objectively compare the moral value of two states of being
and two states of the universe,
and that it is possible to have real, objective confidence in those moral assessments,
and that it's therefore possible to make genuine moral progress. But, and this is the very delicate part,
it is entirely possible that one must move away from that right direction in order to navigate
towards a higher peak of moral states. This is the wrinkle that starts to paint his moral landscape
as a kind of mountain-hiking adventure
with endless peaks and valleys,
foggy hilltops, dangerous caverns,
canyons, wrong turns, impassable swamps,
and open, upward clearings.
What Sam argues is that morality,
when properly understood,
is a navigation problem which
requires ever-improving methods to draw better maps, manufacture accurate compasses, and
devise a good pair of binoculars so that we can have confidence that we are climbing to
higher and higher ground.
So when we brought up our first example to show the split between moral realists and anti-realists,
the idea of feeding a starving creature rather than depriving it, we added a tiny
four-word phrase in passing to qualify it, all things being equal. But the funny thing about
our actual lives and real-world situations is that all things are almost never equal.
In an actual situation you might encounter in the world,
the food in question may be your last bites,
and you'll starve to death if you feed the creature.
Or there may be several starving creatures in front of you,
and you only have enough food for one of them.
Or maybe this creature will devour two other healthy creatures if you feed it.
Adding wrinkles like this and playing with all of these crazy variables tends to make things unequal and morally complex.
But in an effort to distill and expound upon different moral frameworks
and discover psychological and philosophical insights,
philosophers and writers have been conjuring up fun
and sometimes diabolical
thought experiments in situations like this to try to flatten or equalize certain elements
and isolate others.
We'll be hearing some fun thought experiments, and some not-so-fun ones, throughout this
compilation.
So, let's get to our first clip and introduce a famous thought experiment that we'll be
returning to frequently. The clip is a conversation with Australian philosopher Peter Singer, who at this
point seems to have the descriptor of world's most influential living philosopher as a permanent
addendum to his name. We'll begin with what has become a famous, simple thought experiment that
Singer used in 1971 in Philosophy and Public Affairs,
an academic journal that was little known at the time. The thought experiment goes like this.
Imagine you have just purchased a nice pair of new shoes, and you're walking by a pond.
You know this pond well, and you know its depth and probable dangers.
It's very shallow. It only comes up to your waist.
Suddenly, you see a small child in the pond, flailing for her life and struggling.
She's clearly in distress and in imminent danger of drowning.
Do you run into the pond and rescue her, knowing that you will muddy your shoes and certainly ruin them?
If you're waiting for a more complicated or challenging choice, it's not coming. That's the whole story, and that's the whole thought experiment. Nearly everyone responds by saying,
of course I run into the pond, who cares about the shoes? Now, Singer takes that answer and
suggests that we, and he's speaking mostly about those of
us in the affluent world, that we are all the time in a very similar moral position as the
pedestrian walking by the pond. Let's say that the shoes cost $90, and let's also say you already
had a pair of perfectly usable shoes at home. This purchase was a luxury.
Go back to the moment when you were at the shoe store and looking at them on display.
What if, instead of making that purchase,
you knew that you could donate that $90 to a charity
which had displayed solid data that it could use that money,
with a very high degree of probability,
to save the life of a child in Eritrea
who would otherwise soon die.
Is choosing to purchase the shoes anyway a choice that is morally equivalent to strolling past the
drowning child and keeping your new shoes shiny and clean while she drowns in front of you?
This arresting question has spawned a swarm of responses, supportive movements, clever challenges, creative edits, defeated frustrations, and counter-considerations.
We'll be playing with Singer's shallow pond a good bit throughout this compilation to flesh out Sam's take on it, and his particular run at the eternally vexing problem of morality.
An obvious distinction to draw between the moment at the pond versus the moment at the shoe store is something like an act of omission versus an
act of commission. In other words, is there a difference between failing to act and choosing
to act if they result in the same moral outcome? Let's jump into the first clip, where Sam is speaking with Peter Singer in episode 48,
What is Moral Progress?
Is there an important moral distinction
between acts of omission and acts of commission?
We certainly act as though there were.
So how does your famous shallow pond example put some pressure on this here?
So how do you think about the difference between not saving a life that would be very easy for you to save and taking one actively?
And this obviously also relates to end of life considerations of the sort you mentioned,
the difference we seem to hold on to between removing life support and passively letting someone die versus actively killing them, which in many cases might be the more merciful thing to do.
Yes. So my view is that the distinction between killing and letting die or between acts and omissions, it's put in different ways, is not itself of great intrinsic significance.
It may be a marker for other things of more significance, like it may be a marker for
motives, for instance. So if somebody would say to me, suppose I say, look, you should give to
this effective charity, let's be specific. You should give to the Against
Malaria Foundation because it will distribute bed nets in places where there's a lot of malaria
and where children die from malaria. And if you donate what I know you can afford to donate to
the Against Malaria Foundation, they will use it to distribute bed nets and you will be saving at
least one child's life. And that's factual. I think that is a real organization and a real example. And let's say the person doesn't do that, right? So then that
person has, in one sense, let a child die. Do I think of that person exactly the same as somebody
who traveled to Africa, shot a small child and then traveled back to the United States? Of course not. I know that there's a huge psychological
difference in that person that many of us are apathetic or don't care enough, don't feel
psychologically drawn to help people who we can't even see. But for someone to actually have the
malice and the will to travel, to find a child, to kill that child, it has to be a completely horrible, depraved person.
So sometimes the distinction between acts and omissions will signal something like that.
Why did this person go out of their way to kill?
Whereas in the other case, they simply didn't do enough to save a life.
But then let's look at another case,
the medical case that you mentioned. So an infant has been born prematurely and has had a very
severe bleeding in the brain, a hemorrhage. The doctors do a scan of the brain. They find that
all of the parts of the brain that are associated with consciousness,
like the cortex, have been irreversibly destroyed. Now, there's two possible things that might happen
in these circumstances. One might be that the doctors, after discussion with the parents,
say, look, your child really has a hopeless future. They'll
survive if we continue to treat them, but they'll just lie in bed all day and never be able to
communicate with anyone, probably never have any conscious experiences at all, have to be fed
through a tube and so on. And the doctors will then say, and the parents will usually agree,
so we could withdraw the respirator. Your baby is too small
to breathe on his own. We can withdraw the respirator and your baby will die. And parents
will typically say, if you think that's best doctor, then I'm okay with that. And the baby
will die. Now that is seen as a letting die, as an allowing to die, not as a killing. On the other
hand, it might have happened that because it took some
time to carry out the diagnosis because the baby was particularly vigorous and so on, that the baby
no longer needs a respirator. So the prognosis is exactly the same. The baby is never going to
communicate in any way, probably never going to be conscious. He's going to have to be fed through a
tube and lie on a bed. But you can't bring about the baby's death by withdrawing the respirator. And let's just say that there's
nothing else you can do that will bring about the baby's death. The baby is otherwise, apart from
this massive and irreparable brain damage, the baby is otherwise healthy. Now, I think that if
you're prepared to say that it was justifiable to withdraw the respirator, you ought to be prepared to say it would be justifiable to give the baby a lethal injection so that
the baby dies without suffering.
There is no moral difference.
In both cases, you know exactly what the consequences of your action will be.
In both cases, your intention is to bring about the death of the child.
Your motivation is equally, I would say, equally good, equally
reasonable in both cases. So the means is really irrelevant. But legally, of course, one is murder
and the other is, well, maybe it's slightly gray in some countries, but anyway, it's done in every
neonatal intensive care unit in every major city
in the United States, and nobody ever gets prosecuted for it. So it seems to be legally
acceptable. But that's, as I say, that's a case where I would think we ought to be able to accept
active steps on the basis of saying it's no different from the other case.
And certainly there are cases where the active step is the one that bypasses an immense amount
of suffering, right? Where the passive one may... Absolutely. That's right. And so other cases where
there is some consciousness, not exactly the case I described, but there is some consciousness.
I do know of cases where people will say, you know, no, we can't actually take
active steps to end life. But if the baby gets pneumonia, we won't give antibiotics. And so then
the baby will suffer a lingering death from pneumonia over days or maybe even a couple of
weeks, you know, which is a horrible thing and a pointless thing to do if you decided that it's
better that the baby should die. You know, why let the baby suffer in this way? I want to go back to the issue of the shallow pond.
So you admit that there's a difference. It would take a very different sort of person
to go to Africa with the intention of killing someone than merely decline to buy a bed net
when told on good information that this would save a human life.
Those are very different people, but I think you're saying that it's natural for us to view
them as different, and because it requires actually a different psychology to do one
versus the other, they are different. But if we abstract away from those differences and talk about public policies and what governments
should do, then the act and omission difference shouldn't be morally salient to us anymore.
Is that where you're headed with that?
I'm not going to say that it shouldn't be at all morally salient because there are questions
in what governments do in terms of the examples that they set.
But I do think it's very serious that governments allow people to die
when they could prevent them, when they have the resources to prevent them.
And so I certainly think that the governments of the wealthier nations
of the world should be getting together and developing policies
to eliminate preventable child deaths and preventable suffering from diseases.
They did make a reasonable effort in terms of the Millennium Development Goals to reduce
suffering, and progress was made.
The number of children dying fell quite significantly during that period, as did the number of people
in extreme poverty.
And that's a good thing.
But I'm concerned whether sufficient progress is continuing to be made.
I think more progress could have been made even in that period, although some progress
was made.
And I think we should be doing more.
And that applies to governments, but it also applies to individuals.
I think all of us who can afford to donate to effective charities
ought to be doing that because the governments are not doing enough.
How do you view the ethical significance of proximity, if there is any? I mean,
obviously there's an immense psychological significance that the starving person on my
doorstep is different, certainly more salient
than the starving person in a distant country whose existence I know about, at least in the
abstract. Presumably you think that that difference is far bigger than it should be, but is there any
ethical significance to proximity, the problem in your backyard as opposed to the problem an ocean away?
Well, I'd say not to proximity in itself. Again, we can perhaps be more confident about what we're
achieving when things are in our backyard and we actually can see what's happening. We can talk to
the people who are affected by it. But we do have very good research now about effective nonprofit organizations that are trying to help people far away. There are organizations like GiveWell that do research on effective charities. There's an organization I founded called The Life You Can Save, and it has a website which lists charities that we've vetted. And some of it draws on GiveWell's research,
some of it draws on other research,
so that we recommend effective charities.
And if you can have a high level of confidence
in the effectiveness of what you're doing,
then it's not very different morally.
As you correctly said, it is very different psychologically,
but morally it's not very different
from things that are going on in your backyard. Given that it is so different psychologically,
I mean, presumably if I told you that there's a starving person by my front door today that I
just stepped over on the way to this podcast because I was, you know, I'm busy, you would
view me with something close to horror and repugnance and would be right to. But if I
told you that I got yet another appeal from a good charity, which I didn't act on, you would
just view me as a more or less psychologically normal, if somewhat aloof person. Do you view
our moral progress personally and collectively as a matter
of collapsing that distance as much as psychologically possible so that we really
can't put distance suffering out of sight and out of mind?
Yes, I do think that's an indicator of progress. The psychology is understandable, of course.
Our ancestors for millennia, for perhaps hundreds of thousands of years, if we could go back even to social primates before there were humans at all, these ancestors lived in small social groups, face-to-face groups, where they knew people and they would help others and cooperate with them in various ways. But they had no
relations, perhaps even to people who lived across the mountain range in the next valley.
And now suddenly, suddenly in terms of evolutionary time anyway, we live in a world where we have
instant communications, where we have very rapid delivery of assistance, where we have good ways of working out what is going to help
people most effectively. And our psychology has not changed rapidly enough to cope with this.
There's an interesting note about Singer's pond analogy and the idea that Sam raised about
evaluating the kind of person who
would stroll by a child drowning in a pond versus the kind of person who declines to donate to a
charity. Singer originally wrote The Pond Story in an essay about a mass humanitarian crisis in
East Bengal in 1971, spurred on by a civil war and a devastating cyclone. He presented the pond to argue for the
presence of a moral opportunity, and perhaps for a moral obligation, of wealthy countries to
intervene with food, shelter, and rescue. We can map that same character analysis that Sam suggested
onto the national level and ask, what kind of country declines to help? A screaming child drowning in a pond is an emergency,
but the slow drip of individual preventable deaths
from hunger, illness, and poverty,
and spread across entire continents, does not seem to present itself in that way
or to expose the kind of people we are.
But shouldn't it?
What if we gathered all of those individuals
into one location, like a sports stadium,
and announced that a bomb would kill them all at midnight
unless we defused it, which we could easily do?
That edit sounds extreme,
but it only gathers the location of these preventable deaths to the same venue,
and it makes explicit the imminence of their demise.
Somehow that makes it feel more like a newsworthy emergency
that only a moral monster would ignore.
But again, Singer argues that this may actually be the situation
that most of us are in
today, if we only bothered to notice it. This is the deeply challenging work that the pond analogy
does. So let's stay with that last thread from Sam and Singer's conversation of proximity,
and the tension between psychology and moral philosophy. Like all moral dilemmas and thought experiments,
you can start to tinker with the variables in certain ways
that are designed to highlight how your moral intuitions might shift with each edit.
For example, replay the pond analogy.
But this time, you see five children drowning instead of one.
They're all at different distances from you, spread throughout the pond.
You're quite certain that in the time it will take you to reach and rescue one of them,
the other four will drown and die.
Which do you go for?
Assuming you're still willing to ruin your shoes.
Maybe you decide that flipping a coin is the best method.
But what if one of the children happens to be your child?
Do you go for her no matter what,
wading past the cries for help from an unfortunate, unknown child?
How about if you knew all of the struggling children,
and you know that one of them has a terminal illness
and is unlikely to live another year anyway?
Do you avoid going for that child?
What if one of the children
is known to be showing signs of being a scientific prodigy, and there are high hopes for her future,
and she's likely to be a great benefit to humanity? What if you think all of these factors
are just too vulgar, and you simply go to whichever one happens to draw you first while
you close your eyes? Would that method favor the child who happens to yell the loudest?
If we keep our eyes open and just follow our instinct, would we inevitably end up being
drawn towards the child who's the cutest?
Or even the child who looks a little like us and reminds us of our kin?
We can keep playing these kinds of games forever.
We could even make it nearly identical to the famous trolley problem,
the thought experiment which ties five people to a railroad track
while one person is fastened to a separate track.
In that now well-known nightmare,
you're given the choice to divert an out-of-control trolley towards the one
rather than the five by flipping a switch.
In our pond, we can imagine that four
of the children are clinging to a rapidly deflating life raft, and they could all grab
hold of it and be dragged to safety by you, while one isolated child is drowning by himself a hundred
feet away. Is there a right choice for problems like these? We're going to go to our second clip
to focus on the suggestion that there are right answers to these questions.
This guest will argue that our intuitions lead us to actions that are compromised by our evolved psychological biases to favor creatures with which we can empathize.
This brings us back to the issue of proximity. It's certainly easier to empathize with someone who's close enough to be in our visual and auditory field and whose screams we can hear, rather than a distant, nameless,
faceless, voiceless child. We know that it's also easier to empathize with a single child whose name
and story we know over a huge number of distant, nameless children who you'll never meet. In fact, as you'll hear Sam and this next guest point out,
this specific aspect of our psychology is even more curious,
where our ability to empathize with a specific starving child is reduced
when you simply place the same child amongst the company of thousands of others just like him.
The next guest is Paul Bloom, a professor of psychology
formerly of Yale University and now with the University of Toronto. Bloom has had several
wonderful conversations with Sam, and this is their first, which came just after the release
of Bloom's book with the provocative title Against Empathy. In it, he argues that our much-ballyhooed capacity for
empathy is not the clean moral panacea which it is sometimes advertised to be. In fact,
it may often be more of a bug than a feature when it comes to our moral reasoning.
Here is Sam with Paul Bloom from episode 14, In Cold Blood.
You've come down very much on really a side of a controversy that most people didn't even know existed,
which is that empathy in many cases is harmful and is not a good piece of software if you want to be a reliable moral actor in normative terms.
So tell me about what you've said about empathy, and let's get into the details.
So I always have to begin with the most boring way ever to begin anything,
which is we're talking about terminology.
Because people use the term empathy in all sorts of ways.
And I think my position is easily misunderstood.
Some people use empathy just as a word referring to anything good: compassion, care, love, morality, making the world a better place, and so on. Under that construal of empathy, I have nothing against it. I'm not a monster. I mean, I want to make the world a better place. Other people use the term empathy very narrowly, to refer to understanding in a cold-blooded way what's going on in the minds of other people, understanding what they think and what they feel. And I'm not against that either, though, and we might want to talk about this. I think it's morally neutral. I think very great and wonderful and kind people have this sort of cognitive empathy, if you want to call it that. But so do con men, seducers, and sadists. One reason why bullies are very good at being bullies is that they
exquisitely understand what's going on in the heads of their victims. Yeah, yeah. That's often
misunderstood, by the way. We should just footnote that, that this form of cognitive empathy that
you've just distinguished from the other form that you're about to describe is something that
psychopaths have in spades.
When we talk about psychopaths being devoid of empathy, it's not the empathy that allows us to understand another person's experience. That is not something that prototypically evil people
lack. In fact, as you just said, they use this understanding to be as successfully evil as they
can be. That's exactly right. So, you know, another term for cognitive empathy is social intelligence.
And I like that way of talking because it captures the point that intelligence is an
extraordinary tool.
Without it, you know, we couldn't do any great things.
But in the hands of somebody with malevolent ends, intelligence could be used to make them
a lot worse.
And I think that social intelligence
is exactly like that. Mind reading, another term for it, is a tool that could be used any way you
want it. And the very best people in the world have tons of it, and so do the very worst people
in the world. So the sense of empathy I'm using, and this actually matches how most psychologists and most philosophers use the term, is empathy in the sense of what Adam Smith and David Hume and
other philosophers call sympathy. And what it refers to is feeling what other people feel.
So if you're in pain and I feel empathy for you, I will feel to some degree your pain. If you're
humiliated, I will feel your humiliation. If you are happy, I will feel your happiness.
And you could see why people are such fans of this. It brings me closer to you. It dissolves
the boundaries between me and you. And there's a lot of psychological research showing that if I
feel empathy towards you, I'm more likely to help you.
Dan Batson has done some wonderful studies on them, and I don't contest that at all.
But the problem with empathy, and one of the problems of empathy, there are many, but the
main problem is it serves as a spotlight.
It zooms me in on a person in the here and now.
And as a result, it's biased, it's parochial, it's short-sighted, and it's innumerate.
One way I put it is, it's because of empathy that governments and societies care so much more
about a little girl stuck in a well than about millions or more people suffering and dying through climate change. It's because of empathy,
at least in part, that we freak out and panic over mass shootings, which, however horrible,
are a tiny proportion of gun homicides in America, 0.01% roughly. I mean, so if you ask people,
they would say mass shootings are the most terrible
things there are. And, you know, I live in Connecticut. Newtown's not that far away.
After the Sandy Hook killing, people, including me, were deeply upset. But intellectually,
if you could snap your fingers and make all the mass shootings go away forever,
and then you did that, nobody would know based on the homicide numbers, because the number is so tiny.
So it misdirects us.
It causes us to focus on the wrong thing.
It causes us to freak out at the suffering of one and ignore the suffering of 100.
And in one of your books, I forget which one, you talk about the study where we care more
about one than about eight.
And you say something to the effect of, if there's ever a non...
That's Paul Slovic's work.
That's right.
That's right.
Some wonderful studies.
And also somebody named Ritov and other investigators have done this since.
And you described this, that if there's ever a non-normative finding in psychology, that's it.
And so I think there's many more examples like this that we could say, we could look and say
as rational people, well, you know, a black life matters as much as a white life. The life of an
ugly person who doesn't inspire my empathy matters just as much as a beautiful person who does.
And the lives of a hundred matter more than the life of one, especially, and this is the amazingly non-normative finding from Slovic's work, if those hundred include the one you were caring about. So you can set up this paradigm where you show a reliable loss of concern when you add people
to the group. So you start with one little girl whose story is very emotionally salient and
people care about her to a maximal degree, and then you add her brother to the story and people
care a little less, and then you add eight more people to the story, keeping the same girl, and people's care just drops off a cliff. That's truly amazing. It's not one attractive girl
versus a hundred faceless people. It can be the one attractive girl along with the hundred,
and you care less. It's a magnificent and horrible finding. And, you know, I've long championed the forces of reason and rationality in moral judgment. I think, far more than many social psychologists do, that we're capable of that.
And so there's an interesting duality here. On the one hand, our gut feelings push us towards
the one girl and not the hundred, even if the hundred includes the girl. On the other hand,
we're smart enough to recognize when we put it in this abstract way,
that that's a moral mistake. In some way, you could view the moral mistakes caused by empathy
as analogous to the mistakes in rationality that people like Danny Kahneman have chronicled,
where you see people just, you know, you get these puzzles and you ignore the base rates,
you get things all messed up. And then when you step back
and look at it and do the math, you realize, wow, that was a mistake. My gut led me in the wrong way.
Visual illusions are another case. It looks this way, but it isn't. You take out the ruler and you
measure it. And although the lines look like they're different lengths, they're the same.
So we have this additional capacity to do this, both for things that connect to the external world, like vision, but also for morality, where we have standards of reason and consistency.
And we could use this to say, wow, our empathy is pushing us in the wrong direction.
Yeah.
So now, do you see us correcting for this in a way that is adequate to the magnitude
of the moral error, or is our way of correcting for it more haphazard than that?
Our way of correcting this is always haphazard, but the analogy I make is with racism.
So we know we have racist biases. Many of us have explicit racist biases,
but there's a lot of evidence for implicit racial biases, biases that we don't know we have even,
but that influence us in all sorts of ways. So what do
you do? So suppose if you think racism is okay, then there's not a problem. But suppose, you know,
as you and I do, we think racism is wrong. So what do you do about it? Well, the answer is not you
try harder. You know, we know trying hard doesn't work for these sort of biases, but there are
different sorts of fixes. So, in fact, for biases there are often technological fixes. One story, and this may be apocryphal but it's a good story, is that symphony orchestras were heavily biased in favor of men, because the people making judgments, who were both men and women, said men just sound better, they have stronger, more powerful styles. So what they did was they started auditioning people behind a screen.
And then the sex ratio became more normal.
So this is an example of you've got a bias, you don't like it,
and so you try to fix the world so it doesn't apply.
And I can imagine similar things happening with empathy,
where you change laws and policies so that empathy
plays less of a role.
Bloom and Sam largely agree on a lens for morality: that we ought to work to discover
and then mitigate our worst built-in psychological impulses when making moral decisions.
But to underline the distinction between what Bloom was calling cognitive empathy
and other forms of concern for others, let's note the subtitle of Bloom's book, which is
The Case for Rational Compassion. Bloom's argument is a fascinating counter to the common advice to
trust your gut when it comes to difficult moral decisions.
Gut versus reason, or heart versus head, are colloquial phrases that people use to express the tense boundary between our evolved instincts and our rational moral reasoning. We're going to
continue to trace our way along that boundary in this compilation, and this time we'll jump back
towards the philosophical side of things.
To return to our initial taxonomies of moral philosophy, we should zoom in on another major fork in the tree. This is the split between consequentialism and deontology.
If you'd like to continue listening to this conversation, you'll need to subscribe at
samharris.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast. And you can subscribe now at SamHarris.org.