Making Sense with Sam Harris - #150 — The Map of Misunderstanding
Episode Date: March 12, 2019
Sam Harris speaks with Daniel Kahneman at the Beacon Theatre in NYC. They discuss the replication crisis in science, System 1 and System 2, where intuitions reliably fail, expert intuitions, the power of framing, moral illusions, anticipated regret, the asymmetry between threats and opportunities, the utility of worrying, removing obstacles to wanted behaviors, the remembering self vs the experiencing self, improving the quality of gossip, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Transcript
To access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only
content. We don't run ads on the podcast, and therefore it's made possible entirely
through the support of our subscribers. So if you enjoy what we're doing here,
please consider becoming one.
Welcome to the Making Sense Podcast.
This is Sam Harris.
Well, today I'm bringing you the audio from my live event with Danny Kahneman at the Beacon Theater in New York a couple of weeks back.
This was a sold-out event in a very cool old theater.
I'd actually never been to the Beacon before,
but it has a storied history in music and comedy.
Anyway, it was a great pleasure to share the stage with Danny.
Daniel Kahneman, as you may know,
is an emeritus professor of psychology at Princeton University and also an emeritus professor of public affairs
at Princeton's Woodrow Wilson School of Public and International Affairs.
He received the Nobel Prize in Economics in 2002 for the work he did on decision-making
under uncertainty with Amos Tversky. Unfortunately, Tversky died in 1996, and he was a legendary
figure who would have certainly shared the Nobel Prize with Danny had he lived
longer. They don't give the Nobel posthumously. In any case, I think it's uncontroversial to say
that Danny has been the most influential living psychologist for many years now, but he's perhaps
best known in the general public for his book, Thinking, Fast and Slow, which summarizes much of the work he did
with Tversky. Michael Lewis also recently wrote a biography of the Kahneman-Tversky collaboration,
and that is called The Undoing Project. Anyway, Danny and I covered a lot of ground at the Beacon.
We discussed the replication crisis in science, systems one and two, which is to say automatic
and unconscious cognitive processes, and more conscious and deliberative ones. We talked about
the failure of intuition, even expert intuitions, the power of framing, moral illusions, anticipated
regret, the asymmetry between threats and opportunities,
the utility of worrying, removing obstacles to wanted behaviors, the remembering self versus
the experiencing self, improving the quality of gossip, and many other topics. Anyway, Danny has
a fascinating mind, and I think you'll find this a very good introduction
to his thinking.
Of course, if you want more, his book, Thinking, Fast and Slow, also awaits you if you haven't
read it.
And now I bring you Daniel Kahneman.
That's unusual.
Well, well, thank you all for coming.
Really an honor to be here.
Danny, it's a special honor to be here with you, so thank you for coming.
My pleasure.
Thank you.
It's often said and rarely true that a guest needs no introduction,
but in your case, that is virtually true.
We're going to talk about your work throughout, so for the one person who doesn't know who you are,
you will understand at the end of the hour.
But I guess by way of introduction, I just want to ask,
what is the worst thing about winning the Nobel Prize?
That's a hard question, actually. There weren't many downsides to it.
Okay, well, nobody wants to hear your problems, Dan.
So how do you think about your body of work?
How do you summarize the intellectual problems you have tried to get your hands around?
You know, it's been just a series of problems that occurred that I worked on.
There was no big program. When you look back, of course,
I mean, you see patterns and you see ideas that have been with you for a long time, but there was really no plan. I was, you know, you follow things, you follow ideas, you follow things
that you take a fancy to. Really, that's a story of my intellectual life. It's just one thing after another.
Judging from the outside, it seems to me that you have told us
much of what we now think we know about cognitive bias and cognitive illusion.
And really, the picture is of human ignorance having a kind of structure.
It's not just that we get things wrong.
We get things reliably wrong. And because of that, whole groups, markets, societies can get
things wrong because the errors don't cancel themselves out. I mean, bias becomes systematic.
And that obviously has implications that touch more or less everything we care about.
I want to track through your work,
as presented in your now famous and well-read book,
Thinking, Fast and Slow.
And I just want to try to tease out what should be significant for all of us at this moment.
Because human unreason, unfortunately,
becomes more and more relevant, it seems.
And we don't get over
these problems. And I guess I wanted just to begin to ask you about a problem that's very close to
home now, what is called the replication crisis or reproducibility crisis in science, in particular
social sciences and in particular psychology. And for those in the room who are not aware of what has happened and how dire it is, it seems that when you go back to even some of the most
celebrated studies in psychology, their reproducibility is on the order of 50, 60%
in the best case. So there was one study done that took 21 papers from Nature and Science, which are the most
highly regarded journals, and reproduced only 13 of them. And so let's talk about the problem
we faced in even doing science in the first place. Well, I mean, you know, the key problem and the
reason that this happens is that research is expensive.
And it's expensive personally, and it's expensive in terms of money.
And so you want it to succeed.
So when you're a researcher, you know what you want to find.
And that creates biases that you're not fully aware of.
And I think a lot of this is simply self-delusion.
That is, you know,
there is a concept that's known as p-hacking, which is people very honestly deluding themselves about what they find. And there are several tricks of the trade that, you know, people know about
them. You are going to do an experiment. So instead of having one
dependent variable where you predict the outcome, you take two dependent variables. And then if one
of them doesn't work, you stay with the one that does work. You do that and things like that a few
times, then it's almost guaranteed that your research will not be replicable. And that happens.
It was first discovered in medicine.
I mean, it's more important in medicine than it is in psychology.
Somebody famously said that most published research in medicine is false,
and a fair amount of published psychological research is false, too.
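To make the selective-reporting trick concrete, here is a minimal simulation of the mechanism Kahneman describes; it's an illustration added for this transcript, not something from the event, and the sample size and alpha level are arbitrary choices. With no real effect anywhere, a researcher who measures several outcomes and reports whichever one clears p < 0.05 pushes the false-positive rate from 5% toward 1 - (1 - 0.05)**k:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rate(n_outcomes, n_sims=10_000, n_per_group=30, alpha=0.05):
    """Simulate null experiments (no real effect anywhere) in which the
    researcher measures n_outcomes dependent variables and reports the
    first one that reaches p < alpha."""
    hits = 0
    for _ in range(n_sims):
        for _ in range(n_outcomes):
            treatment = rng.normal(size=n_per_group)
            control = rng.normal(size=n_per_group)
            if stats.ttest_ind(treatment, control).pvalue < alpha:
                hits += 1
                break  # stay with the outcome that "works"
    return hits / n_sims

for k in (1, 2, 5):
    print(f"{k} outcome(s): false-positive rate ~ {false_positive_rate(k):.3f}")
# Approximately 0.05, 0.10, and 0.23, close to 1 - (1 - 0.05)**k.
# (Assumes independent outcomes; correlated outcomes inflate the rate less.)
```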
Yeah, but even some of the most celebrated results in psychology, like priming and the
marshmallow test and...
Well, in the...
Yeah, I mean, it's not only...
It's actually...
They get celebrated in part because they are surprising.
Yeah.
And the rule is, you know, the more surprising the result is, the less likely it is to be true.
And so that's how celebrated results get to be non-replicable.
Right. Well, and the scariest thing I heard, I don't know how robust this study was,
but someone did a study on trying to replicate unpublished studies and found that they replicated better than published studies.
Did you hear this?
I don't think that's replicable.
Oh, yeah, okay.
Let's talk about system one and two.
These are the structures that give us so much of what can be a dispiriting picture of human rationality.
Summarize for us, what are these two systems you talk about?
Well, I mean, before starting with anything else,
there are clearly two ways that ideas come to mind.
I mean, so if I say two plus two, then an idea comes to your mind.
You haven't asked for it.
You're completely passive, basically.
Something happens in your
memory. If I ask you to multiply, you know, 24 by 17 or something like that, you have to work to get
that idea. So it's that dichotomy between the associative, effortless, and the effortful.
And that is phenomenologically obvious. You start from there.
And how you describe it and whether you choose to describe it in terms of systems,
as I did, or in other terms, that's already a theoretical choice. And in my view, theory is less important than the basic observation, you know,
that there are two ways for ideas to come to mind. And then you have to
describe it in a way that will be useful. And what I mean by that is you have to describe the
phenomena in a way that will help researchers have good ideas about facts and about experiments to run. And as for system one and system two,
it's not my dichotomy, and not even my terminology. And in fact, it's a terminology that many people
object to, but I chose it quite deliberately.
What are the liabilities? Because in your book, you try to guard against various misunderstandings of this.
Well, yes. I mean, you know, there is a rule that you're taught fairly early in psychology,
which is never to invoke what are called homunculi, which are little people in your head
whose behavior explains your behavior, or explains the behavior of people. That's a no-no.
And system one and system two are really homunculi.
So I knew what I was doing when I picked those. But the reason I did was that system one and
system two are agents. They have personalities. And it turns out that the mind is very good at
forming pictures and images of agents that have intentions and propensities
and traits and they're active. And it's just easy to get your mind around that. And that's why I
picked that terminology, which many people find sort of objectionable because they're really not
agents in the head. It's just a very useful way to think about it, I think.
So there's no analogy to be drawn between a classical, psychological, even Freudian picture
of the conscious and the unconscious. How do you think about consciousness and everything that
precedes it in light of modern psychology? I mean, you know, it's clearly related in the sense that what I call system one activities, the automatic ones, one characteristic they have is that you're
completely unconscious of the process that produces them. You just get, you know, you get the results.
You get four when you hear two plus two. In system two activities, you're often conscious
of the process. You know what you're doing when you're calculating.
You know what you're doing when you're searching for something in memory.
So clearly, consciousness and System 2 tend to go together.
It's not a perfect correlation, you know, and who knows what consciousness is anyway.
But they tend to go together, and System 1 is much more likely to be unconscious and automatic.
Neither system is a perfect guide toward tracking reality, but system one is very effective in many cases.
Otherwise, it wouldn't have evolved the way it has.
But I guess maybe let's start with a picture of where our intuitions are reliable and where they reliably fail.
How do you think about the utility of intuition?
I'll say first about system one that our representation of the world, most of what we know about the world, is in system one,
so that we're going along in life producing expectations, and being surprised or not surprised by what happens.
All of this is automatic.
We're not aware of it.
So most of our thinking is system one thinking;
most of what goes on in our mind goes on without our being aware of it.
And intuition is defined as, you know, knowing, or rather thinking
that you know something, without knowing why you know it or where it comes from.
And it's fairly clear, actually, I mean, that's a digression, but there is a guy named Gary Klein,
a psychologist, who really doesn't like anything that I do.
And he is... How does your system one feel about that? I like Gary a lot, actually. But he believes in intuition and in expert intuition, and he's a great believer in,
and he has beautiful data showing, beautiful observations of expert intuition.
So he and I, I invited him actually to try and figure out our differences because obviously I'm a skeptic.
So where is intuition marvelous and where is it flawed?
And we worked for six years before we came up with something, and we published an article called A Failure to Disagree, because
in fact there is a fairly clear boundary about when you can trust your intuitions and when you
can't and I think that's summarized in three conditions. The first one is the world has to
be regular enough. I mean, first of all, intuition is recognition; Herbert Simon said that.
You have an intuition, and it's just like recognizing, you know, like a child recognizing what a dog is.
It's immediate.
Now, in order to recognize patterns in reality, which is what true intuitions are,
the world has to be regular enough so that there are regularities
to be picked up. Then you have to have enough exposure to those regularities to have a chance
to learn them. And third, it turns out that intuition depends critically on the time between
when you're making a guess and a judgment and when you get
feedback about it. The feedback has to be rapid. And if those three conditions are satisfied,
then eventually people develop intuition so that the chess players, chess is a prime example where
all three conditions are satisfied. So after, you know, many hours, I don't know, 10,000 or not, but many hours,
a chess player will have intuitions. All the ideas, all the moves that come to his or her mind
are going to be strong moves. That's intuition.
Right. So the picture is one where some intuitions are more innate than others. We're so primed to learn certain
things innately that no one remembers learning these things, you know, recognizing a human face,
say. But much of what you're calling intuition was at one point learned. So intuition is trainable.
There are experts in various domains, chess being a very clear one, that develop what we consider to be expert intuitions.
And yet much of the story of the blind spots in our rationality is a story of the failure of expert
intuition. So where do you see the frontier of trainability here? I mean, I think that what
happens is that when those conditions are not satisfied, people have intuitions too.
That is, you know, they have ideas that come to their mind with high confidence and they think
they're right. And so the main thing... I've met these people. Yeah, I mean, you know,
we've all met them, and we see them in the mirror. And, you know, it turns out you can have
intuitions for bad reasons. All it takes is a thought that comes to your mind
automatically and with high confidence, and you'll think that it's an intuition and you'll trust it.
But the correlation between confidence and accuracy is not high.
That's one of the saddest things about the human condition.
You can be very confident in ideas, and the correlation with accuracy just isn't there.
You shouldn't trust your confidence.
So that's just, yes, a depressing but fascinating fact
that the signature of a high probability that you are correct is what you feel
while uttering that sentence. I mean, psychologically, confidence is the marker of
your credence in whatever proposition it is you're entertaining. And yet, we know confidence and accuracy can become
totally uncoupled, and often are.
Given what you know or think you know scientifically,
how much of that bleeds back into your life and changes your epistemic attitude?
Mine personally?
Do you hedge your bets?
How is Danny Kahneman different given what he has understood about science?
Not at all.
Not at all?
That's even more depressing than I thought.
You know, in terms of my intuitions being better
than they were, no, and furthermore, I have to confess,
I'm also very overconfident.
So even that I haven't learned.
So it's hard to get rid of those things.
You're just issuing a long string of apologies?
I mean, how do you get through life?
Because you should know better.
If anyone should know better, you should know better.
Yeah, but I don't really feel guilty about it.
So I have stopped.
So how hopeful are you that we can improve?
How hopeful are you that an individual can improve?
And how hopeful are you that we can design systems of conversation and incentives that can make some future generation find us more or less unrecognizable in our stupidity?
I should preface by saying that I'm not an optimist in general, but I'm certainly not an optimist about those questions.
You know, I'm a case study, because I've been studying that stuff for more than 50 years, and I don't think I've improved. At best, I can catch myself, recognize a situation as one in which I'm likely to be making a mistake.
And this is the way that people protect themselves against visual illusions.
You can see the illusions, and there's no way not to see them.
But you can recognize that this is likely to be an illusion,
and tell yourself: don't trust my eyes, take out the ruler.
There is an equivalent.
You know, a similar thing goes on with cognitive illusions.
Sometimes you know that your intuitions, your confident thought is unlikely to be true.
That's quite rare.
It doesn't happen a lot. I don't think that I've become, in any significant way,
smarter because of studying errors of cognition.
Right.
Okay, let me just absorb that for a second.
What you must thirst for on some level
is that this understanding of ourselves
can be made useful or more useful than it is, because
the consequences are absolutely dire, right? I mean, our decision-making is, one could argue,
the most important thing on earth, certainly with respect to human well-being, right? I mean,
how we negotiate nuclear test ban treaties, right? I mean, like everything from that on down,
this is all human conversation, human intuition,
errors of judgment, pretensions of knowledge,
and sometimes we get it right.
And the delta there is extraordinarily consequential.
So if I told you that we, over the course of the next 30 years,
made astonishing progress on this front,
right? So that we, our generation, looks like, you know, bumbling medieval characters compared
to what our children or grandchildren begin to see as a new norm. How did we get there?
You don't get there. You know, it's the same as if you
asked me, will our perceptual system be very different in 60 years? I don't think so.
Let's take one of these biases or sources of bias that you have found. I mean, the power of framing,
right? We know that if you frame a problem in terms of loss or you frame the same problem in terms of gains, you get a very different set of preferences from people because people are so averse to loss.
So given the knowledge of that fact, let's say you're a surgeon and you're recommending, or at least proffering, a surgery for a condition to your patients, to whom you have taken, you know, a Hippocratic oath to do
no harm. And you know, because you read Danny Kahneman's book, that if you put the possibility
of outcome in terms of mortality rates versus survival rates, you are going to be moving
several dials in your patient's head one way or the other reliably. Can you conceive of us ever
agreeing that there's a
right answer there, like in terms of what is the ethical duty to frame this correctly? Is there a
correct framing or are we just going to keep rolling the dice? Well, I mean, this is a lot
of questions at once. In the first place, you know, when you're talking about framing,
the person who is subject to the framing... so you have a surgeon framing something for a patient.
First of all, the patient is going to be completely unaware of the fact that there is an alternative frame.
That's why it works.
It works because you see one thing and you accept the formulation as it is given.
So that's why framing works.
Now, whether there is a true or not true answer.
So let me mention the canonical problem, which actually my late colleague, Amos Tversky, invented.
So in one formulation, you have a choice between,
well, there is a disease that's going to cause 600 deaths unless something is done,
and you have your choice between saving 400 people
or a two-thirds probability of saving 600.
Or, alternatively, other people get the other framing: you have a choice between
200 people dying for sure and a one-third probability that
600 people will die.
Is there a correct answer? Is there a correct frame?
Now, the interesting thing is, depending on which frame you presented to them, people make very different choices.
But now you confront them with the fact that here you've been inconsistent.
Some people will deny it, but you can convince them this is really the same problem: if you save 400, then 200 will die.
And then what happens is they're dumbfounded, and there
are no intuitions. We have clear intuitions about what to do with gains;
we have clear intuitions about what to do with losses.
And when you strip the problem of that language
with which we have intuitions,
we have no idea what to do.
So, you know, what is better
when you stop thinking about saving or about dying?
Well, actually, I've forgotten
if that research was ever done;
I forget what the results were.
Has the third condition been compared to the first two? What do people do when you give them
both framings and dumbfound them? I mean, you know... Where do the percentages go?
This is not something that, you know, we've done formally, but I can tell you that I'm dumbfounded. That is, I have absolutely no idea. I have the same intuitions as everybody else. When it's in the gains, I want to save lives,
and when it's in the losses, I don't want people to die. But that's where the intuitions are.
When you're talking to me about 600 more people staying alive with a probability two-thirds,
or when you're talking about numbers of people living, I have absolutely no intuitions about that.
So that is quite common in ethical problems and in moral problems, that they're frame-dependent.
And when you strip the frames away, people are left without a moral intuition.
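As a quick check that the two frames quoted above really do describe the same problem, here is the arithmetic with the numbers as Kahneman stated them; this is an added sanity check, not part of the conversation:

```python
TOTAL = 600  # people at risk in the disease problem as stated above

# Gain frame: "save 400 for sure" vs. "a two-thirds probability of saving 600".
sure_saved = 400
risky_expected_saved = (2 / 3) * TOTAL   # 400 saved in expectation

# Loss frame: "200 die for sure" vs. "a one-third probability that 600 die".
sure_dead = 200
risky_expected_dead = (1 / 3) * TOTAL    # 200 dead in expectation

# Identical outcomes, different descriptions: saving 400 of 600 means 200 die,
# and a 2/3 chance of saving all 600 is a 1/3 chance that all 600 die.
assert TOTAL - sure_dead == sure_saved
assert TOTAL - risky_expected_dead == risky_expected_saved
```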
And this is incredibly consequential when you're thinking about human suffering.
So your colleague, Paul Slovic, has done these brilliant experiments
where he's shown that if you ask people to support a charity,
you talk about a famine in Africa, say,
and you show them one little girl attached to a very
salient and heartbreaking narrative about how much she's suffering, you get the maximum
charitable response. But then you go to another group and you show that same one little girl and
tell her story, but you give her a brother and the response diminishes. And if you go to another
group and you give them the little girl and her brother,
and then you say, in addition to the suffering of these two gorgeous kids,
there are 500,000 children behind them suffering the same famine,
then the altruistic response goes to the floor.
It's precisely the opposite of what we understand, in system two, should be
normative, right? The bigger the problem, the more concerned and charitable we should be.
So to take that case, there's a way to correct for this at the level of tax codes and levels
of foreign aid and which problems to target. We know that we are emotionally gamed by the salient personal
story and more or less morally blind to statistics and raw numbers. I mean, there's another piece of
work that you did which shows that people are so innumerate with respect to the magnitude of
problems that they will more or less pay the same amount whether they're saving 2,000 lives, 20,000 lives or 200,000 lives.
Because basically, and that's a System 1 characteristic, basically you're saving one life.
You're thinking you have an image, you have stories, and this is what System 1 works on.
And this is what emotions are about. They're about stories. They're not about numbers. So it's
always about stories. And what happens when you have 500,000, you have lost the story. The story,
to be vivid, has to be about an individual case. And when you dilute it by adding cases,
you dilute the emotion. Now, what you're describing in terms of the moral response
to this is no longer an emotional response. This is already, you know, cognitive
morality. This is not emotional morality. You have disconnected from the emotion; you know that it's better to save 500,000 than 5,000,
even if you don't feel better about saving 500,000. So this is passing on to system two.
This is passing on to the cognitive system the responsibility for action.
And you don't think that handoff can be made in a durable way? I think it has to be made
by policymakers. And policymakers, you know, we hire some people to think about numbers and to
think about it in those ways. But if you want to convince people that this needs to be done,
you need to convince them by telling them stories
about individuals because numbers just don't catch
the imagination of people.
What does the phrase cognitive ease mean in your work?
Well, it means that some ideas come very easily to mind
and others come with greater and greater difficulty to the point of...
So that's what cognitive...
It's also called fluency.
It's, you know, what's easy to think about.
And there is a correlation between fluency and pleasantness, apparently.
That pleasant things are more fluent, they come more easily.
Not always more easily, but yes, they're more fluent.
And fluency is pleasant, so there is that interaction
between fluency and pleasure, which I hope replicates.
So the picture I get is, I don't know if you referenced this in your book, I can't remember,
but what we know from split-brain studies:
that for the most part the left linguistic hemisphere confabulates.
It's continually manufacturing discursive stories that ring true to it.
And in the case of actual neurological confabulation, there's no reality testing going on. It's telling a story that is being believed. But it seems to me that most of us
are in a similar mode most of the time. There's a very lazy reality testing mechanism coming online. And it's just easy to take your own word for it
most of the time. I think this is really, as you say, this is a normal state. The normal state is
that we're telling ourselves stories. We're telling ourselves stories to explain why we believe in
things. More often than not, retrospectively, in a way that bears no relationship to the system one bottom-up reasons why we feel this way.
But for me, the example that was formative is what happened with post-hypnotic suggestions. So you put somebody under hypnosis and you tell them, when I clap my hands, you will feel very warm and you'll open a window. And you clap your hands
and they get up and open a window. And they know why they opened the window. And it has nothing to
do with the suggestion. They come up with a story: they felt really warm and uncomfortable
and they needed air, so they opened the window.
Actually, in this case, you know the cause.
The cause was the hands being clapped.
Is that gonna replicate?
That one replicates, I'm pretty sure.
You know, I hope so.
Yeah, I'm sure.
Do you have a favorite cognitive error or bias?
Yeah.
Which of your ugly children do you like the most?
Yeah, I think, I mean, it's not the simplest to explain,
but my favorite one is sort of extreme predictions.
When you have very weak evidence,
and on the basis of very weak evidence,
you draw extreme conclusions.
Technically, it's called non-regressive prediction,
and it's my favorite.
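For a rough sketch of what "non-regressive" means, with assumed numbers rather than anything cited on stage: if the evidence correlates with the outcome at r, the statistically defensible prediction, in standard-score units, shrinks toward the mean by a factor of r, while the intuitive prediction skips the shrinkage and is as extreme as the evidence itself.

```python
def regressive_prediction(evidence_z: float, r: float) -> float:
    """Predict the outcome's standard score from the evidence's standard
    score, shrinking toward the mean (z = 0) by the correlation r."""
    return r * evidence_z

# Hypothetical numbers: a dazzling impression (z = +2.0) from evidence that
# correlates with the outcome only weakly (r = 0.2 is an assumption).
intuitive = 2.0                                 # non-regressive: as extreme as the evidence
defensible = regressive_prediction(2.0, 0.2)    # 0.4, barely above average

print(f"intuitive prediction: {intuitive}, regressive prediction: {defensible}")
```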
Right, right.
Where do you see it appearing?
Is there an example of it that you have seen?
Oh, I mean, you see it all over the place, but one very obvious
situation is in job interviews. So, you know, you interview someone and you have a very clear idea
of how they will perform. And even when you're told that your ideas are worthless, because,
in fact, you cannot predict performance, or can predict it only very poorly,
it doesn't affect it.
The next time you interview somebody else, you have the same confidence.
I mean, that's something that I discovered very early in my career.
I was an officer in the Israeli army as a draftee. And I was interviewing candidates for officer training.
And I discovered that I had that uncanny power to know who would be a good officer and who wouldn't be.
And I really could tell, you know, interviewing people. I knew their character. You get that sense of, you know, confident knowledge. And then, you know, then
the statistics showed that actually we couldn't predict anything. And yet the confidence remained.
It's very strange. Right. Well, so there must be a solution for that. Some people following
your work must recommend that you either don't do interviews
or heavily discount them, right? Yeah, that's absolutely true. Don't do interviews, mostly.
Right. And don't do interviews in particular, because if you run an interview, you will trust
it too much. So there have been many cases, you know, studies,
I don't know about many, but there have been studies
in which you have candidates,
you have a lot of information about them,
and then if you add an interview,
it makes your predictions worse,
especially if the interviewer
is the one who makes the final decision,
because when you interview, this is so much more vivid than all the other information you have
that you put way too much weight on it.
Is that also a story about just the power of face-to-face interaction?
It's face-to-face interaction.
It's immediate.
Anything that you experience is very different from being told
about it. And, you know, as scientists, one of the remarkable things that I know is how much more I
trust my results than anybody else's. So, and that's true of everybody I know. You know, we trust our own results. Why? No reason.
All right, then let's talk about regret. Okay. What is the power of regret in our lives? How
do you think about regret? Well, I think regret is an interesting emotion,
and it's a special case of an emotion that has to do with counterfactual thinking.
That is, regret is not about something that happened,
it's about something that could have happened but didn't.
And I don't know about regret itself, but anticipated regret, the anticipation of regret, plays an important role in lots of decisions.
That is, there's a decision and you tell yourself, well, if I don't do this and it happens, then how will I feel?
That expectation of regret is very powerful, and it's well known in financial decisions and a lot of other decisions.
It's connected to loss aversion as well, right?
It's a form of loss, and it's quite vivid that you are able to anticipate how you will feel if something happens,
and that becomes very salient.
Well, does the asymmetry with respect to how we view losses and gains make sense, ultimately?
I think at some point in your work you talk about an evolutionary rationale for it,
because suffering is worse than pleasure is good,
essentially, because there's a survival advantage for those who are making greater efforts to avoid
suffering. But it also just seems like if you put in the balance of possibility the worst possible
misery and the greatest possible pleasure, I mean, if I told you we could have the night we're going to have tonight
and it will be a normal night of conversation,
or there's a part of the evening where I can give you
the worst possible misery for a half hour,
followed by the best possible pleasure.
Let's have our conversation.
Yeah, let's just get a cheeseburger and a Diet Coke.
The prospect of suffering in this universe seems to overwhelm the prospect of happiness or well-being.
I know you've put a lot of thought into the power of sequencing. I think I can imagine that
feeling the misery first and the pleasure second would be better than the reverse.
Much better.
But it's not going to be enough to make it seem like a good choice, I would imagine.
How do you think of this asymmetry between pleasure and pain?
Well, I mean, you know, the basic asymmetry is between threats and opportunities.
And threats are more immediate.
And so in many situations, it's not true everywhere,
there are situations where opportunities are very rare,
but threats are immediate
and they have to be dealt with immediately.
So the priority of threats over opportunities
must be built in by and large evolutionarily.
But do you think we could extract an ethical norm
from this asymmetry?
For instance, could it be true to say that it is more important to alleviate suffering than to provide pleasure?
If we had some way to calibrate the magnitude of each?
Well, we did a study, Dick Thaler and Jack Knetsch and I, a long time ago,
about intuitions about fairness.
And it's absolutely clear that that asymmetry rules intuitions about fairness.
That is, there is a very powerful rule of fairness that people identify with.
Not to cause losses.
That is, you have to have a very good reason to inflict a loss
on someone. The injunction to share your gains is much weaker. So there is that asymmetry: what we call
the rights that people have are quite frequently negative rights,
the right not to have losses inflicted on you.
So there are powerful moral intuitions that go in that direction. And the second question that
you asked, because that was a compound question about well-being, yeah, I mean, I think, you know,
in recent decades, there's a tremendous emphasis on happiness and the search for happiness and the responsibility of governments to make citizens happy and so on.
And one of my doubts about this line of work and this line of thinking is that I think that preventing misery is a much better and more important objective than promoting happiness.
And so the happiness movement, I have my doubts about on those grounds.
Given what you've said, it's hard to ever be sure that you've found solid ground here. So
there's the intuition that you just cited that people have a very strong reaction to imposed losses
that they don't have to unshared gains, right? You do something that robs me of something I
thought I had. I'm going to feel much worse about that than just the knowledge that you didn't share
some abundance that I never had in the first place. But it seems that we could just be a conversation
away from standing somewhere that makes that asymmetry look ridiculous, analogous to the
Asian disease problem, right? Like it's a framing effect that we may have an evolutionary story to
tell about why we're here, but given some opportunity to be happy in this world, it
could seem counterproductive. I say this already being anchored to your intuitions. I share this
intuition.
Yeah, I think that, you know, in philosophical debates about morality and well-being,
there are really two ways of thinking about it.
And one way is when you're thinking
of final states and what everybody will have.
And there, there's a powerful intuition
that you want people more or less to be equal,
or at least not to be too different.
But there is another way of thinking about it,
which is given the situation
and the state of society, how much redistribution do you want to impose? And there, there is an
asymmetry because you are taking from some people and giving it to others. And you don't get to the
same point. So we have powerful moral intuitions of two kinds, and they're not internally consistent.
And loss aversion has a great deal to do with that.
So given that there are many things we want and don't want,
and we want and don't want them strongly,
and we are all moving individually and collectively
into an uncertain future where there are threats and opportunities,
and we're trying to find our
way. How do you think about worrying? What is the advantage of worrying? If there was a way to just
not worry, is that an optimal strategy? I think the Dalai Lama most recently articulated this
in a meme, but this no doubt predates him. Take the thing you're worried about, right? Either
there's something you can do about it or not. If there's something you can do about it, well, then do that thing.
If you can't do anything about it, well, then why worry? Because you're just going to suffer twice,
right? How do you think about worry, given your work here?
Well, I don't think my work leads to any particular conclusions about this. I mean,
the Dalai Lama is obviously right. I mean, you know, why worry?
But...
Some people are gonna tweet that
and it's not gonna work out well for you.
On the other hand,
on the other hand,
I would like to see people worry
a fair amount about the future.
And even because you don't know right now
whether or not you'll be able to do anything about it.
Right.
I mean... Maybe worry.
The only way to get enough activation energy into the system to actually be motivated to do something...
Is to worry.
You know, one of the problems, for example, when you're thinking of climate change, one of the problems, you can't make people worry about something that is so abstract and distant.
Yeah.
And, you know, if you make people
worry enough, things will change. But scientists are incapable of making the public
worry sufficiently about that problem.
To steal a technique that you just recommended,
if you could make a personal story out of it, that would sell the problem much more effectively.
It's just climate change is a very difficult thing to personalize.
It's very difficult to personalize and it's not immediate.
So climate change is really the worst problem, in a way:
the problem that we're least well equipped to deal with, because it's remote, it's abstract,
and it's not a clear and present danger. I mean,
a meteorite, you know, coming to Earth, that would mobilize people. Climate change is a much more difficult problem to deal with. And worry is part of that story.
It's interesting that a meteorite would be different.
I mean, even if you put it far enough out there,
so you have an Earth-crossing asteroid in 75 years,
there would still be some counsel of uncertainty.
People would say, well, we can't be 100% sure
that something isn't going to happen in the next 75 years
that will divert this
asteroid. Other people will say, well, surely we're going to come up with some technology
that would be onerously costly for us to invent now, but 20 years from now could be trivially
easy for us to invent. So why steal anything from anyone's pocketbook now to deal with it?
You could run some of the same arguments,
but the problem is crystallized in a way that climate change...
The difference is there is a story about the asteroid.
You have a clear image of what happens if it hits.
And the image is a lot clearer than climate change.
So one generic issue here is the power of framing.
I mean, we are now increasingly becoming students of the power of framing,
but we are not...
We should just be able to come up with a list of the problems
we have every reason to believe are real and significant
and sort those problems by
this variable: this is the set of problems that we know we are very unlikely to feel
an emotional response to, right? We are just not wired to be motivated
by what we rationally understand in these areas. And then take the cognitive step
of deliberately focusing on those problems. If we did that, if everyone in this room did that,
what we're then left with is a political problem of selling this attitude toward the rest of the
world.
You use a tricky word there, and the word is we. Who is we?
I mean, you know, in that story, who is we?
So you're talking about a group of people, possibly political leaders,
who are making a decision on behalf of the population that, in a sense,
they treat like children who do not understand the problem.
I mean, it's quite difficult.
Surely you can't be talking about our current political leaders.
No, I'm not.
But it's actually, I find it difficult to see how democracies can effectively deal with a problem like climate change. I mean, you know, if I had to guess, I would say China is more likely to come up with effective solutions than the West
because they're authoritarian.
Okay, so is that an argument
for a benevolent dictatorship of some kind
to get us out of this mess?
If you'd like to continue listening to this conversation,
you'll need to subscribe at SamHarris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along
with other subscriber-only content, including bonus episodes and AMAs and the conversations
I've been having on the Waking Up app.
The Making Sense podcast is ad-free and relies entirely on listener support, and you can
subscribe now at SamHarris.org.