The Knowledge Project with Shane Parrish - #68 Daniel Kahneman: Putting Your Intuition on Ice
Episode Date: October 15, 2019. Psychologist and Nobel laureate Daniel Kahneman shines a light on the biases that cripple our decision-making, hamstring negotiations, and dampen our thinking, and shares what limited actions we can take to combat their effects. Go Premium: Members get early access, ad-free episodes, hand-edited transcripts, searchable transcripts, member-only episodes, and more. Sign up at: https://fs.blog/membership/ Every Sunday our newsletter shares timeless insights and ideas that you can use at work and home. Add it to your inbox: https://fs.blog/newsletter/ Follow Shane on Twitter at: https://twitter.com/ShaneAParrish Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Delay your intuition.
Don't try to form an intuition quickly, which is what we normally do.
Focus on the separate points.
And then when you have the whole profile, then you can have an intuition and it's going to be better.
Hello and welcome.
I'm Shane Parrish, and this is the Knowledge Project,
exploring the ideas, methods, and mental models that help you master the best of what other
people have already figured out. To learn more and stay up to date on new episodes, go to fs.blog
slash podcast. I also put together a weekly newsletter that I think you'll love. It's called
Brain Food, and it comes out every Sunday. Much like this podcast, it's high signal, low noise,
it's timeless, and mind-expanding. Read what you're missing at fs.blog slash newsletter.
Today I'm speaking with Daniel Kahneman, Emeritus Professor of Psychology at Princeton, who received
the Nobel Prize in economics in 2002 for the work he did on decision-making with Amos Tversky.
Danny is the most influential living psychologist, a true legend in his field, and this conversation
was a great honor. Publicly, he's probably best known for his book, Thinking Fast and Slow,
and his work on drawing attention to our cognitive biases. Our conversation revolves around how to make
better decisions, our intuitions, what, if anything, we can do to reduce our cognitive biases,
and how rules make great defaults. It's time to listen and learn.
Daniel, I'm so happy to get the chance to talk to you.
Well, I'm happy to have you here.
What was your childhood like?
What were you like as a child?
Oh, my God.
That was a long time ago.
I was an early child, as you might expect, I suppose.
I thought I'd be a professor when I was like three or four years old
because people told me I would be,
because I probably spoke with long words and stuff like that.
And then the rest of my childhood, I mean, I was five when World War II began,
and I was a Jew in France, so I had a difficult childhood from that point on.
But what was I like? Yeah, I was a nerdy child.
I was quite inept physically. Very fortunately for me, when I finally moved to Israel at age 12,
they held me up a grade, and then it was all right. But that's what I was like.
Are there any particular lessons or memories that stand out for you?
There are two of them that I speak about.
One is that I was a psychologist very early on.
That was very clear.
I wrote an essay before I was 11.
I remember when, because there was a German counterattack.
It was during that period we were in Paris.
And I wrote an essay about faith and religion.
And it was a very pompous essay.
I had a little book that was titled
What I Write About What I Think, something pompous like that.
But the essay started with another pompous thing:
I quoted Pascal.
My sister had passed her exams and,
you know, she had studied some Pascal and I had read it.
And Pascal had said that
Faith is God made sensible to the heart.
And, you know, little me, I said, how true.
That's what my essay said.
And then, but then I said, but faith is really hard to get.
You don't sense God all the time.
So that's what religious pomp is for.
Cathedrals, organ music, they give you, and I called that ersatz faith,
a sort of substitute faith, because it's a similar feeling. It's got to do with God, and that's what
you make do with. That's a psychologist. So it's clear that, you know, that was my calling.
And so that's one significant memory of my childhood.
So you were born to be a psychologist.
I think so. I think so. I mean, you know, I've always had
that point of view. Later, as a teenager, I was interested in all the philosophical issues
like, you know, does God exist and good and bad and stuff like that, and why shouldn't we
masturbate, you know, serious questions? But I discovered that actually I was less interested
in the question of whether or not God exists than in why do people believe that he exists.
that I thought was interesting
and I wasn't particularly
interested in the question of what's good or bad
but I was really interested in what makes
people angry and indignant
so I've had the
psychological point of view
since my childhood.
was there anybody that sort of influenced
you to go on
to study this? I mean it's one thing
to have these dreams as like a 12, 13
14 year old boy
it's another to turn this into
probably the most
eminent career that's ever happened for a psychologist?
No, not the most eminent career.
You know, and I wasn't sure, actually, that I would do psychology.
I took a vocational exam to tell me what I was good at,
and psychology and economics stood out, but, you know, that was unexpected.
And then I took psychology as an undergraduate, and mathematics,
at which I was not particularly good.
And no, it's not
that I knew at the time that, you know, I had that calling to be a psychologist. It didn't occur to me.
I thought, you know, I thought I'd be a professor in one thing or another. I mean, I thought
I'd be an academic, but not psychology specifically.
You worked with Amos Tversky for a long time. Are there any particular stories that
you remember about working with him that bring a smile to your face?
Almost everything about working with him brings a smile to my face.
You know, he was a very unusual person.
Most people who knew him thought that he was the smartest person they had ever met.
And in fact, the famous psychologist Dick Nisbett said that there's a sort of intelligence test:
when you're with Amos, how long does it take you to figure out that he's smarter than you are?
And the faster you figure that out, the smarter you are.
So, yeah, he was super bright and very, very funny.
He joked a lot.
He laughed a lot at his own jokes, and that was infectious.
When I was with him, I was very funny, too.
More than half of the laughs of my lifetime I've had during the ten years I worked with him.
You have an interesting distinction between happiness and satisfaction.
And can you walk us through that?
Yeah, sure.
I mean, the word happiness is so ambiguous
and it means so many things to many people.
But one sensible interpretation of it
is that it's got to do with your emotions,
with how you feel, with the emotional tone of your life,
whether it's a happy life.
You know, it's pleasant to be you.
Life satisfaction is a completely different thing.
I mean, life satisfaction is how you feel about your life when you think about your life.
And most of the time you don't think about your life, you just live.
But, you know, sometimes you sort of stop and look.
And that's when you determine how satisfied you are.
That's life satisfaction.
It's not satisfaction.
It's life satisfaction.
Should we balance the two, or how would you think about them?
Should we be more happy when we're younger, more satisfied when we're older?
That thought had never occurred to me when I began to work on this.
I started out thinking that happiness in that sense of how you feel when you live,
that that was reality and that life satisfaction was just stories that people tell themselves
and the important thing was to be happy in real time.
But later, when we did more research, it turned out that the
circumstances that make people happy and the circumstances that make them satisfied with their
life are not the same. So happiness is mostly social. It's, you know, it's being with people
who love you back. That's, that's a lot of what happiness is. Life satisfaction is much more
conventional. It's to be successful. And, you know, so it's money, education, prestige, that sort
of thing is what life satisfaction is about.
So those are two very different things.
I thought that life satisfaction is irrelevant.
You know, that's how I began.
And we had a research program where we were trying to, you know, to show that this is the case.
But then, after a few years, I realized what people really want in their life. They don't seem to care that much about how happy they'll be.
They seem to want to be satisfied with their life.
They seem to want to have a good story about their life.
And then I was in the position of defining well-being in a way that people
didn't seem to care particularly about.
So that was not a tenable position.
So I dropped back into saying that I had no idea how to deal with it.
Was this a result of the research?
You did some research that said, I think, that above $70,000 you don't become happier.
But do you become more satisfied?
No.
The research I did with Angus Deaton at Princeton, a famous economist, showed that in terms of happiness, in terms of emotional tone, positive and negative, having a lot of money doesn't make you happier, but being poor makes you miserable. So there's a threshold, which was approximately $70,000 in the US. Above that, extra money didn't
make you emotionally happier.
But with life satisfaction, it was a different story.
With life satisfaction, that doesn't satiate.
So it's always good to have more.
Because basically, I think, money is a proxy for success
and it's a proxy for subjective success in many cases.
So it's not necessarily about spending it or doing something with it.
It's just a measure.
Just getting it.
I mean, you know, you look at all those people,
all those billionaires working their heads off,
and they're clearly not doing this because they need more money.
They're trying to get more money.
They're trying to get more money
because that would be an indication
that they're good at what they do,
I think mostly it's a proxy.
Does either of those variables correlate with longer living,
happiness or satisfaction?
That's both, apparently.
But, you know, it's hard to separate.
And I haven't been following it, you know;
shortly after deciding that I didn't know what well-being was,
I sort of stopped doing research on this, so I haven't been following.
But I think there's clear evidence that being affectively happy is really good for you.
And you do live longer and you live better and so on.
And life satisfaction works in the same direction.
Whether they're separable, which of them is more important, that I don't know.
I want to switch gears a little bit and talk about behavior.
and I'd love your insight and expansion upon the idea that we can change behavior,
and how do we go about changing our behavior?
Well, you know, I'm not sure about the premise.
I think changing behavior is extremely difficult.
There are a few tips and, you know, a few guidelines about how to do that,
but anybody who is very optimistic about changing behavior is just deluded.
It's hard to change other people's behavior.
It's very hard to change your own.
Not simple.
This is what marriage is all about, right?
Among other things.
You know, married people
try to change each other's behavior,
and it's a source of a lot of dissatisfaction.
Yeah, not on their way to a good marriage, I think.
We'd all be happier with lower expectations.
Yes, I mean, and even if you have expectations,
don't try to change the other person because, you know,
it's very unlikely to work in a significant way.
I can think of the common
ways that we would sort of go about behavior change, and it would be, you know, making good
behaviors easier or negative behaviors harder?
I think that's the main insight, you know; when you want to influence somebody's
behavior, that's a very big insight. I've always thought that this is the best psychological
idea ever, you know, so far as I'm concerned. But it's that when you want somebody to move
from A to B in terms of their behavior, you can think of it that there are two ways of doing
it. You can push them. Or you can ask the question, why aren't they doing B already?
Which is an unusual question, but you know, why? So then when you ask, why not, why aren't
they doing B? They ought to, as I think they ought to. Then you get a list of what
Kurt Lewin, a psychologist who is my guru in this, my hero, and many people's hero,
spoke of as restraining forces.
I mean, so there are reasons why they're not where you want them to be.
So he spoke of behavior as an equilibrium.
There are forces that are pushing you one way, forces that are pushing you the other way.
So how loud you speak, how fast you drive.
It's easy to think of it as an equilibrium.
And what we tend to do when we want to move people from A to B is we push them.
We add to the driving forces.
And Kurt Lewin's insight was that this is not what you should do.
You should actually work on the restraining forces and try to make them weaker.
And that's a beautiful point.
And he had that image that, you know, I've had since I was an undergraduate.
And I'm not sure, actually, whether it was his image or something that I drew from reading
him, but it's like you have a plank and it's being held by two sets of springs. You know,
you want it to move in one direction. And so you could add another spring that would push it that
way, or you could remove one of the springs that are holding it back. And the interesting thing,
and that's the striking outcome, is that when it moves, if it moves
because you've added to the driving force,
then at equilibrium, it will be in a higher state of tension
than it was originally.
That is because you've compressed one spring,
and so it's pushing back harder.
But if you remove a restraining force, at equilibrium,
there'll be less tension on the system.
I must have been 20 years old.
I thought that's just so beautiful.
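To make Lewin's spring picture concrete, here is a minimal numerical sketch (not from the conversation; the stiffness values and function name are illustrative) of a plank held between two idealized linear springs. Both interventions move the plank the same distance, but adding to the driving force raises the tension at the new equilibrium, while weakening a restraining force lowers it.

```python
def equilibrium(k_drive, k_restrain, d=1.0, r=1.0):
    """Plank held between two compressed springs (a sketch of Lewin's idea).

    The driving spring pushes the plank toward +x with force k_drive * (d - x);
    the restraining spring pushes back with force k_restrain * (x + r).
    Returns the resting position and the tension, i.e. the force each side
    still exerts on the plank once it has settled.
    """
    x = (k_drive * d - k_restrain * r) / (k_drive + k_restrain)
    tension = k_drive * (d - x)  # equals k_restrain * (x + r) at equilibrium
    return x, tension

print(equilibrium(k_drive=1.0, k_restrain=1.0))  # x = 0.00, tension = 1.00
print(equilibrium(k_drive=2.0, k_restrain=1.0))  # pushed harder: x ~ 0.33, tension ~ 1.33
print(equilibrium(k_drive=1.0, k_restrain=0.5))  # restraint eased: x ~ 0.33, tension ~ 0.67
```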
What do you wish that everybody knew about psychology,
that you don't think that they do?
If that was class one, what's class two?
You know, class two, which is the development from class one, you know, it's the same idea extended.
Class two is that behaviors don't necessarily reflect the personality, but behaviors have a lot to do with the situation.
And so if people behave in strange ways, look at the situation they're in, and what the pressures in the situation are that make them act as they do.
So there is a bias that a well-known social psychologist called
the fundamental attribution error.
And that means that when you see people acting in some way,
you think that it's because of their personality that they do it.
That may not be the case.
It's quite likely that the situation is making them do it.
I'd like people to know that motivation is complex
and that people do good things for a mixture of good and bad reasons,
and they do bad things for a mixture of good and bad reasons.
And I think that the point of educating people in psychology
is to make them less judgmental.
Just have more empathy and more patience,
and being judgmental doesn't get you anywhere.
When you talk about situational,
one of the things that comes to mind is it's so easy for us to give our friends advice.
But if we were in that situation, we might not necessarily see it.
Why is that the case?
Why is it so much easier to give other people advice?
I mean, feelings get in the way of clear thinking.
There is a phenomenon that we call the endowment effect,
which is that I'd ask for more money to sell you my sandwich
than I'd pay to get it.
I mean, that's essentially the endowment effect.
And our explanation of it, there are many explanations,
but the story I like to tell about it is that it's more painful
to give something up than to get something.
But there is an interesting result.
that if you have an agent making decisions on somebody's behalf, that agent doesn't have
loss aversion.
So that agent sells and buys at the same price, which is the economically rational thing to do.
Where this goes into policy and governments and really important things is that governments
are like agents or people who think about the good of society.
And agents, they take the economic view.
They take the view of what things will be like at the end.
They don't figure out that there are some people who are going to be losing because of the reform
that they make. And it turns out that you can really expect losers, potential losers, to
fight a lot harder than potential winners. And that's the reason that reforms frequently
fail, and that when they succeed, they're almost always way more expensive than anticipated.
And they're more expensive because you have to compensate the losers. And that frequently is
not anticipated. So that's an example of a story that incorporates behavior change and the
difference between perspective, between being, you know, in the situation, feeling the pain
of giving up the sandwich and not feeling the pain of giving up the sandwich.
That would have huge public policy sort of implications, too, right, that we don't tend to think
about or discuss. That's a really interesting angle there. I want to come back
to sort of situational decision-making, based on sort of like what we see is all there is,
and we have these feelings that we can't sort of dissociate from.
How does environment play a role, like the physical environment,
in sort of what we decide or does it?
I mean, you know, there are sort of obvious things that we know.
If people are hot and bothered and distracted and there is a lot of noise and so on,
and then they'll think less well, and that we know.
But even there, there are puzzles.
I mean, many people think and work a lot better in cafes,
you know, where there is actually ambient noise and activity around them,
and it helps them concentrate better.
So there isn't a very simple story of the environment.
But certainly, you can make the environment tough enough
so that people won't be able to think properly.
That's feasible.
Are there things that we can do to,
I guess, push the environment to be more conducive to clear thinking, the physical environment in this
case? Oh, there are all sorts of, you know, odd findings, you know, the color of the
room. Some colors are better than others. You would expect that. Some colors are more calming than
others. So you wouldn't want to be in a red room making decisions. But, you know,
those are extreme and minor effects.
I want to come to intuition and noise later.
Is there anything else that stands out
that gets in the way of clear thinking
that we can sort of bring to the surface now?
Well, you know,
what gets in the way of clear thinking
is that we have intuitive views
of almost everything.
So as soon as you present a problem to me,
I have some ready-made answer.
And what gets in the way of clear
thinking is those ready-made answers, and we can't help but have them. So that's one thing that
gets in the way. Emotions get in the way. And I would say that independent, clear thinking
is to a first approximation impossible in the sense that, you know, we believe in things most
of the time, not because we have good reasons to believe them. If you ask me for reasons,
I'll explain them to you. I'll always find a reason.
But the reasons are not the causes of our beliefs.
We have beliefs because mostly we believe in some people, and we trust them, and we adopt their beliefs.
So we don't reach our beliefs by clear thinking, you know, unless you're a scientist or doing something like that.
But even then, it's probably a very narrow...
But that's very narrow, and there is a fair amount of emotion when you're a scientist as well
that gets in the way of clear thinking, you know, commitments to your previous views,
being insulted that somebody thinks he's smarter than you are. I mean, lots of things get in
the way even when you're a scientist. So I'd say there is less clear thinking than people
like to think. Is there anything that we can do at the belief formation stage? Like, it sounds
almost as though when you say that we're reading a newspaper, we read this op-ed, and
it's well-constructed and fits with our view of the world, therefore we adopt that
opinion, and we forget the context that we didn't learn it through our own experience or
reflection. We learned it sort of from somebody else, so we don't know when it's sort of
likely to work or not work, but we just proffer that as our opinion, is there?
That's how I believe in climate change. I believe in the people who tell me there is
climate change. And the people who don't believe in climate change, they believe in other
people. But similarly, there's like fake news and all this other stuff that we would have the
same reaction to. You know, but I'm much more likely to believe fake news on my side than
the fake news on the other side. I mean, it's true that there is a huge degradation in public
discourse in the last 10, 15 years in the United States. I mean, there used to be an idea that
facts matter. What would be your hypothesis as to why that is playing out? Without getting
into politics, because I don't want to talk politics, but like, why is that? Well, I mean,
it's hard to answer that question without politics because the general political polarization
has had a very big effect,
and the fact that people can choose the sources of information.
Let's switch gears a little bit and talk about intuition.
I think one of the things that strikes me the most about some of the work that you've done
is the cases where we're likely to trust our intuition and when we're not.
And so, correct me if I'm getting this wrong,
so it's sort of like a stable environment, repeated attempts,
and rapid feedback.
It strikes me that most decisions made in organizations
do not fit that environment.
And yet, we're making a lot of these decisions
on judgment or experience.
What are the ways that we can sort of make better decisions
with that as the context?
Well, in the first place, I think, you know,
you shouldn't expect too much.
Back to low expectations.
True, yeah, you should have low expectations about improving decisions.
I mean, there is, you know, one basic rule is slow down, especially if you have that
immediate conviction, slow down.
There are procedures, you know, there are ways of reaching better decisions, but reaching
better judgments, and we can talk about them.
I would love to.
If you really want to improve the quality of decision-making, use algorithms.
I mean, whenever, wherever you can, if you can replace judgments by rules and algorithms, they'll do better.
Now, there are big social costs to trusting, to allowing algorithms to make decisions, but the decisions are likely to be better.
So that's one thing.
If you can't use algorithms, then you slow yourself down.
And then there are things that you can do for certain types of problems, and there are different types of problems.
So one class of problems
is forecasting problems.
Phil Tetlock
has that book on superforecasters,
where he identifies
the people who are good at forecasting the future
and what they do
that makes them good.
And, you know, he tries to train people,
and you can improve people.
So that's one class of problems.
I'm interested specifically
in another kind of
problem, judgment problems, where basically you're considering options or you're evaluating
a situation and you're trying to give it a score. There, there is advice, I think, on how to
do it. For me, it goes back to something I did in the Israeli army when I was like 22 years
old. So that's a long time ago, like 63 years ago. I was a psychologist in the Israeli army,
and I was assigned the job of setting up an interviewing system for the army.
It was ridiculous, but, you know, this was the beginning of the state of Israel,
so people were improvising all over the place.
So I had a BA, and I think I was the best-trained psychologist in the army.
My boss was a chemist.
Brilliant.
But anyway, the existing system was one where people would interview
and try to form an intuitive global image of how well that recruit would do as a combat soldier,
which was the object of the interview. And because I had read a book by Paul Meehl, I took a
different tack. And the different tack was I identified six traits that I sort of made up. And I
had them ask questions and evaluate each of these traits independently and score it and write down
the score, then go on to the next trait. And they had to do it for all six traits. And that's all I
asked them to do. And the interviewers, who were about one year younger than I, all recruits,
but very, very smart, selected for being good at it, they were furious with me. And they were
furious with me because they wanted to exercise their intuition. And I still remember that
one of them said, you're turning us into robots. So I compromised with them. And
I said, okay, you do it my way.
And I told them, you try to be reliable, not valid, you know.
I'm in charge of validity.
You'll be reliable, which was pretty arrogant, but that's how I presented it.
But then, when you're done, close your eyes and just put down a number of how good a soldier is that guy going to be?
And when we validated the results of the interviews, it was a big improvement
on what had gone on before.
But the other surprise
was that
the final intuitive judgment
added something.
It was good,
it was as good as the average
of the six traits,
and not the same.
It added information.
So actually,
we ended up with a score
where half
was determined
by the specific ratings
and the intuition
got half the weight.
And that, by the way,
stayed in the Israeli Army
for well
over 50 years. I don't know whether it is still; I think probably some version of it is still
in force. But around 15 years ago, I visited my old base, and the commanding officer of
the research unit was telling me how they run the interview, and then she said, and then
we tell them, close your eyes. So that had stayed for 50 years. Now, the close your eyes,
and that whole idea is now the basis of the book that I'm writing.
So actually, I have the same idea, really,
that when you are making decisions,
you should think of options as if they were candidates.
So you should break it up into dimensions,
evaluate each dimension separately,
then look at the profile,
and the key is delay your intuition.
Don't try to form an intuition quickly,
which is what we normally do.
focus on the separate points, and then when you have the whole profile, then you can have an
intuition, and it's going to be better. Because people form intuitions too quickly,
and the rapid intuitions are not particularly good. So if you delay intuition until you have
more information, it's going to be better. I'm curious how we delay intuition. You delay intuition
by focusing on the separate problems.
So our advice is that if you have, you know,
a board of directors making decisions about an investment,
we tell them you do it that way.
Take the separate dimensions
and really think about each dimension separately and independently.
And don't allow, you know, if you're the chair,
don't allow people to give their final judgment.
So we wait until we cover the whole thing.
I mean, if you find a deal breaker, then you stop.
But if you haven't found a deal breaker, wait to the end and look at the profile,
and then your decision is almost certainly going to be better.
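As a hypothetical sketch of that discipline, the scoring could look something like this: each dimension is rated independently first, and a delayed, eyes-closed global intuition is added only at the end, here given roughly half the weight, as in the army system described above. The trait names, scale, and weights are illustrative, not the actual form.

```python
from statistics import mean

def structured_score(trait_ratings, delayed_intuition, intuition_weight=0.5):
    """Combine independent per-dimension ratings with a global intuition
    that is formed only after the whole profile is in view."""
    profile = mean(trait_ratings.values())
    return (1 - intuition_weight) * profile + intuition_weight * delayed_intuition

ratings = {  # six dimensions, each scored 1-5 on its own before any overall judgment
    "responsibility": 4, "sociability": 3, "punctuality": 5,
    "resilience": 3, "initiative": 4, "independence": 2,
}
print(structured_score(ratings, delayed_intuition=4))  # 3.75
```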
Does that include weighting the different aspects of the problem differently?
Do you highlight that in advance?
Yeah, I mean, it makes you see the trade-offs more clearly.
Otherwise, when we don't follow that discipline, there is a way in which people form impressions.
Very quickly, you form an impression, and then you spend most of your time confirming it instead of collecting evidence.
And so if accidentally your impression was in the wrong direction, you are going to confirm it,
and you don't give yourself a chance to correct it.
Independence is the key because otherwise when you don't take those precautions, it's like having
a bunch of witnesses to some crimes and allowing those witnesses to talk to each other.
They're going to be less valuable if you're interested in the truth than keeping them rigidly
separate and collecting what they have to say.
What have you seen work in a repeatable way, maybe in
a particular organization or across organizations to not only reliably surface disconfirming
evidence, but then place a value on what is surfaced instead of being dismissive.
Is there a framework for that?
Is there?
Well, yeah, there are many, you know, there are many procedures like red team, blue team,
the devil's advocate.
I mean, there have been, you know, many attempts.
In general, you know, if you are the head of the group that makes decisions, one of your
missions would be to protect the dissenters because they're very valuable and you should
make it painless to dissent or as painless as possible because it's hard to dissent.
It's painful and costly.
So protecting dissenters is important.
I'm curious about the distinction between
intuition and judgment.
You had mentioned intuition, judgment, intuitive judgment.
Can you walk me through some of how those differ?
It's a bit hard to separate.
Judgment is what you do when you integrate a lot of information informally into a score of some kind.
We speak, we being my co-authors in the book we're writing.
We speak of judgment as measurement, but it's measurement where the measuring instrument is your mind.
But you do it informally, and because you do it informally, people are not necessarily going to agree.
So wherever we say it's a matter for judgment, we're allowing for differences, for variability.
Now, judgment can be more or less slow, more or less systematic.
So at one end, you have pure intuition, where you allow the judgment to go very quickly and so on.
And at the other end, you try to delay intuition, but ultimately, if you're making it by judgment, you're going to have a judgment, and it's going to be like an intuition, and you're going to go with it.
So whether the judgment is more or less deliberate, intuition is always involved at one point or another.
you're either sort of like listening to it or fending it off.
Yeah.
And our recommendation is fend it off.
Are there ways to judge the quality of somebody's judgment?
Yeah, sure.
I mean, some of them would be unique to the actual scenario,
but what are the sort of other ways that we could?
Well, I mean, you may require people to explain their judgments
and evaluate the quality of the explanation, you know,
whether it's logical, whether it uses the evidence, whether it uses all the evidence,
whether it is strongly influenced by wishes, whether the conclusion was reached before the
judgment supposedly is made. There are lots of ways for judgment to fail that can be
recognized. So it's harder to recognize very good judgment, but it's fairly easy to see
you know, what goes wrong, and there are quite a few ways for judgment to go wrong.
And I think some of those ways are the cognitive biases, like overconfidence and sort of using small
or extrapolating from small sample sizes. And one of the interesting things that I've heard you
say in interviews before, so correct me if I'm off here, is that you've studied cognitive biases
effectively your whole life and you're no better at avoiding them than anybody else.
Yeah, certainly not much better, no.
What hope do the rest of us have?
Not much. I mean, I think, you know, the quality of people's judgment is affected by education.
So in general, you know, more educated people make better judgments, I think, on average.
But people deciding, I'm going to make better judgments,
I don't think that's very hopeful.
I'm much more hopeful about organizations
because organizations think more slowly
and they have procedures for thinking
and so you can control the procedures.
Individual judgment is really hard to fix.
Not impossible.
One of the things that I see people do in response
to cognitive biases and trying to account for them
is to sort of make a list of them,
almost like a checklist,
and then go through that checklist and explain or rationalize why those things don't apply in this situation.
It also strikes me that the more intelligent you are, the more stories you'd be able to conjure up about why you're avoiding this.
I really think that's not very hopeful because there are so many biases, and the biases work in different directions anyway.
So sometimes you can recognize that a situation
is one in which you're likely to be wrong in a particular way.
So that's like illusions.
If you recognize a particular pattern
as something that gives rise to visual illusion,
then you don't trust your eyes.
You know, you do something else.
And the same thing happens when you recognize
this is a situation where I'm likely to make an error.
So sometimes you can recognize
the importance, for example, of what we've called an anchor.
So you're going to negotiate a price with somebody.
They start very high, and that has an effect.
So you know or you should know that the person who moves first in a negotiation has an advantage.
Because the first number changes everybody's view of what is considered plausible.
So it moves things in that direction.
That's a phenomenon.
People can learn that, and they can learn to resist it.
So when I was teaching negotiations, I would say, if somebody does that to you,
comes up with the number that's absurd.
I would say, lose your temper, make a scene, say,
I will not start the conversation from that number.
It's an absurd number.
I don't want to.
Let's erase that number.
So that's something that, you know, you can improve if you recognize it.
I think people are aware of the fact that you shouldn't make a decision about road safety
within a short interval of a terrible accident.
You know, so you should allow things to settle down and cool down.
There is a more subtle error, and harder to fix, which is that the best prediction, the best guess,
is always less extreme than your impression.
Intuitive prediction is, as we say, not regressive.
It doesn't recognize regression to the mean.
But statistics is statistics.
In statistics, things are less extreme.
Should I give you my favorite example of a bias?
Yeah, please.
I have been unable to think of a better one.
But the story is about Julie.
That's part of the story.
That's her name.
She is a graduating senior at university,
and I'll tell you one fact about her
that she read fluently when she was four.
What's her GPA?
And the interesting thing here
is that everybody has a number.
As soon as I told you that,
the number came to mind.
Now, we know where that number came from.
We really, that's one of the few things
that I'm reasonably sure I understand perfectly.
And this is that when
you hear she read fluently at age four, you get an impression of how smart she is, of how
precocious she was at age four. And you could put that in percentiles. You know, where
did that put her on a percentile for sort of aptitude, ability? And it's high. It's not,
you know, if she had read fluently at age two and a half, it would be more extreme, but age
four is pretty high. So say it's the 90th percentile.
And then the GPA that comes to your mind is around the 90th percentile in the distribution of GPA.
So you pick something, your prediction is as extreme as your impression.
And it's idiotic, statistically, completely stupid, because clearly the age at which a child learned to read is not all that diagnostic with respect to GPA.
So it's better than nothing.
If you didn't know anything, you would predict the mean GPA,
whatever it is, 3.1, 3.2.
Now, she's bright, so probably a little higher,
but not 3.7.
You don't want to.
So that's called, that's a bias.
That's non-regressive prediction.
And that's very hard to resist.
Sometimes I'm able to resist it.
But never when it's important.
you know, when I'm really involved in something, I don't think about it, but sometimes I will
remember, oh, you know, that's a situation where I should moderate my prediction.
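A minimal sketch of the correction being described: shrink the intuitive prediction back toward the mean in proportion to how diagnostic the evidence really is. The 0.3 correlation below is a made-up illustration, not a figure from the conversation.

```python
def regressive_prediction(impression_percentile, correlation, mean_percentile=50):
    """Move the prediction toward the mean by the strength of the evidence:
    with correlation 1.0 you keep the intuitive guess; with 0.0 you predict the average."""
    return mean_percentile + correlation * (impression_percentile - mean_percentile)

# Intuition puts Julie at the 90th percentile of GPA because early reading puts her
# precocity at the 90th percentile. If early reading predicts GPA only weakly (say r = 0.3),
# the defensible guess sits much closer to the average GPA than to 3.7.
print(regressive_prediction(90, correlation=0.3))  # 62.0
```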
And if you're conscious of it, that's an example of one you can sort of talk yourself out of?
Yeah, you can talk yourself out of it.
Although, you know, you usually will find a way to cheat and end up with your intuition.
It's remarkable.
You know, when you've been in academic
life a long time, you've been in many situations where people discuss a job candidate.
And absurdities of that kind are very common.
So somebody, a job candidate gives a talk, and people evaluate the talk.
And this is something that happened, you know, at Berkeley when I was teaching there,
that somebody gave a talk.
It wasn't a very good talk.
He stammered a bit.
Now, that person had teaching prizes, and yet what was said about him in the discussion, he can't teach.
You know, we heard the talk.
So that's a mistake.
But the funny thing is, you can point out to people that that's a mistake.
They still don't want to hire him because he gave a lousy talk.
So it's hard to resist.
It's interesting.
I think one of the ways I probably got my job is using psychology in the
interview, which is asking why I was there and then reinforcing those beliefs throughout the
interview. I want to come back just one second to the immediacy of sort of having a stimulus and
then making a decision. So we use the example of roads and a tragic accident happens and
you're rethinking sort of policy or laws around the roads. How much of that do you think is
social pressure? And I'm wondering if we could even extract.
that a little more to, we're taught to answer questions on a test right away, right?
So we see this question and then we answer it.
We're taught that we, or maybe it's reinforced, taught is probably the wrong word,
that politicians need to have an immediate response, and even if they know the best
thing to do is like, okay, like let this settle, take some time.
Society writ large seems to demand it.
Like, the environment is not conducive.
I think it's pretty clear that people prefer leaders who are intuitive and who are overconfident.
Leaders who deliberate too much are viewed with suspicion, you know.
So I think Obama was at a certain disadvantage relative to George Bush, you know.
Because he was seen as more deliberate or thoughtful.
Yeah, he was more deliberate.
And then when you're very deliberate, you look as if you
don't know what you're doing. But when you act with confidence... So people want leaders who are
intuitive, I think, by and large, provided they agree with it. I'm just working my way back through
some of these rabbit holes that we've gone down. You taught negotiations. I'm curious what would
be in your sort of syllabus for negotiations that everybody should learn about negotiations when it
comes to your work and psychology?
Well, you know, that goes back to a theme that we started with. The essence of teaching
negotiations is that negotiation is not about trying to convince the other guy.
It's about trying to understand them.
So again, it's slowing yourself down.
It's not doing what comes naturally, because trying to convince them is applying pressure.
Arguments, promises, and threats are all ways of applying pressure.
And what you really want is to understand, you know, what you can do to make it easy for them
to move your way.
Very non-intuitive, that's a surprising thing when you teach negotiation.
It's not obvious.
You know, we are taught to apply pressure; we're socialized that way.
You mentioned that there was procedures for thinking in organizations.
Are there any that stand out in your mind that we could use to elevate thinking, and if not elevate, then give feedback on the quality of thinking to improve it?
Well, I think one of the ideas that people like the most is an idea by Gary Klein, which he calls the premortem, and that's a universal winner.
People really like that idea.
And this is that when you're about to make a decision as a group, not quite made it, because if you've made it, it's too late, but you're approaching it.
And then you get people in a room, and it can be the people who are making the decision.
And you said, suppose it's two years from now.
And we made the decision that we're contemplating.
And it turned out to be a disaster.
Now, you have a page in front of you.
Write the history of that disaster in bullet points. That's the premortem. And it's beautiful as an
idea. It's beautiful because when people are coming close to a decision, it becomes difficult
to raise doubts or to raise questions. People who are slowing the group down when the group's
nearing a decision are perceived as really, you know, a nuisance, annoying. You know, they want to get rid of
them. And the premortem legitimizes that sort of dissent and those sorts of doubts. Not only legitimizes
it, you know, it rewards it. And so that's a very good idea. I don't, you know, I don't think that
it's going to prevent people from making mistakes, big mistakes, but it could, certainly, it will
alert people to possible loopholes, to things that they ought to do to make a safer decision. So that's a
That's a good procedure.
And there are many others.
What comes to mind is to make intelligence,
I mean, the collection of information, independent of the decision-makers' wishes.
And you really want to protect the independence of the people who are collecting the evidence.
And I would add a procedure that really people don't like,
but if it were possible to implement it, I think, would be good.
And that's that when you're going to be discussing a topic
and it's known in advance and people have been sent some material
to think about the topic,
that you may want them to write down the decision they are in favor of
before the discussion starts.
That has many advantages.
It's going to give you a broader diversity of points of view,
because people tend to converge very quickly in a group discussion.
And it forces people to be better prepared, except people don't want this.
So I don't know whether it's even possible to implement it.
But clearly, if you could, would be a good idea.
What are the reasons people don't want it?
Too much work.
Right.
Forces you to do a bunch of work.
Rather than the signaling, you can
sort of get away with.
Yeah, and then, you know, there's somebody who is going to prepare the case,
and so I glanced at the material, and then, you know... So a lot of meetings are a tremendous
sink for wasted time, and improving the quality of meetings would be a big thing.
Do you have any insights on how to do that?
Keeping them short, you know. I'm not a professional at fixing meetings, so I
have a few ideas, but not a complete view.
The question of structuring the meetings to be discussing topics one at a time, that I think
is really useful.
I'll give you an example.
I mean, it's something that I suggested when I was consulting, but for some reason people
didn't buy that suggestion.
So when an investment is being discussed, say, by an investment
firm. Some staff people, if it's a big investment, staff people will prepare a briefing
book with chapters. Now, our recommendation would be that the staff should end each chapter
with a score. How does that chapter taken on its own independently of anything else
affect the likely decision? And then you could structure the meeting that discusses this,
the meeting of the board, say,
to discuss these scores one at a time.
That has the effect that I was talking about earlier,
making the decisions,
making the judgments about the dimensions,
we call them mediating assessments,
it's a jargon term.
The mediating assessments come first,
and then you have a profile of them,
and then you make a global judgment.
And you can structure it.
So if the staff has presented a score and you discuss in the board, do we accept their score, you're forcing people to have a look at the evidence.
And think about why they would accept or reject, and then they feel like they have to construct an argument that might be less intuitive.
That's it.
So, you know, there are ways of doing this, but if you're going to be too rigid about it,
it won't work either.
I'm curious what other advice you gave as a consultant that nobody followed.
Oh, I mean, virtually all the advice I give, people don't follow.
I mean, you know, I think that's not, you shouldn't, you know,
you're not going to be a consultant if you expect your advice to be taken.
You have to give the best advice you can.
What would be other examples of something you think would be widely applicable
that you would have advised people on and then sort of saw them drop the ball?
Well, I mean, you know, I would advise people who make a lot of decisions to keep track of their decisions and of how they turned out, so that later you can come back and evaluate your procedures and see whether there is anything in common among those decisions that turned out well and not so well and so on. People hate doing this.
Why do you think people hate doing it?
Oh, because retrospectively they may look foolish, some of them or all of them,
or in particular the leader.
So they really don't like keeping track.
I mean, there are exceptions.
Ray Dalio and his firm, where everything is explicit.
Bridgewater, yeah.
Yeah, Bridgewater.
But in general, in my experience, I haven't consulted with Bridgewater, they don't need me.
But in general, when I suggested that, never went anywhere.
What are the variables that you would recommend people keep track of?
What would your decision journal look like?
Oh, I mean, my decision journal would be a mess.
I don't, I'm not putting myself as an example.
So obviously the outcome, but you've got to do that part after.
Yeah, but no, no.
You would want to say what were the main arguments,
pro and con? What were the alternatives that were considered? It doesn't have to be very detailed,
but it should be enough so that you can come back later and debrief yourself. Should you have a
calibration? Like what degree of confidence you have? That would be good. I mean, you know,
it would depend on something that you could evaluate later. It strikes me that decision journals
and premortems are a way to identify people
that are sort of perhaps suppressed by their manager,
where you have somebody who's actually
better at exercising judgment than the person, you know,
that they're working for.
And this would be a pain-free sort of way
to calibrate that score over time
and identify the quality of judgment in a consistent way.
Oh, yeah.
I mean, that strikes me as worth a lot of money to an organization.
Yeah, but also very costly.
And you will see that certainly anything that threatens the leader is not going to be adopted.
And leaders may not want something that threatened their subordinates either.
People are really very worried about embarrassment.
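A hypothetical sketch of what one entry in such a decision journal might capture, built from the fields mentioned here (arguments pro and con, alternatives considered, a confidence calibration, the eventual outcome); the field names and types are illustrative.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class DecisionRecord:
    """One decision-journal entry, written before the outcome is known."""
    decision: str
    arguments_pro: list[str]
    arguments_con: list[str]
    alternatives_considered: list[str]
    confidence: float                      # 0-1, so it can be scored against outcomes later
    decided_on: date = field(default_factory=date.today)
    outcome: Optional[str] = None          # filled in at the later debrief

journal = [DecisionRecord(
    decision="Enter market X",
    arguments_pro=["large unmet demand"],
    arguments_con=["unproven distribution"],
    alternatives_considered=["partner instead of building"],
    confidence=0.7,
)]
```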
You're writing a book now on noise.
Yeah.
Tell me about noise and decision making.
Can you explain the concept?
Yeah.
I can really explain it by saying what, you know, was the beginning of it,
which was a consulting assignment in an insurance company where we had the idea of running a test
to see whether people in a given role who are supposed to be interchangeable agreed with each other.
So, you know, when you come to an insurance company and an underwriter gives you a premium,
the underwriter speaks for the company.
And sort of, you expect with any underwriter, that it doesn't matter which underwriter you get to quote a premium.
And the company has that expectation.
It shouldn't make much difference.
So we tested that.
And they constructed some cases.
And then we had some like 50 underwriters assess a premium for the case.
With the same information.
Yeah, with a really very realistic case; we didn't construct it.
They constructed the case.
And they conducted the experiment.
But now, the interesting question is,
how much variation do you expect there to be?
So we asked the executives, the following question,
suppose you take two underwriters at random.
By what percentage do they differ?
I mean, you look at the difference between their premiums,
divide that by the average premium,
what number do you get?
And people expect 10%.
By the way, it's not only the executives in that company;
for some reason, people expect 10%.
And it was roughly 50%, 5-0.
So that's what made me curious about noise.
That and the fact that the company was completely unaware
that it had noise.
It took them completely by surprise.
So now we're writing a book because there's a lot of noise.
So wherever there is judgment, there is noise,
and more of it than you think.
So that's the pattern.
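A small sketch of the measurement being described: for every pair of underwriters quoting the same case, take the absolute difference between their premiums divided by the pair's average, then average across pairs. The premium figures below are invented for illustration; the point is only that the result usually sits far above the roughly 10% people expect.

```python
from itertools import combinations
from statistics import mean

def noise_index(quotes):
    """Average relative difference between premiums quoted for one identical case."""
    return mean(abs(a - b) / ((a + b) / 2) for a, b in combinations(quotes, 2))

quotes = [9500, 12000, 16000, 7800, 14500]   # hypothetical quotes from five underwriters
print(f"{noise_index(quotes):.0%}")          # ~36%, well above the ~10% executives expect
```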
Are there procedures to reduce noise?
And conversely, it strikes me that the variation would be good,
but maybe only in an evolutionary concept.
Well, we call that noise is useless variability.
I mean, variability can be very useful if you have a selection mechanism
and some feedback.
So evolution is built on variability, and of course it's useful.
But noise among underwriters is
useless. There's nothing. Nothing gets learned. There's no feedback. It's just noise. And it's costly.
The first advice, of course, would be algorithms, as I said earlier. So algorithms are better
than people, than judgment. That's not intuitive, but it's really true. And after that,
then, you know, the procedure that I mentioned earlier for making decisions in an orderly
way, by breaking it up into assessments, and that's the best that we can do. And there is one very
important aspect that I haven't mentioned. And this is training people in what the scale is.
So there is one piece of advice that you'd have for underwriters, that they should always compare
the case to other cases. And if possible, if you can have them share
the same frame of reference with other underwriters, you're going to cut down on the noise.
Oh, that's a clever idea, yeah.
So controlling the scale.
And that exists in human resources with performance evaluation, which is one of the scandals
of modern commerce, how difficult it is. But for performance evaluation, they have a thing
that's called frame-of-reference training, which is teaching people, you know,
how to use the scale.
There's a lot of variability in the scale.
And a part of what the superforecasters do is
they make judgments in probability units,
and they are taught to use the probability scale.
So learning the scale is a very important aspect of reducing noise.
I know we're coming up to the end of our time here.
What have you changed your mind on in the past 10 years?
Oh, lots.
Anything big?
Yeah.
There's been a replication crisis in psychology, and some of the stuff that I really believed in
when I wrote Thinking Fast and Slow, some of that evidence has been discredited, so I've had to
change my mind.
What's the big one?
Some of the sexiest stuff, priming and unconscious priming, it just hasn't held up in
replication, and I believed it, and I wrote it as if it were true,
because, you know, the evidence suggested it.
And in fact, I thought that you had to accept it
because that was published evidence.
And I blame myself
for having been a bit gullible,
as I should have known
that you can publish things even if they're not true,
but I just didn't think that through.
So I changed my mind.
I'm now much more cautious about spectacular findings,
I mean, very recently, I think I have a theory about why psychologists are prone
or social scientists generally are prone to exaggerate, to be overconfident about their
hypotheses.
So I've done quite a bit of learning.
What's the theory?
Well, the theory, one element of the theory, is that all these hypotheses are true, in a
sense. You know, there's a famous study where you mention wrinkles to people and then
you measure the speed at which they walk, and they walk more slowly. Turns out that hasn't held up
in replication, which is very painful. It's one of the favorite studies. But actually, you know
that if mentioning wrinkles is going to have any effect on the speed of walking,
it's not going to make people faster. If it has any influence, it's going to
make them slower. So directionally, all these hypotheses are true. But what people
don't see is that there is a huge number of factors that determine the speed at which individuals
walk and the differences in the speed of walking between individuals. And that's noise. And people
neglect noise. And then there is something else, which touches on both philosophy and
psychology. When you have intuitions about things, there are clear intuitions and there are
strong intuitions. They're not the same. So a clear intuition is, if I offer you a trip to Rome,
or a trip to Rome and an ice cream cone, you know what you prefer. It's easy. But it's very weak,
of course. I mean, the difference in the amount of money you would pay to get a trip to Rome
versus a trip to Rome and an ice cream cone: nothing.
And I should add one thing:
to see the clear intuitions,
you have to be in the kind of situation
that psychologists call within subject,
where you see both the option with the ice cream cone
and the one without the ice cream cone.
So in a within-subject situation, that's an easy problem.
In a between-subject situation, it's an impossible problem.
But now if you're a philosopher, you're always in a within-subject situation.
But people live in a between-subject situation.
They live in one condition.
And the same thing is true for psychologists.
So psychologists, when they cook up their hypotheses, they're in a within-subject situation.
But then they make guesses about what will happen between subjects.
And they're completely lost between clear intuitions
and strong intuitions.
We have no way of
calibrating ourselves.
So that makes us wildly overconfident
about what we know
and reluctant
to accept that we may be wrong.
That's a great place to end
this conversation, Danny. Thank you so much.
The Knowledge Project is produced
in collaboration with Jason
Oberholzer and the team at Charts
and Leisure. You can find show notes on this episode as well as every other episode at
fs.blog slash podcast. If you find this episode valuable, share it on social media and leave a
review. To support the podcast, go to fs.blog slash membership and join our learning community.
You'll get hand-edited transcripts of all the podcasts and so much more. Thank you for listening.
Thank you.