Modern Wisdom - #424 - Steven Pinker - The Problem With Trying To Be Rational
Episode Date: January 20, 2022. Steven Pinker is a Cognitive Psychologist at Harvard University, a psycholinguist and a best-selling author. It would be nice to always make the right decision, to escape the prison of human emotions and biases and operate from a purely rational place. Steven's new book breaks down rationality into its components in an attempt to understand just what we're all missing from our mental makeup. Expect to learn why betting websites are the most accurate forecasters of the future, why learning lists of cognitive biases won't always make you more effective, whether smart people are more or less rational on average, whether politics makes you dumber, how to balance rationality with a desire to be intuitive and present and much more... Sponsors: Join the Modern Wisdom Community to connect with me & other listeners - https://modernwisdom.locals.com/ Get 10% discount on your first month from BetterHelp at https://betterhelp.com/modernwisdom (discount automatically applied) Get a Free Sample Pack of all LMNT Flavours at https://www.drinklmnt.com/modernwisdom (discount automatically applied) Extra Stuff: Buy Rationality - https://amzn.to/3qtQ84X Follow Steven on Twitter - https://twitter.com/sapinker Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom - Get in touch. Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact/ Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Hello friends, welcome back to the show. My guest today is Steven Pinker. He's a cognitive
psychologist at Harvard University, a psycholinguist and a best-selling author. It would
be nice to always make the right decision, to escape the prison of human emotions and
biases and operate from a purely rational place. Steven's new book breaks down rationality
into its component parts and attempts to understand just why we're
all missing something from our mental makeup.
Expect to learn why betting websites are the most accurate forecasters of the future,
why learning lists of cognitive biases won't always make you more effective, whether
smart people are more or less rational on average, whether politics makes you dumber,
how to balance rationality with a desire to be intuitive and present, and much more.
If you haven't already joined the Modern Wisdom Locals community, then get yourself
over to modernwisdom.locals.com.
You can join over 2,000 other people who listen to the show.
It is a community where we can post memes, thoughts, insights, other podcasts, and pieces
of content that we found engaging,
and I'm doing regular Q&A live streams only for local supporters. That's modernwisdom.locals.com.
But now, it's time to learn about rationality with Steven Pinker. Welcome to the show.
Thank you, nice to be here.
It was a time not long ago when I thought that reading another Eliezer Yudkowsky blog post
or another Shane Parrish mental model definition about some cognitive bias that I didn't realize
that I had, there was a period where I was adamant that that was going to be the solution
to all of my problems in life.
And then I found out that it wasn't.
Why is it that I need a glossary of mental models, a toolkit, in order to be able to function?
Has making sense of the world always been this difficult?
It always has. I think we are equipped to reason about cause and effect and about logical implications and about probability. When the problems are ones that we have dealt with all our lives,
when they involve subjects that we deeply care about,
subjects that impinge on us. But when it comes to general-purpose tools
that we can apply across the board, including to novel situations,
like, oh, it didn't occur to me, this is another example of the sunk cost fallacy
or of the availability bias, namely reasoning from anecdote. Having those tools at your
fingertips as generic, all-purpose cognitive tricks is something you really do need to be reminded
of. You need to know the names of the fallacies and how to avoid them, and the names of the normative models, that is, the rules and
systems of how you ought to reason, to deal with novel and
abstract situations.
Daniel Kahneman got asked, I think by Sam Harris when they did a live event: after
all of this time, Daniel, learning about the human brain and biases, has it made you
any more rational? And his response was basically no. What are your thoughts on that? Have you managed
to make yourself any more rational? Somewhat. I mean, we know from the literature on biases that
I'm probably not the person to ask because all of us are subject to a bias bias namely all of us
think that everyone else is biased, but
not us.
So I might be the person least equipped to spot my own biases, but I tend to think so,
and there is research that suggests the people who are less susceptible to the classic
cognitive fallacies and biases have better outcomes in life in general.
They're less likely to get into accidents, to lose their jobs, to break up their relationships. As always, these pertain to averages. Certainly
less likely to be scammed by psychic or medical charlatans. So applying the average to myself, I would think so on average. I would hope so.
Are smart people any more or less likely to be rational? On average, yes. There is a correlation
between intelligence and rationality, but it's far from a perfect one. So there are plenty of smart people who are vulnerable to cognitive biases, who are fixed
in their beliefs and don't adjust them in response to changes in evidence, particularly
when it comes to beliefs that are sacred values of one's own tribe, one's own coalition,
one's own political ideology.
The so-called my side bias, that is you steer your reasoning toward a conclusion that makes
your own tribe look good.
Where tribe can be your religion, your political party, your hobby group.
Then smart people are just as vulnerable as less smart people.
This is one of the conclusions from Keith Stanovich in his book The Bias That Divides Us.
Stanovich has developed what he calls somewhat cheekily a rationality quotient as a complement
or alternative to the intelligence quotient.
And they are correlated, but far from perfectly.
Is there something that smart people should look out for in particular with regards to irrationality?
Certainly the my side bias, namely, are you really committed to some belief that is emblematic of your
politics, or your theory in some academic dispute? You're probably susceptible to
various kinds of motivated reasoning
like biased assimilation, which is to say you gobble up
information that seems to support your view
and you stay away from or don't read
or nitpick to death
evidence that goes against your beliefs. You try to spin doctor everything so
that if there's evidence that would seem to go against one of your beliefs, you
try to find loopholes in ways that you don't have to believe it. Of course, we
all do this. We're all lawyers that make the strongest possible case
for our clients, but it certainly is a way in which each one
of us can be less than optimally rational.
But of course, it makes it all the harder
to spot those biases in yourself.
That's why we belong to communities, where it isn't just
up to you, but you expose your ideas to criticism.
You allow them to be challenged.
You have a community that abides by free speech so that any opinion can be voiced and then
evaluated.
The way we undo our biases, typically: there are some self-aware souls, no doubt, who can
step outside their own biases,
but more commonly other people do it for us.
It's interesting thinking about the effect socially that we have.
I think a lot of what we need in the modern world is the ability to be rational while there's
social pressure around us.
It's very difficult to exist in a vacuum now because even if you are the most excluded,
isolated person working in a lighthouse in
Antarctica or somewhere, you probably still have Facebook and Twitter and an
internet connection. So are there other things that we need to look out for when we're in a
group, particular biases or ways that our rationality gets perturbed and
perverted when we're in a group? Yeah, certainly to be open to sources of information
other than the one that's going to ratify
your side's beliefs.
So not just to read, you know,
the Guardian or the Telegraph,
but to dip into the sources that you don't
habitually read.
To seek out sources that have themselves cultivated a reputation for objectivity and accuracy.
You mentioned a prominent member of the rationality community, Eliezer Yudkowsky, although also
in that community is Scott Alexander, writing in the blog
Astral Codex Ten.
And he often does literature reviews,
astonishingly thorough and astonishingly quickly,
where it's pretty clear at the outset
that he has not made up his mind,
and he does his best to say whether lockdown policies
are effective at stemming the spread of COVID,
a highly politicized issue. When he did a literature review
a number of months ago, he concluded that yes, they are somewhat effective, better than not having them,
but only after looking at both sides of the issue and all of the extant studies that he could find.
Did you see that he got married the other week?
I did see that.
I just got the notice yesterday.
Although I knew that, I'm in the Bay Area this year,
so I had dinner with him a couple of months ago,
and I met his fiancee.
Amazing. Yeah, I saw the photo.
I'm on the mailing list, Astral Codex Ten.
It's so strange.
After years and years of him writing under this anonymous pseudonym and
just being this person, where you have a relationship with their words but not with them.
Yes.
And it's so bizarre to now see a face to the words.
It is indeed.
And for many months until I met him, I did wonder, what was the actual physical body behind
this brilliant intellect?
It's so strange how we do attach that sort of embodied sense
to someone, you know, the pseudonymity that the internet affords.
And there's a lot of talk at the moment as well
about pseudonomous, pseudonomous...
Whatever it is.
Pseudonymous.
Pseudonymous, thank you for walking me through that word.
Those sorts of accounts increasingly, Zero HP
Lovecraft is one of those.
I think it's like Orange Tree is another one.
We have a bunch of different people online
that are deploying information,
but hiding behind pseudonyms because they don't necessarily
want to put themselves out there.
And yet it's interesting to think that Scott,
you know, was doing that whatever 10 or 15 years ago.
Well, indeed, in his particular case,
he's protective of his full name
because he is a practicing
psychiatrist and he wanted his patients not to be aware of, or affected by, his other identity
as a blogger and commentator.
Talk to me about Bayesian reasoning, because it's a term that I've come up against
previously, and yet I've never really understood how to use the principle of Bayesian reasoning to apply it to my own life on a day-to-day basis.
Yes, so this is one of the kind of signifiers or identifiers or identity badges for being a member of the so-called rationality community.
They think that every person should understand Bayes' rule, and it's almost a membership requirement that you understand it and endorse it.
It's the eponymous rule of the Reverend Thomas Bayes from the 18th century.
And it is an algebraic formula, but it's actually pretty simple.
And it's already spilled over into our everyday language
in the common term priors.
What are your priors?
That is actually taken right out of Bayes' rule.
The purpose of the rule is simply:
how should I calibrate my degree of credence
in a hypothesis, depending on the strength of the evidence?
So the idea is you don't just believe it or disbelieve it.
You've got a degree of belief from 0 to 1.
And the key insight is, as soon as you take that conceptual leap,
then you can apply the arithmetic of probability
to the problem of how to calibrate your belief to the evidence.
And so the idea is simple.
The output, the deliverable, the point of Bayes rule,
is what's called a posterior probability.
Posterior just means after you've looked at the evidence.
So when all is said and done, how much should I believe
that say masks stop the spread of COVID?
Or to what degree should I think that I have prostate cancer
if I get a positive prostate-specific antigen test reading? So how do you figure that out?
Well, there are just three numbers according to Bayes' rule. First is the prior. That is, before you
even look at the evidence, the symptoms of a patient, the test result,
the data and the literature, how credible is it to begin with?
What is the sort of accumulated weight of evidence and plausibility before you look at
a single data point?
That's the prior.
Now admittedly, there's some subjectivity that goes into that.
For a disease, you usually take the base rate in the population, what percentage of men
have prostate cancer.
It raises a question of, well, do you look at 65-year-old men?
Do you care whether it's white men or black men?
But let's just say that you start off with some kind of prior. Then you multiply that by the likelihood,
and in the lingo of Bayes' rule,
likelihood means: if the hypothesis is true,
what are the chances that you would see the data
that you are now seeing?
So in the case of a medical test: of all the people, say,
who do have prostate cancer,
for what percentage of them does the test correctly pick it up?
It's the sensitivity of the test, in the lingo of testing.
So that is technically probability of the data given the hypothesis.
That is, we still don't know whether it's true or not, but if it was true, how likely
is it that we would get those results, those symptoms, those test data?
You just divide that by how common the evidence is
across the board.
If the symptoms or the test results occur a lot,
that is, a high false positive rate together with the true positive rate.
That goes into the denominator.
As we all know from elementary school,
if the denominator gets bigger the whole
fraction gets smaller and so it's just the prior times the likelihood divided by the
commonness, and it is kind of intuitive. We all know that if a symptom of a disease, for
example, occurs for a lot of diseases, you know, fatigue, then just because, I don't know,
Rocky Mountain spotted fever has the symptom of fatigue.
You don't conclude that you have Rocky Mountain spotted fever
because fatigue can come from lots of things.
That's the commonness of the data in the denominator.
Likewise, if something is really going around a lot,
like Omicron, and you have some of the symptoms,
that would be the prior.
You say, well, I am feeling a bit achy.
Omicron doesn't seem like an implausible
explanation.
And likelihood, too, is kind of intuitive, namely, if it were true, if you do have, say,
Omicron, are you going to have a sore throat?
Most people with Omicron do have a sore throat.
I have a sore throat. Well, that ups my credence that I have Omicron.
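(In symbols, the rule being spelled out here is the standard form of Bayes' theorem:

$$P(H \mid D) = \frac{P(H)\,P(D \mid H)}{P(D)}$$

where P(H) is the prior, P(D | H) is the likelihood, and P(D) is the overall commonness of the data that goes in the denominator.)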
Anyway, that's Bayes rule. People are not very good at applying it to novel situations,
especially when the numbers are presented a little bit in the way that you and I have, in terms of probabilities between 0 and 1.
If, on the other hand, you present it in terms of frequencies.
That is, there are a thousand men in the population.
Ten of them have prostate cancer. Of the 990 who don't,
ten of those will test positive. Of the 10 who do, 9 will test positive. You test positive,
do you have prostate cancer?
People are pretty good at saying, well, basically what you said, it's kind of 50-50, isn't it?
If I gave you the same numbers, the same information in terms of the chance that you have prostate
cancer is .01, then people kind of blow off the base rate, right?
They just think, oh my god, positive tests, I must have the disease.
And then, according to classic research by Kahneman and Tversky, people neglect the
base rates.
They reason kind of by stereotype, by anecdote, and then they're not so good.
So that's kind of the Bayes story.
And anyone who's just heard it has all of a sudden become much more rational.
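(A minimal Python sketch of the arithmetic in the frequency example above, using the illustrative numbers from the conversation rather than real clinical figures:)

```python
# Bayes' rule: posterior = prior * likelihood / P(evidence).
# All numbers are the illustrative ones from the conversation, not clinical data.

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) for a single positive result."""
    p_positive = prior * sensitivity + (1 - prior) * false_positive_rate
    return prior * sensitivity / p_positive

prior = 10 / 1000               # base rate: 10 of 1,000 men have the disease
sensitivity = 9 / 10            # of the 10 who do, 9 test positive
false_positive_rate = 10 / 990  # of the 990 who don't, 10 test positive

print(posterior(prior, sensitivity, false_positive_rate))  # ~0.47, i.e. roughly 50-50
```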
I'm not going to use Scott Alexander for every example here, but he does a thing at the beginning
of the year where he makes predictions for the forthcoming year and then scores them at the end of it.
Do you think that that's an attempt at Bayesian reasoning?
Yes, so, forecasting, rational forecasting does use Bayesian reasoning as one of its
essential ingredients.
Yes, so just to give you a concrete example, this is based on the work of Philip Tetlock
and Barbara Mellers, who run forecasting tournaments.
They have become more popular through prediction markets where you can actually get skin in the
game, put money on the line.
What is the chance that Russia will invade Ukraine?
What is the chance that inflation will exceed 10% this year?
And you bet against other people.
But here's the way Bayesian reasoning comes in for a good forecaster.
And good forecasters tend not to be your name-brand pundits,
who tend to have a pretty crummy track record
because they're always pushing their political ideology
and it blinds them to the specifics of the situation.
The way Bayesian reasoning enters into
accurate forecasting is that the first thing you start off with
is the base rate, that is, the prior.
So for example, will there be a terrorist attack this year that will kill more than 10 people?
Well, the first thing that a rational forecaster would do is go to Wikipedia,
look at the number of terrorist attacks that have taken place every year for the last 10 years in that part of the world, and say, well,
let me start with that as my first guess.
And then I'll bump it up or down, depending on the specifics of what's happening this year.
Likewise, for Putin invading Ukraine, they might start off with, well, how many invasions
of one country by another have we seen?
That's my starting point.
Now let me increment or decrement it.
And that's not typically the way people do forecasting,
which is why they're not particularly good at forecasting.
They're not Bayesian enough.
Isn't it that betting websites are usually the most accurate ones
when it comes to upcoming elections?
I don't know whether that's a myth that I've seen on the internet
or if that's actually correct.
So I think that the so-called super-forecasters
in Tetlock and Mellers tournaments outperform
prediction markets, but prediction markets outperform just about anything else.
Yeah.
That's right.
Well, because you've got a lot of people who are highly motivated to learn about the
situation.
It's not just that they toss off some opinion.
Yeah, it's unlimited skin in the game.
That's right. And what ultimately counts is not reputation, but are you right or wrong?
And so it's not like plugging the ideology, making your side look good.
The proof of the pudding is in the eating.
Yeah, what about measuring risk and reward for people as they go through their life?
That's another chapter in my book, Rationality. It pertains to what's sometimes called expected utility theory or rational choice theory. One of the less popular theories; people often
blame it for the whole Homo economicus, rational-man idea. It's basically the idea that,
when you're faced with a risky
option, you should multiply the probability of each outcome by its cost or benefit, its
reward or its penalty, add them up, and choose the option with the highest sum, the highest expected utility, that is, probability times payoff.
Now, there's a literature that
goes back 50 or 60 years;
several Nobel prizes have gone to
economists and game theorists who've shown cases where people
don't seem to abide by the axioms of expected utility
theory.
But by and large, it isn't a bad guideline to start with.
I mean, obviously, we often live under not risk where we know the probabilities, but
uncertainty where we don't even know the probabilities, as Donald Rumsfeld famously
put it, unknown unknowns, as opposed to known unknowns.
But still, if you think through,
what is the chance that something will happen,
how good or bad will it be?
It probably would lead to some wisdom.
Like if I step on the accelerator to get home faster,
because I really don't want to be late for dinner
or miss the first
minute of a show.
And what's the benefit of that? Now, I am taking a slightly greater chance of getting killed in a car
accident. How much value do I place on my life? If you started to think that way,
or should I wear a bicycle helmet, should I fasten my seat belt, you'd probably make
a bunch of wiser decisions, or even more mundane ones, like: I just bought an
appliance, should I also buy the additional extended warranty?
Now that costs typically about 25% of the price of the product itself. The salespeople will
push it aggressively,
and the reason is that the expected utility calculation works out in favor of the store and not
the customer.
Namely, when you buy something, unless one out of four of those products is going to
break, paying a quarter of the price for a warranty does not make any sense. You'd be much better off just occasionally absorbing the price of the repair or replacement
and you're going to be ahead.
I mean, it really does not make sense to buy a life insurance policy for your toaster.
I was going to say, every insurance company would be bankrupt if that wasn't the case.
Well, the thing is that insurance makes sense for catastrophic losses that you can't
recover from or replace:
your house, for most people your car, the livelihood of a breadwinner.
But for things where you really could replace it, even though you'd be really annoyed if
you had to buy a new toaster the month after
you bought one, you probably could afford it and it doesn't make sense to keep forking
over money for every appliance that you buy. So self-insurance makes a lot more sense.
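(To put numbers on the warranty example, here's a minimal expected-cost sketch; the appliance price and the failure probability are made-up assumptions, while the 25% warranty price and the one-in-four break-even threshold come from the conversation:)

```python
# Expected-cost comparison: buy the extended warranty vs. self-insure.
# Price and failure probability are illustrative assumptions.

price = 100.0                 # appliance price (assumed)
warranty_cost = 0.25 * price  # warranty priced at about 25% of the product
p_break = 0.05                # assumed chance the appliance fails in the warranty period

cost_with_warranty = warranty_cost    # a replacement would be covered
cost_self_insured = p_break * price   # occasionally absorb a replacement yourself

print(f"warranty:    {cost_with_warranty:.2f}")   # 25.00
print(f"self-insure: {cost_self_insured:.2f}")    # 5.00
# Unless p_break exceeds about 0.25 (one in four units failing),
# the warranty costs more in expectation than just replacing the item.
```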
Do you think about balancing rationality with a more sort of intuitive natural flow just
generally to life? A lot of the time, when I find myself thinking a lot about
the mental models and the biases that I use as I move through my day, I often find myself
getting in my own head and I don't find as much ease or flow or intuition in the way that I go
about things. Do you think that there's a tension there pulling between those two?
There is. I mean, there is a phenomenon of, as we call it, overthinking. But, you know,
about 20 years ago, as a result of a bestseller by Malcolm Gladwell called Blink,
there was the popular idea, people in business love it, that you should go with your gut, that your
first intuition is going to be wiser than overthinking. It's probably not true in general.
There are rare cases in which the gut feeling is more accurate than reasoning it out,
but by and large, you're probably better off thinking twice.
Granted, a lot of times you just don't have the information. No one has stated the probabilities.
It's not like buying a lottery ticket where you can look up the odds, or casino gambling.
And so we have no choice but to make intuitive guesses.
But in general, not acting on impulse
is probably the wise philosophy of life.
The problem that we have is that we're not privy
to the codes behind whatever the odds
are of whatever it is that we're considering, right?
And we can kid ourselves into believing we know, whether it's for COVID masking or for the prostate
cancer rate for men of your age in your area or of your particular genetic heritage.
There are some things where you can, but for most of the decisions that we make on a daily basis, about whether to
stay with a partner or leave, about whether to go to a new city or not, where would you
even begin to try and do that?
And I suppose that this is where those more messy and emotional decisions are where people
are hoping that rationality will give them a lifeline out of it.
And then ultimately, a lot of the time end up coming back to something that was quite intuitive
in the first place in any case.
Sometimes, yes, and indeed it is true that often those critical probabilities, no
one knows and you can't find out, life is uncertain.
But there is a bit of advice that I'm going to credit to my colleague Daniel Gilbert
in the psychology department at Harvard, which is that when people try to imagine how
they will feel in the
future, they are often not very good at it, that we probably give too much credence to
our own powers of imagination, and that often you're better off looking for
someone else who has faced that decision, and seeing how it turned out for them.
Because then you're not relying on imagination. Now, granted, no two
people are interchangeable.
So maybe their situation is different.
Maybe the world has changed, you know, that was then, this is now.
Still, probably a lot of the time you're better off trying to, as we might put it, you
know, kind of gather data, real-world data, for how it turned out.
And it's particularly poignant for me
because he told me this many years ago
over dinner, when I was at MIT at the time.
I had a job offer from Harvard,
I was agonizing over whether to take it or not.
We had dinner and granted he had an interest in the answer
because he was recruiting me to be his colleague at Harvard.
But he said, you know, most people, when faced with an agonizing decision,
rely too much on their own power of imagination, and you're better off finding out how it really did turn out for someone who's faced with that decision,
who went one way or another.
And sure enough, I was in an unusual position perhaps,
in that two of my colleagues, then at MIT,
had made that exact jump.
They had switched to Harvard.
They had been poached.
And I asked them, do you regret your decision
or did it turn out well?
And they both said it turned out well.
And that decided me.
So I decided to accept the offer, and I have been happy ever since.
I'm pretty sure that there's some evidence that shows people who make changes in their
life on average are happier, simply from making a change.
If you have a choice between staying where you are and making an alteration, that on average
people tend to be more happy with the new. I mean, to the extent that that's true, it would say that I didn't necessarily choose wisely,
other than deciding to make the move, but I'm perfectly prepared to believe that.
Not least because once you have made the choice, you tend, because of cognitive dissonance,
that is, cognitive dissonance reduction, rationalization, to justify it.
We don't want to look like idiots for having made
the wrong choice, so we do tend to find, after the fact,
what made it the best choice and make the best of it.
Not everyone, there are people who are constantly
blaming their misfortune on some regret.
But that is interesting. I was not aware of that.
Yeah, I think a lot about people that say,
it was meant to be, that use that phrase.
They go through perhaps some sort of hardship
and then come out the other side.
And they say, well, it was meant to be
because look at the situation that I got myself into.
And it always feels to me like that person's completely
destroyed their own input into the good things that they made out of a tough situation. Let's say that you've got
in some accident or some sort of injury and then you end up on the other side of that
finding a calling in life that really speaks to you. And I think well, saying it was meant
to be takes away the agency that you had from overcoming what was a pretty shitty situation.
Like you did this. It wasn't meant to be.
You made the best of a bad environment.
Well, and it's even more pernicious when it's done in advance.
You say, well, it's fated.
Either I'm going to get COVID or I'm not.
There's nothing I can do about it.
I'm going to get lung cancer or not. And so you actually avoid making reasoned decisions because you're fatalistic.
And people who believe in fate tend to be, certainly, worse predictors.
One of the ingredients of successful prediction is believing in contingency, that things could have turned out differently, that
things are not fated in the stars.
And I'm not sure if this has specifically been looked at, but I'd be willing to bet that
people who don't believe in some sort of predestination, who actually attribute some sort of agency
to themselves, probably have better outcomes in life. That is, they're less likely to get sick.
I think the goal is to have sufficient thinking that you can make your decisions appropriately,
but not so much that it slows you down, because I definitely have some friends, and I've
worked with some people as well, who are overthinkers that make me look like I'm the most rash
playboy in town, I guess.
Yes, no, and that is an important point.
It's sometimes called in the literature bounded rationality, from Herbert Simon.
The idea that reasoning itself has costs, namely time, information that you have to gather, computational resources, memory, and data and so on.
And the benefit of choosing the optimal decision always has to be traded off against the costs
of the actual reasoning. You can't spend the rest of your life gathering data because then
your life is gone. You've got to, at some point, act on the information
you have, knowing that you're taking a risk, but still weighing the cost of inaction.
Sometimes he who hesitates is lost. It doesn't guarantee you the best
outcome, but it means that acting on imperfect information is sometimes essential, unavoidable.
I think it was, is it Seth Stephens-Davidowitz that wrote Algorithms to Live By?
Are you familiar with that?
I'm familiar with it, it's plausible, I forget whether that was his exact title.
He did write a book for which I wrote a foreword, called Everybody Lies.
Oh, no, it wasn't that one.
I think Algorithms to Live By was actually a guy called
Brian Christian.
Brian Christian, maybe.
Yeah.
Anyway, yes, I think so.
Algorithms to Live By.
Algorithms to Live By has a really funny way of how you should pick your partner.
So it's averaged out that over 100 dates,
you should go on, I think it's 30 or 33 dates,
and then
decide to pick the next person that is the best out
of the people that you've seen,
and that on average is going to be the best person
that you're going to find.
Well, indeed, that speaks to an old finding,
which I think I originally learned
in the context of you're driving down the highway
and you want to figure out
which restaurant to stop at.
And you don't know whether there'll be a really, really good
restaurant just around the bend.
On the other hand, you're getting hungry.
And so the mathematics is the same.
Namely, it is a theorem.
I'm not sure why it comes out this way.
But yeah, sample about a third of the expected number and then choose the first one that's better than the
best of that third that you've sampled so far. So it applies in that case as well.
Yeah, again, that's a really good example because it's trading off the costs of waiting indefinitely.
Namely, you don't want to, you know, marry your perfect
match at the age of 80 because it's taken that long to wait for them.
On the other hand, you don't want to propose marriage to the first good-looking person
you meet, either.
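(For the curious, a minimal simulation sketch of the stopping rule being discussed, in its standard form where you pick the first candidate who beats the best of the initial sample; the 100-date framing and the roughly-one-third cutoff are taken from the conversation:)

```python
import random

# Optimal stopping ("secretary problem"): look at the first k candidates
# without committing, then pick the first later candidate who beats the
# best of that initial sample.

def simulate(n=100, sample_fraction=1/3, trials=100_000):
    """Return how often the rule lands on the single best candidate."""
    k = int(n * sample_fraction)
    wins = 0
    for _ in range(trials):
        candidates = [random.random() for _ in range(n)]
        benchmark = max(candidates[:k])
        # Settle for the last candidate if nobody beats the benchmark.
        pick = next((c for c in candidates[k:] if c > benchmark), candidates[-1])
        wins += pick == max(candidates)
    return wins / trials

print(simulate())  # roughly 0.36-0.37 with a cutoff of about a third of 100 dates
```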
What about conspiracies?
They seem particularly good at subverting rationality.
Indeed.
And for a number of reasons, one of them is that they fall into a category of
belief that is peculiarly unfalsifiable. They are memes in the original Richard Dawkins
sense of ideas that are adapted to being spread by their very nature. And the part of
the conspiracy theory that says that the lack of evidence for the theory is proof of what a
diabolical and genius conspiracy it is, makes them uniquely
invulnerable to refutation. Kind of like other beliefs, like God works in mysterious ways.
Or if you
deny that you're racist, that proves that you're a racist.
So there are some ideas that are contagious simply because,
by their very nature, they are designed to evade
our cognitive immune system.
Also, the conspiracy theories tend to be moralistic.
They are excellent examples of my side bias, in that there's usually some villain, and
the villain is often an identifiable opponent, such as the theory that Hillary Clinton ran
a child sex ring out of the basement of a Washington pizzeria.
Now, needless to say, it was not members of the Democratic Party who believed that conspiracy
theory.
It was people who hated Hillary.
And indeed, a belief like that is kind of just another way of saying, boo, Hillary;
the fact that it has propositional content is kind of irrelevant to why people believe
it. And that was one of the major epiphanies that I had in writing Rationality.
And in dealing with these bizarre beliefs, such as that jet contrails are mind-altering
drugs dispersed by a secret government program, or that COVID vaccines are actually
a subterfuge by Bill Gates to inject microchips into our bodies to surveil us. And you ask,
how can people believe these things? And part of the answer is it depends what you mean by
believe. That is, for a lot of people, factual warrant, empirical evidence,
just that's not why you believe things.
When it comes to things that don't impinge
on your day-to-day life, if it's gonna affect
whether your car's gonna run out of petrol
or whether there's gonna be beer in the fridge,
then people are very, very attuned to reality.
They kind of have to be.
But when it comes to belief about things
that you'll never encounter in life, like that
pizzeria or Hillary Clinton, people believe things because it expresses the right values,
the right morals. It identifies villains that you think are evil, it identifies heroes, more
often villains in the case of conspiracy theories. Sometimes it's an identifiable one like Hillary Clinton. Sometimes it's just a general hatred of the establishment, of elites, of institutions.
There's a fairly sizeable minority who just have a need for chaos, as Michael
Bang Petersen put it. That is, they just think the whole system
should burn.
It's so corrupt and evil.
And so any concentrated source of power, governments, corporations, scientists, the
public health establishment, can figure into these conspiracy theories, with the
theory portraying them as uniquely evil and insidious.
That's fascinating. That's such an interesting way to look at it. It's kind of like mental
LARPing in a way. These people do live action roleplay, but bounded within their own minds.
One of the things that I've found really fascinating about conspiratorial thinking, especially
recently, is that over the last two years, faith in institutions has just gone through the
floor. We've continued to see the people that were supposed to know what they were doing,
the people in charge, just put their foot in their mouth on live camera daily, sometimes
over and over and over again. We've seen duplicitousness, purposeful neglect, just
idiocy, play out in front of us. And I think that that has enabled people to
have far more belief in non-typical sources. As for bureaucratic, what would you say,
positions of power speaking truth, we just have no time for it, therefore we're going
to make our own truth here. But increasingly now, I see a lot on
the internet that people say, look, when you're saying conspiracy, why shouldn't I believe in conspiracies?
This was a conspiracy and that was a conspiracy. And the definition of what a conspiracy consists of now
has started to be expanded to include a lot of different things. Conspiracy used to mean something
that was basically totally unbelievable, but now, because the boundary of what is true and what isn't has blurred, and our faith in institutions has been eroded,
really anything is permitted to be a part of a conspiracy.
It's no longer flat earth and beyond.
It's a whole host of things that come back from that, towards stuff that me and you probably
might even have believed.
Yeah, so a couple of things. One is that many conspiracy theorists say, well, look, the CIA
really did mount an invasion at the Bay of Pigs in Cuba in 1961. The CIA really did
help overthrow Árbenz in Guatemala and Mosaddegh in Iran.
But still, there is an unwillingness to consider just how many probabilities would
have to be multiplied for something as outlandish as the claim that the moon landings were faked
or that jet contrails are tranquilizing drugs. How many people would
have to stay silent, how many people would have to not screw up, just how many pieces would
have to fall into place. So in that quantitative thinking about conspiracies, one kind of conspiracy
can be very different from another. And in fact, it's even stretching it to call,
say, the Bay of Pigs a conspiracy.
There was government secrecy, as there always is.
But it doesn't mean that anything can happen.
And it's true that trust in institutions has gone down,
although it's important to keep in mind
that probably the default is that people
don't trust institutions.
There was kind of a peak in the 1960s where trust in institutions reached its high-water mark.
It's been sinking ever since.
It isn't helped when the institutions themselves don't take steps to safeguard their integrity
and objectivity.
When you have experts that make
pronouncements as if they were oracles, they don't kind of show their work, that
is, explain how they arrived at their recommendations, but just, you know,
trust us, we're scientists. That just, as you say, that just sets them up
for failure because no one is infallible. Even our best experts are going to
make mistakes. That doesn't discredit them.
That's just a reflection of the fact
that humans are not oracles with a pipeline to the truth.
That should be made clearer,
that our starting position in any new phenomenon
is ignorance, SARS-CoV-2 popped up,
and no one knew anything.
And that should have been clear
and should always have been:
the public health instructions should be, based
on the evidence that we have so far,
This is what we recommend for the time being.
That at least would have helped.
The other is, many of our institutions
are flagrantly politicizing themselves.
They're just advertising.
We are a branch of the political left.
They use the vocabulary, the cant words, the cliches,
and people who aren't part of that,
who haven't identified themselves as part of that branding,
that political coalition,
are just gonna say, well, this is just
another bunch of woke academics or journalists.
And they've kind of set themselves up to be rejected under the principle that applies
to all of us, that we tend to be more receptive to people from our own coalition.
We made it. Steven Pinker, ladies and gentlemen.
Steven, what are you working on next?
What can people expect, whether this year or next year?
Well, I have for a number of years been working on the psychology of common knowledge
in the technical sense of, I know something, you know it, I know that you know it, you know
that I know it, I know that you know that I know that you know that I know that I know
it, ad infinitum.
And I think, even though that sounds like it would just make anyone dizzy to think
through, and it does, we have an intuitive sense of common knowledge in the sense that something is kind
of out there or public or you can't take it back.
And I've been exploring how that enters into a range of psychological and economic and
political phenomena.
And that will be the topic of my next book, The Psychology of Common Knowledge.
Pretty cool, I like it. Cheers Steven, thank you very much.
Thanks Chris, thanks for having me on.